
Thread: LLVMpipe Still Doesn't Work For Linux Gaming

  1. #11
    Join Date
    Oct 2010
    Posts
    90

    Default

    Quote Originally Posted by allquixotic View Post
    LLVMpipe just uses LLVM's optimizing compiler (although how much optimization it does is debatable, considering the poor performance of its generated code at least for x86 binaries)
    Uhm? The x86 output is perfectly decent as long as it's not using OpenMP: slightly slower than the best GCC results, but very seldom anything you'd actually notice in a real-life situation.

  2. #12
    Join Date
    May 2012
    Posts
    4

    Default

    I think these results are actually quite impressive. Sure, it isn't matching the performance of dedicated hardware, but did anyone seriously expect that at this point? The gap between the CPU and GPU is remarkably small considering that the CPU is a fully generic processor.

    I am particularly interested in what the future might hold. The AVX2 instruction set extension has four times the floating-point vector throughput, two times the integer vector throughput, and gather instructions which replace 18 legacy instructions!

    So what are the LLVMpipe developers' thoughts on the future of real-time CPU rendering?

  3. #13
    Join Date
    May 2007
    Posts
    319

    Default

    Quote Originally Posted by smitty3268 View Post
    Not really, no.

    Well, the next major milestone is GL 3 support. It seems like it's pretty close, so hopefully it makes it as part of the Mesa 8.1 release, but no one has actually committed to making sure that happens.


    I think the main thing is just adding new features, like GL3. I'm not sure anyone has thought about the best way to bring OpenCL support to it yet.

    There was that one project to add a kernel side to the driver, which would let it avoid making a bunch of memory copies that it currently has to do. I'm not sure what the status of that was, if it's in with some of the DMA-BUF work or what. Beyond that, I don't think anyone is particularly focused on the performance of the driver. Just adding new features seems to be what most people are looking at.
    There isn't really an llvmpipe roadmap; nobody is really pushing features on it at the moment. VMware seem to be using it, but no new major speedups have shown up.

    I was doing GL3 support as a spare-time project, but my spare time decided it would rather do something else, so I might get back to it eventually.

    The kernel stuff was only for making texture-from-pixmap faster so gnome-shell can go faster; it doesn't make llvmpipe itself go faster at all.

    Dave.

  4. #14
    Join Date
    Apr 2011
    Location
    Slovakia
    Posts
    77

    Default

    Quote Originally Posted by sandy8925 View Post
    Are multiple threads being used or only a single thread?
    This would interest me too. I always read here how LLVMpipe is better than the old softpipe at using all cores and new CPU instructions, but I never see that in real usage. If I kill direct rendering (e.g. by chmod -w /dev/dri/card*), there should be software rendering. I only compile LLVMpipe and r600 (both Gallium3D), so the software rendering should be using LLVMpipe. But no 3D program shows up in top as using multiple threads across all cores (and the programs render slowly).

    What am I doing wrong? Some bad arguments to Mesa build? Or is this only starting at some specific LLVM version?
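    For what it's worth, a less destructive way to force software rendering than chmod'ing the DRI node is Mesa's LIBGL_ALWAYS_SOFTWARE variable, which only affects the one process. A sketch (glxinfo and celestia are just example clients here):

    ```shell
    # Force Mesa to pick a software renderer for this process only,
    # instead of removing write access to /dev/dri/card* system-wide
    LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep "OpenGL renderer"

    # Then check the app's threads: -H makes top list threads individually
    top -H -p "$(pidof celestia)"
    ```
    
    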

  5. #15
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,130

    Default

    glxinfo will say which driver you have, and you can force softpipe with the GALLIUM_DRIVER=softpipe var IIRC.

    I seem to recall there was a var to limit llvmpipe's threads too, but not sure on that.
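    For reference, the two variables mentioned above would be used roughly like this (a sketch; LP_NUM_THREADS is the llvmpipe thread-limit variable as far as I recall, and glxgears is just a stand-in for any GL client):

    ```shell
    # Print the active driver; the renderer string names it
    # (e.g. "Gallium 0.4 on llvmpipe" vs "... on softpipe")
    glxinfo | grep "OpenGL renderer"

    # Force the softpipe driver for a single run
    GALLIUM_DRIVER=softpipe glxgears

    # Cap the number of llvmpipe rasterizer threads, if memory serves
    LP_NUM_THREADS=2 glxgears
    ```
    
    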

  6. #16
    Join Date
    Apr 2011
    Location
    Slovakia
    Posts
    77

    Default

    Quote Originally Posted by curaga View Post
    glxinfo will say which driver you have, and you can force softpipe with the GALLIUM_DRIVER=softpipe var IIRC.
    OK, so I have this:
    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300)
    OpenGL version string: 2.1 Mesa 8.1-devel
    OpenGL shading language version string: 1.20

    But I have tested it just now on Celestia, and it now has 5 threads (on a 4-core CPU), which I hadn't noticed before. They are not using the cores at 100%, but about 25% each, and the program is slow. Still, I hadn't seen this threading before, so there may be progress. Maybe it is due to me upgrading to X.org server 1.12 and LLVM 3.0 recently.
