The Pipe-Dream Persists About Pairing LLVMpipe With GPU Hardware/Drivers

Written by Michael Larabel in Mesa on 15 January 2016 at 08:46 AM EST.
More than a few times over the years, various Linux users have come forward to profess their "new" idea for improving open-source Linux GPU driver performance: have the CPU-based LLVMpipe work in tandem with a graphics card's hardware driver.

LLVMpipe, the common software rasterizer fallback these days, runs solely on the CPU. Even on a very powerful x86 CPU, LLVMpipe is very slow, as CPUs and GPUs are very different beasts. The recurring proposal from users is that LLVMpipe paired with a graphics card and its driver could deliver better combined performance. In very simple terms it may sound fine, much like multiple GPUs can render together using AMD CrossFire or NVIDIA SLI, but in reality it would certainly be a mess to pair software rendering on the CPU with hardware rendering on the GPU and actually come out ahead.
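For those curious which renderer their system is actually using, here's a minimal C sketch, assuming the freeglut and Mesa development packages are installed, that prints the active renderer string. Running it with Mesa's LIBGL_ALWAYS_SOFTWARE=1 environment variable forces the LLVMpipe fallback, in which case the renderer string contains "llvmpipe".

    /* Minimal sketch: print which OpenGL renderer Mesa selected.
     * Assumes freeglut and Mesa development headers are installed.
     * Run as: LIBGL_ALWAYS_SOFTWARE=1 ./renderer-query to force LLVMpipe. */
    #include <stdio.h>
    #include <GL/glut.h>

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        /* A context must be current before glGetString() returns anything. */
        glutCreateWindow("renderer-query");
        printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
        return 0;
    }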

The Nouveau and Radeon Gallium3D drivers do not even support CrossFire/SLI rendering as it is, and there is nothing on their roadmap to do so in the near future. Implementing LLVMpipe+GPU rendering would also be a gigantic undertaking for likely minimal to no gain.

This week yet another proposal along the same general concept came up in the FreeDesktop.org bug tracker, with a user hoping the upstream Mesa developers would implement balanced GPU+CPU rendering. In this user's case, he's concerned about the performance of his Ivy Bridge laptop.

Intel's Kenneth Graunke immediately responded, "This has been proposed many times. It's an idea that sounds nice on paper, but ends up being really complicated. Nobody has ever come up with a good plan, as far as I know. What should be done where? How do we avoid having to synchronize the two frequently, killing performance? It's not likely to happen any time soon. It might be a viable academic research project, but it's not a sure bet."
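To make Graunke's synchronization point concrete: even the naive split, where the GPU renders part of the frame while LLVMpipe renders the rest, would need the two halves merged every frame, meaning a full pipeline drain and a pixel readback across the bus each time. Below is a purely hypothetical C sketch of that per-frame sync point; cpu_render_bottom_half() and composite_and_present() are made-up placeholder names, not anything that exists in Mesa.

    /* Hypothetical sketch of the per-frame synchronization a naive CPU+GPU
     * split would force. The point is the stall, not the API details. */
    #include <GL/gl.h>

    void render_split_frame(int width, int height,
                            unsigned char *gpu_half, unsigned char *cpu_half)
    {
        /* The GPU draws the top half of the frame... */
        glViewport(0, height / 2, width, height / 2);
        /* ...GPU draw calls for its share of the scene would go here... */

        /* ...while LLVMpipe rasterizes the bottom half on the CPU:
         * cpu_render_bottom_half(cpu_half, width, height / 2); */

        /* The killer step: merging the halves means draining the GPU
         * pipeline and copying pixels back over the bus, every frame. */
        glFinish();  /* full stall: wait for the GPU to finish everything */
        glReadPixels(0, height / 2, width, height / 2,
                     GL_RGBA, GL_UNSIGNED_BYTE, gpu_half);

        /* composite_and_present(gpu_half, cpu_half); */
    }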

Roland Scheidegger of VMware also commented, "We have seen some though asking if we couldn't combine llvmpipe with less capable gpus to make a driver offering more features, that is executing the stuff the gpu can't do with llvmpipe (but no, we really can't in any meaningful way). This proposal sounds even more ambitious in some ways, I certainly agree we can't make it happen. With Vulkan, it may be the developers choice if multiple gpus are available which one to use for what, so theoretically there might be some way there to make something like that happen, but I've no idea there really (plus, unless you're looking at something like at least 5 year old low-end gpu vs. 8-core current high-end cpu, there'd still be no benefits even if that could be made to work). There is one thing llvmpipe is 'reasonably good' at compared to gpus, which is shader arithmetic (at least for pixel shaders, not running in parallel for vertex ones, with tons of gotchas as we don't currently even optimize empty branches away), but there's just no way to separate that."
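As for the Vulkan angle Scheidegger raises, the forthcoming API does make device selection entirely the application's job: every implementation, including a CPU-based one, shows up as a physical device (reported as VK_PHYSICAL_DEVICE_TYPE_CPU) that the application can enumerate and hand work to, or ignore. A minimal sketch of that enumeration, assuming the Vulkan headers and loader are available:

    /* Minimal sketch of Vulkan's explicit device selection: the application,
     * not the driver, enumerates all physical devices and decides what work,
     * if any, each one gets. Assumes the Vulkan loader and headers. */
    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void)
    {
        VkInstanceCreateInfo info = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance instance;
        if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS)
            return 1;

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, NULL);
        VkPhysicalDevice devices[16];
        if (count > 16)
            count = 16;
        vkEnumeratePhysicalDevices(instance, &count, devices);

        for (uint32_t i = 0; i < count; i++) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(devices[i], &props);
            /* A software implementation would report deviceType as
             * VK_PHYSICAL_DEVICE_TYPE_CPU. */
            printf("device %u: %s (type %d)\n", i, props.deviceName,
                   (int)props.deviceType);
        }

        vkDestroyInstance(instance, NULL);
        return 0;
    }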

The original poster then continued to argue that he still feels it's a good idea, citing his Ivy Bridge glxgears results. Long story short, nothing like this is likely to happen in the foreseeable future, short of perhaps an academic research project.