I wonder if it is possible to combine this renderer with the ATi/Nouveau renderers in a sort of SLI setup for a performance boost?
You mean use LLVM for the FLOSS drivers? Isn't this already done?
Originally Posted by monkeynut
Or do you mean use LLVMpipe instead of Mesa softpipe to handle unsupported functions on older GPUs?
I think monkeynut means dividing the rendering work between the CPU (with LLVMpipe) and the GPU (using the regular drivers), like CrossFire or SLI but with one real and one "fake" GPU.
Quick answer is "yes in principle", but because of the overhead of splitting and recombining the rendering work, it's usually only worth doing when the two renderers are fairly close in performance. In most cases the GPU would be so much faster than the CPU renderer that the overhead of coordinating multiple renderers would match or outweigh any gain from the extra rendering power.
That doesn't make LLVMpipe any less cool, though.
That's an oversimplification. CPUs have larger caches than GPUs, so they typically don't need as much memory bandwidth. OTOH, GPUs do have more number-crunching power...
Originally Posted by Qaridarium
Personally, I would be fine with a software rasterizer, if it could drive my normal desktop use. It should also be much easier to get the bugs out of it, since there isn't a multitude of incompatible hardware models to test it against.
Oh, and regarding dynamic load balancing... Couldn't that in principle be used to power down most of the GPU when not needed? Something like Optimus, but CPU+GPU instead of IGP+GPU.
But 3D rendering memory access is typically horribly non-localised, which is one reason why GPUs don't bother with large caches: adding more processing capacity benefits them more than adding megabytes of cache.
Originally Posted by Otus
But what if the CPU did just a fraction of the work in an 'SLI' configuration and synced afterwards? On each sync, a function could compare the difference in time spent rendering and adjust the load dynamically.
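That feedback loop is simple enough to sketch. The following is a hypothetical Python illustration, not real driver code: `rebalance`, the split fraction, the step size, and the simulated timings are all made up for the example. The idea is just that after each frame you shift work toward whichever renderer finished first, until both take about the same time.

```python
def rebalance(cpu_share, cpu_time, gpu_time, step=0.05, tol=0.1):
    """Nudge the CPU's share of the frame toward equal render times."""
    if abs(cpu_time - gpu_time) < tol:
        return cpu_share               # close enough: don't oscillate around the optimum
    if cpu_time > gpu_time:            # CPU was the bottleneck: give it less work
        cpu_share -= step
    else:                              # GPU was the bottleneck: give it more
        cpu_share += step
    return min(max(cpu_share, 0.0), 1.0)  # keep the split a valid fraction

# Simulated frame loop: pretend the GPU renders ~4x faster per pixel,
# so equal finish times mean the CPU should end up with ~1/5 of the frame.
cpu_share = 0.5
for _ in range(40):
    cpu_time = cpu_share * 4.0            # fake per-frame timings, not real measurements
    gpu_time = (1.0 - cpu_share) * 1.0
    cpu_share = rebalance(cpu_share, cpu_time, gpu_time)

print(round(cpu_share, 2))  # settles near 0.2
```

With a real GPU many times faster than LLVMpipe, the loop would converge on a tiny CPU share, which is exactly the case where the sync overhead stops being worth it.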