I think Monkeynut means dividing the rendering work between the CPU (with LLVMpipe) and the GPU (using the regular drivers), like CrossFire or SLI but with one real and one "fake" GPU.
The quick answer is "yes, in principle", but because of the overhead of splitting and recombining the rendering work, it's usually only worth doing if the two renderers are fairly close in performance. In most cases the GPU would be far faster than the CPU renderer, so the overhead of coordinating the two would probably match or outweigh the benefit of the extra performance.
Only something like a 48-core Opteron 6000 setup (~155 GB/s memory bandwidth) could keep up with a GPU like the HD 5870 (160 GB/s).
A normal PC only has 5-15 GB/s, which is very slow compared to the HD 5870's 160 GB/s.
This benchmark just shows that divergence.
That's an oversimplification. CPUs have larger caches than GPUs, so they typically don't need as much memory bandwidth. OTOH, GPUs have far more raw number-crunching power...
Personally, I would be fine with a software rasterizer if it could drive my normal desktop use. It should also be much easier to get the bugs out of it, since there isn't a multitude of incompatible hardware models to test against.
> That's an oversimplification. CPUs have larger caches than GPUs, so they won't typically need as much memory bandwidth.
But memory access in 3D rendering is typically horribly non-localised, which is one reason GPUs don't bother with large caches: adding more processing capacity benefits them more than adding megabytes of cache.
But what if the CPU did just a fraction of the work in an 'SLI'-style configuration and synced afterwards? At each sync, compare the time each renderer spent and adjust the load dynamically.
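To make the idea concrete, here's a minimal sketch of what such a dynamic rebalancer might look like. The function name, the damping factor, and the clamping limits are all my own assumptions, not anything from an existing renderer: after each frame, it compares how long the CPU and GPU shares took and nudges the split toward the renderer that finished faster.

```python
# Hypothetical sketch of the dynamic load-balancing idea above: split the
# frame into a CPU share and a GPU share, time both, and rebalance at sync.

def rebalance(cpu_fraction, cpu_time, gpu_time, damping=0.5):
    """Return a new CPU work fraction based on the last frame's timings.

    The ideal split gives each renderer work proportional to its speed:
    if both finished in the same time, the split was perfect. We only move
    part-way toward the ideal (damping < 1) to avoid oscillation.
    """
    cpu_rate = cpu_fraction / cpu_time          # work units/sec on the CPU
    gpu_rate = (1.0 - cpu_fraction) / gpu_time  # work units/sec on the GPU
    ideal = cpu_rate / (cpu_rate + gpu_rate)    # split that equalises times
    new_fraction = cpu_fraction + damping * (ideal - cpu_fraction)
    # Clamp so neither renderer is ever starved completely.
    return min(max(new_fraction, 0.05), 0.95)

# Example: the CPU got 50% of the frame but took 40 ms vs the GPU's 10 ms,
# so the balancer shifts work toward the GPU on the next frame.
f = rebalance(0.5, cpu_time=0.040, gpu_time=0.010)
```

In practice the hard part isn't this arithmetic but the sync itself: recombining the two half-frames stalls both renderers, which is exactly the overhead mentioned earlier that tends to eat the gains when the CPU is much slower.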