Yeah, I get it. LLVMpipe will run like molasses on older CPUs for 3D and gaming (which is what all the Phoronix benchmarks on it measure). But what about as a simple 2D driver for something like GNOME Classic and video playback?
It would be nice to have a modern driver that supported all the latest OpenGL features, even if I couldn't run them very quickly. As it is now, old video cards just use basic framebuffer drivers that don't support anything except changing the resolution; the CPU still draws everything.
So which would be faster in that scenario: an LLVMpipe-based driver for basic 2D rendering, the plain old VESA driver, or a framebuffer driver? You don't necessarily have to give up compositing on older hardware, either. XFCE's window manager is surprisingly fast doing software compositing for shadows and transparency. It doesn't do a lot, but the effects it does have don't kill my old laptop.
It's not so much what the effects are, but the way they're drawn.
After all, it's trivial to make shadows in a static image render fine on a 386: just make some pixels darker ;-)
The reason "2D" graphics such as GTK/Qt rasterization are so cheap on the CPU is that they take far less computation than executing pixel shaders to calculate dynamic shadows, etc.
I've already done such tests... click the LLVMpipe link in the news item to see all of the LLVMpipe Phoronix articles; there should be at least two or three pertaining to multi-core scaling, including some Hyper-Threading tests.
Cool, thanks for the direction. I'm not totally sure about these things, but the Gulftown LLVMpipe scaling article was dated Nov 2010. Could there be any improvements since then with the LLVM 3.0 release? Also, it would be cool to see if/how the AVX/AVX2 instructions improve the LLVMpipe driver later this year when LLVM 3.1 is released. Many thanks!