Google Brings GLX_EXT_texture_from_pixmap To Software
Phoronix: Google Brings GLX_EXT_texture_from_pixmap To Software
Here's some very exciting news out of the Google Chromium OS team, as part of the upstream work they continue to do on Mesa... They have enabled GLX_EXT_texture_from_pixmap in the software drivers! This means it may now be possible to run compositing window managers nicely on the Gallium3D software drivers like LLVMpipe and Softpipe, on your CPU, in case your graphics processor doesn't have hardware acceleration available...
Great! Now, we have just one more thing to bog down our CPUs with...
First off, this is nonsense. If you have a supported GPU, you won't even see this code path; this extension would be executed in hardware like the rest of OpenGL.
If you don't have a supported GPU (i.e. one of the mainstream Intel/Nvidia/ATI GPUs), then there are plenty of viable use cases for a software renderer. Even if you do have a supported GPU, it can be useful.
If your driver is buggy and you're a developer, you can compare the behavior of a hardware driver like r600g against the software renderer. The software renderer should (in time) mature to be bug-free and behave identically on every platform, because CPU behavior is much more rigorously standardized than GPU behavior: regardless of which CPU the software code was originally tested on, you can be fairly sure it will behave the same on yours.
If you have a decent CPU (or a very small screen, such as on a smartphone) but no working drivers, or no graphics card at all (e.g. servers), you can still learn, test, and use DEs like GNOME 3 and Unity.
The majority of virtualization platforms (especially their "stable/stale" releases) still don't expose hardware-accelerated OpenGL to guests, at least not well enough to run a compositing manager. With software rendering that actually works, you always have that option, even if your virtualization solution of choice happens not to support guest 3D acceleration on Linux.
The computational cost of graphics operations scales with the number of pixels, and the pixel count grows quadratically with linear resolution: double the width and height, and you have four times the pixels to fill. So if you have a very small screen (embedded devices, smartphones), you end up with few enough pixels that you can actually get useful OpenGL performance out of an average CPU. For example, a whole lot of graphics operations are still done on the CPU even in top-of-the-line smartphones, because the pixel counts are so low.
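Some back-of-the-envelope arithmetic makes the scaling concrete (the resolutions below are just illustrative examples, not figures from any benchmark):

```python
# Software-rendering cost scales roughly with pixel count, and pixel
# count grows quadratically with linear resolution. Compare per-frame
# pixel budgets against a small smartphone-class screen.
resolutions = {
    "smartphone (480x320)": (480, 320),
    "laptop (1366x768)": (1366, 768),
    "desktop (1920x1080)": (1920, 1080),
}
base = 480 * 320  # 153,600 pixels
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels, {pixels / base:.1f}x the smartphone workload")
```

A 1080p desktop has 13.5 times the pixels of that small phone screen, which is a big part of why the same CPU that renders a phone UI comfortably struggles to composite a full desktop.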
As the line between CPU and GPU starts to blur, you might see even more parallel/SIMD-type instructions introduced for CPU architectures like EM64T and AMD64. These instructions would be designed to handle graphics-ish operations on a normal CPU, compiled and assembled with a traditional CPU toolchain like gcc. If we start thinking in terms of software rendering in Mesa, and then get gcc to generate the SIMD or parallel instructions from that software rendering code, you could get some really decent graphics performance out of a CPU, while gracefully falling back to older instructions on CPUs that lack the latest extensions.
The difference between "software" and "hardware" acceleration is becoming less and less clear as things like AMD Fusion and Intel Sandy Bridge start to combine CPU and GPU within a single part. The next step (in my opinion) is to unify the instruction set architecture, so that you can generate "ordinary" CPU code, or GPU code as you please, from the same compiler suite, without even really paying attention to the fact that, underneath the machinations of the compiler, it has generated instructions that perform hardware operations comparable to a traditional discrete GPU.
You should be glad that things like LLVMpipe are already in the works, because I have the feeling that, for some hardware, in just a couple of years a "driver" like LLVMpipe will be sufficient to give you GPU-like performance on a traditional x86 architecture.
Isn't that what Intel's Larrabee project was all about? I don't think Larrabee has gone away; it's more that Intel sent it back to the drawing board for some re-architecting after its initial iteration ran into trouble. That doesn't make it a bad idea.