Mesa 8.0 LLVMpipe: Fine For Desktop, Not For Gaming
Phoronix: Mesa 8.0 LLVMpipe: Fine For Desktop, Not For Gaming
Continuing the coverage of the soon-to-be-released Mesa 8.0, here are some benchmarks of the CPU-based LLVMpipe software driver for Gallium3D...
Thanks for your coverage of llvmpipe, Michael.
I am wondering what the performance is on a Pentium 4 or Pentium M, since that class of processor is more likely to be saddled with a SiS/VIA/Intel i8xx GPU that doesn't have hardware 3D accel.
Specifically, I'm wondering if they can run gnome-shell, since the gnome devs are going to eliminate fallback mode.
I wouldn't recommend it!!
Originally Posted by DanL
LLVMPIPE on a single core CPU... BAD IDEA... unless you like to watch paint dry.
If you had a dual core, it would be a very different story.
Actually, I too wonder about that. I have old 855gm hardware, and while it used to have hardware acceleration, Intel dropped support for it.
So now I'm stuck with the framebuffer (fbdev) or generic vesa driver.
Would llvmpipe be faster on older hardware? Even if it's just 2D.
Do you have any EXPERIENCE (yes, the excessive CAPITALIZATION is sarcastic) with this or are you just extrapolating from the small amount of data available on this?
Originally Posted by Sidicas
LLVM benchmark request
I would like to see how the LLVMpipe software driver scales with cores and turning on/off hyper-threading when using Mesa 8.0. You could use your Core i7 3960X processor.
I've already done such tests... click the LLVMpipe link in the news item to see all of the LLVMpipe Phoronix articles and there should be at least two or three pertaining to multi-core scaling, which does include some HT tests in there.
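For anyone who wants to poke at the scaling question on their own box: Mesa exposes a couple of environment variables for this. `LIBGL_ALWAYS_SOFTWARE` forces the software renderer, and `LP_NUM_THREADS` caps llvmpipe's rasterizer thread pool. Both are real Mesa knobs; the `glxinfo` check below is just one way to confirm which renderer you ended up with.

```shell
# Force Mesa's software (llvmpipe) renderer instead of the hardware driver
export LIBGL_ALWAYS_SOFTWARE=1

# Cap llvmpipe's rasterizer thread pool; unset it to use one thread per core
export LP_NUM_THREADS=2

# Confirm the renderer string, if the GLX utilities are installed
if command -v glxinfo >/dev/null; then
    glxinfo | grep "OpenGL renderer"
fi
```

From there you can run glxgears or any of the Phoronix Test Suite GL tests at different `LP_NUM_THREADS` values and watch how the frame rate scales with the thread count.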
Originally Posted by FourDMusic
With llvmpipe, gnome-shell is giving me 1 fps on a 2007-era top of the line (at the time) laptop with a Core 2 Duo CPU and 4GB of RAM.
Originally Posted by DanL
There's a reason why llvmpipe and LLVM try so hard to support the newer instruction sets: because they give real performance gains on the massively parallel, compute-intensive workloads that 3d rendering demands, whether the rendering is taking place on CPU cores or GPU cores.
Actually, if you look at the CPUs and GPUs coming out in 2012, the differences between them are quickly evaporating. The major difference right now seems to be that CPUs still have fewer than 10 cores, while GPUs have much smaller, less capable cores that, while fully general-purpose, are far more numerous than the cores on most CPUs (except heinously expensive HPC parts). But as far as their general abilities go, the two models are very, very similar.
What we need to do is find a "happy medium" core count that makes the best use of parallelism for graphics, while packing enough power, on-board cache, registers, and streaming instructions into each core to give us great serial performance too. Serial throughput is still what most information systems and server software need, since that software is designed as a long sequential string of operations with maybe 2 to 8 threads.
I'd say that if we could take the serial throughput of a Nehalem-era Core processor and build out the number of cores to, say, 64 or 96, we'd have a single integrated circuit that could adequately perform the tasks of both high-end GPUs and high-end CPUs. And since the instruction set is still x86, your "hardware-accelerated" graphics driver would become llvmpipe, utilizing whatever the latest SSE instruction set revision is.
When you compare the ideal graphics-capable x86 CPU I just described with a very old (2005 or earlier) dual-core or single-core CPU, with about 33% of the serial performance of one Nehalem core and zero scalability, you quickly realize that it's just physically impossible to make these old clunkers perform the kinds of calculations OpenGL needs.
I mean, Michael has run benchmarks using absolutely state-of-the-art CPUs, and we're seeing just tolerable performance from compositing. Do you have any idea how far processors evolved between 2004 and ~2011 (roughly when the CPUs Michael tests now were manufactured)? The technology is proceeding at a frightening pace.
IMHO, llvmpipe should look toward the future, not the past. The folks who want state-of-the-art features like compositing working on downright obsolete hardware are hoping for something that will not happen. Upgrade your hardware. I mean, hell, my laptop from 2007 is obsolete by all accounts, and the ONLY way I can get semi-reasonable performance out of gnome-shell is to use the i965 IGP inside, whose modest parallelism is just barely adequate to run a smooth compositing session with a few 2D windows open (web browser, terminal, and IDE seem to work well, but it slows down if too many programs are open).
Now, on the other hand, if you were to manufacture a very small, low power CPU using state of the art instruction sets (SSE 4.2 etc) and a bunch of tiny cores, you could probably get fantastic performance out of llvmpipe. You don't need a TON of horsepower to get good 3d (just look at what Nvidia achieved with Tegra 2 and Tegra 3 on mobile platforms!) -- you just need the tight loops implemented in hardware, and a lot of cores. So you are more than welcome to ask for llvmpipe to be useful on small platforms, just not old platforms.
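Since llvmpipe's usefulness hinges so heavily on which vector instruction sets the JIT can target, it's worth checking what your CPU actually advertises before judging it. On Linux the feature flags can be read straight from /proc/cpuinfo; the grep pipeline below is just one way to pull out the SIMD-related ones.

```shell
# List the SIMD feature flags the kernel reports for this CPU.
# llvmpipe (via LLVM) JIT-compiles shaders using the best sets available,
# so a CPU stuck at plain SSE2 will fare far worse than one with SSE4.x.
grep -m1 '^flags' /proc/cpuinfo \
    | tr ' ' '\n' \
    | grep -Ex 'sse|sse2|pni|ssse3|sse4_1|sse4_2|avx' \
    || echo "no SIMD flags found"
```

(Note that the kernel reports SSE3 under its historical name `pni`.) A ~2004 Pentium 4 or Pentium M will typically show only `sse`/`sse2`, which is exactly the gap being argued about in this thread.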
Yeah, I get it. llvmpipe will run like molasses on older CPUs for 3D and gaming (which is what all the Phoronix benchmarks on it cover). But what about as a simple 2D driver for something like GNOME classic and video playback?
Originally Posted by allquixotic
It would be nice to have a modern driver that supported all the latest opengl features, even if I couldn't use them very quickly. As it is now, old video cards are just using basic frame buffer drivers that don't support anything, except to change the resolution. The CPU still draws everything.
So which would be faster in that scenario? An LLVM based driver for basic 2D rendering, or the plain old VESA driver, or framebuffer driver? You don't necessarily have to have zero compositing for older hardware either. XFCE's window manager is surprisingly fast doing software compositing for shadows and transparency. It doesn't do a lot, but the effects it does have don't kill my old laptop.
In my experience, when you have only one CPU core and it is bogged down, everything slows down. If llvmpipe is pushing the CPU to its limit to achieve sub-20 frames per second on a single-core processor, I can only imagine how horrible the experience would be once you factor in all the other things being added to the CPU's work queue. IMHO, you should only use llvmpipe if you have more than one CPU core, though you might get away with a single-core CPU if it's really fast and efficient.
Originally Posted by DanL
Edit: Of course, for XFCE's (and similar) compositor, you probably don't need to worry as much.