Can anyone explain what the difference between LLVMpipe and Gallivm is? Both seem to be projects to use LLVM, but why do they both exist? As I understand it, LLVMpipe is based on the same design as a normal softpipe CPU driver, does Gallivm take a different approach somehow?
I buy a new computer every six years. When UT3 was released and I was still dual-booting Windows, my then-six-year-old Athlon XP 2800+, 1GB RAM and 9800 Pro could still run it at medium settings at 1024x768 @ 24-30fps.
Now I have a Phenom 9950 X4, 8GB RAM and a 5770, and I don't want to upgrade every time I hit a 'sorry, unsupported' error, simply because my hardware can't do some OpenGL version introduced in the next four years, while my rig could still somewhat cope with a semi-hardware fallback.
Remember that my old 9800 pro was OpenGL 1.5! :/
PS: Almost 3yo already. Damn time flies...
Modern hardware has evolved faster than your typical software/games (a mid-range card can play everything at 1920x1200), so unless you are using specialized programs you probably won't face performance issues during your system's lifetime. Add an SSD next year or so and you are golden! (Not kidding, a fast SSD will make your run-of-the-mill netbook feel like a supercomputer.)
Gallivm is part of LLVMpipe -- think of LLVM (C++) wrapped up for use in a C driver -- but it is also likely to be used "above the line" for *generating* TGSI (as part of the OpenCL stack, and possibly as part of a future GLSL compiler).
http://cgit.freedesktop.org/mesa/mes...lvmpipe/README
"The driver-independent parts of the LLVM / Gallium code are found in src/gallium/auxiliary/gallivm/."
Gallivm is in the src/gallium/auxiliary folder, along with other "helper" routines like draw.
Disclaimer: I found this on the internet, so while it was probably true at one time, it may no longer be true today.
I don't trust an SSD with my data yet. Better to let this tech mature a little more, and even then I'd have to do some homework. Of course I could put /home on my HDD, but I currently don't feel that file operations are too slow for my liking.
And upgrading midway through the lifetime of my rig is a little pointless; I paid money for a computer that I configured to have a lifetime of six years.
'til they reach their write limit, or if they have flaky firmware, or a flaky controller and don't save data in the first place, or...
The nice thing is that SMART data from SSDs is actually accurate, so you can predict failures well in advance. Much nicer than spinning disks, which can fail catastrophically without any prior indication. Google's large-scale hard-disk study concluded that, for spinning disks, "models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures."
After a year of hard use (1.65TB of writes, virtual machines, browser caches, etc.) my Intel SSD (X25-M G2) has a wearout indicator of 98/100 and a single reallocated sector. The drive is considered worn out when the wearout indicator reaches 10/100, so at the current rate it will last >10 years. Not bad!
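For what it's worth, the back-of-the-envelope math works out even better than ">10 years". A minimal sketch, assuming the wear rate stays linear (it won't exactly, but it's a decent first-order estimate) and using the numbers from my post:

```python
# Rough SSD lifetime projection from SMART wearout data.
# Numbers are from the post above: indicator went from 100 to 98
# in one year of use; the drive is considered worn out at 10.
start, current, failure_threshold = 100, 98, 10
years_elapsed = 1.0

rate = (start - current) / years_elapsed        # wear points consumed per year
years_left = (current - failure_threshold) / rate

print(years_left)  # 44.0
```

So a linear extrapolation gives roughly 44 more years before the indicator hits the failure threshold; "over 10 years" is the conservative version of that.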
(I'm gaining levels in thread derailment, btw.)