Yeah, I've heard no complaints on the Intel drives.
Thread derailment? That's par for the course on Phoronix
AFAIK when SSDs have errors it's in the writing process, while HDDs tend to fail during reads, so with an HDD you only become aware of errors when you try to read the data you thought was safe. At least in that regard SSDs seem safer. I don't know about real-life expectancy, though.
There are still too many problems with SSDs. For example, when a bad HDD write happens, it fails while transmitting to the HDD, and the file save dialog tells you it failed.
Due to the horrible nature of SSDs, the OS successfully transmits the file to the SSD, but then the SSD controller can write it unsuccessfully to its own flash without the user knowing until it's already too late.
On paper SSDs rule, but in practice they still don't. I'm sure they'll be ready by the time I'm in the market for internal computer storage again, so...
Excuse me for the weird wrong characters and double posts, but my phone is almost dead; broken touch screen and lots of weird Java errors everywhere too. I'll get a new one tomorrow... <_<'
It reminds me of the time I ran Quake 1 on a NexGen CPU (a 586 without an FPU, recognised by apps as a 386). I did some workarounds, modifying a few different FPU emulators for DOS plus some other quirks to make it advertise itself to DOS as a 486DX, and I was very, very happy when I finally saw it running. It was less than 1 FPS, but it worked.
Originally Posted by V!NCENT
Edit: It was even ~5-20 seconds per frame on a 100MHz CPU with software FPU emulation.
Originally Posted by xeros
My 486DX2 66MHz only ran it at something like 15FPS at minimum resolution.
Originally Posted by xeros
How do I enable llvmpipe? I have Kubuntu 10.04 (Lucid) with xorg-edgers packages. My machine has an integrated ATI Xpress X1250 card. XBMC runs very slowly on it because the r300-dri driver lacks a good GLSL implementation. I want to try out llvmpipe to see if it's better than r300-dri, but I don't know how to enable it.
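For what it's worth, a rough sketch of how you can force Mesa onto the software path, assuming your Mesa build actually includes llvmpipe (not every distro package does, and xorg-edgers packaging may differ):

```shell
# Force Mesa to ignore the hardware driver and render in software.
export LIBGL_ALWAYS_SOFTWARE=1
# On Mesa builds with LLVM support, pick llvmpipe over the plain softpipe.
export GALLIUM_DRIVER=llvmpipe

# Verify before launching anything heavy; the renderer string should now
# mention llvmpipe/software instead of the R300 driver.
glxinfo | grep "OpenGL renderer"

# Then start XBMC from this same shell so it inherits the variables.
xbmc
```

These are just environment variables, so they only affect apps started from that shell; unset them (or open a new terminal) to get the hardware driver back.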
Have you tried r300g (the Gallium3D HW-accelerated driver) on your hardware? AFAIK that should give you the best of both worlds -- LLVM JIT-compiled vertex shaders running on the CPU plus HW-accelerated fragment shaders running on the GPU.
my glxinfo shows:
Originally Posted by bridgman
OpenGL vendor string: X.Org R300 Project
OpenGL renderer string: Gallium 0.4 on RS690
OpenGL version string: 2.1 Mesa 7.10-devel
OpenGL shading language version string: 1.20
Does that mean I'm running r300g? If so, it's still very slow for XBMC; I got only 15fps. Not much improvement since 5 months ago:
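For anyone else reading the renderer string: a quick sketch of how it maps to drivers, using the sample line from the glxinfo output above (the exact wording can vary between Mesa versions):

```shell
# Sample renderer line copied from the glxinfo output above.
renderer="OpenGL renderer string: Gallium 0.4 on RS690"

case "$renderer" in
  *llvmpipe*) echo "llvmpipe (pure software rendering)" ;;
  *Gallium*)  echo "a Gallium hardware driver (r300g on this RS690 chipset)" ;;
  *)          echo "a classic (non-Gallium) Mesa driver" ;;
esac
```

So "Gallium 0.4 on RS690" does indicate r300g driving the RS690; if llvmpipe were active, the string would name llvmpipe instead of the chipset.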