I find it extremely fascinating that Intel developers consider glxgears a bad benchmark. Given that the same glxgears under Windows Vista yields 1200 frames per second on an Intel GMA 950, maybe "clearing and swapping" is something Nvidia is pretty damn fast at, and something Intel used to be fast at too.
Has anyone stepped forward to challenge the assertions, assumptions and statements made by most of these guys?
I'm speculating here, but I've yet to see that much of a difference in the OpenGL experience. Is anything like GEM being used in Vista, or in XP for that matter? I'm not sure what is happening within the realm of Vista/Win32, but something is strange.
Simple test: use display lists, a particle engine, and some texture mapping while moving primitives around, some in display lists and some outside. With the old TTM/XAA drivers you'd see a snappy, crisp framerate with hardly any slowdown, even when the particle engine kicked in. I don't see the validity of the points brought up in the URL. I've been hacking OpenGL code for years, starting on a 3DFX Voodoo2, then a TNT2 Ultra, and a couple of Radeons. OpenGL is supposed to hide most of this mess.
Now compile the latest stack of components, or just use Ubuntu 9.10 rc6 plus upgrades, and watch it get saturated in almost no time. Then there's the quality of blending (GL_BLEND): I get marbles compared to the angelic stars I used to get. The stars are preferred. Simple sprites.