Michael, don't assume that "slow on the binary drivers" == "slow on the open drivers".
For one, you yourself know from many benchmark runs that VDrift is a sharp counterexample.
Also, I know from personal experience that fglrx gets a lower framerate (and poorer framerate stability) than r600g on Evergreen in the recent-ish MMORPG Star Trek Online, with its DirectX 9.0c renderer that makes heavy use of advanced features, especially shaders. And that's going through Wine.
In fact, the game was downright playable on r600g for me once Jerome's 2D color tiling patches stabilized. On fglrx, certain operations incurred huge latencies inside the graphics driver, so you'd get a "micro-hang" of the entire system (kernel, audio, mouse, everything) several times per second. It'd render a small group of frames (the groups appear to be about 15 frames each), then seize up for tens of milliseconds, then render another group. This basically destroyed the audio, because not even pulseaudio's timer-based scheduling can deal with ALSA randomly getting blocked for ~50ms four times per second.
In fact, both binary drivers seem to like doing that. It's as if they try extra hard to analyze the context of the calls so they can kick into an "overdrive mode" where everything is super-optimized and transfers between the CPU and the card drop off a cliff because the driver has optimized them away. In comparison, r600g doesn't really try to do this (beyond queuing up some batch buffers), so you normally don't see any component of it block for egregious amounts of time. As long as you don't accidentally hit a software rendering path, it's all about getting in quickly, doing your rendering, and getting out (i.e. returning control to the caller in userspace). This may mean a lower "top-end" FPS, but it gives you more consistent FPS and more resilience to odd OpenGL call patterns (and Wine, if anything, is definitely going to produce an "odd" call sequence compared to native GL renderers).
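If anyone wants to check whether their driver is doing this, here's a minimal sketch of how the stall pattern shows up in per-frame timings. Purely illustrative: the function name, the 30ms threshold, and the simulated 15-frames-then-50ms-stall trace are my own assumptions, just mirroring the numbers above; in a real test you'd record timestamps around your actual buffer-swap call.

```python
# Hypothetical sketch: flag "micro-hangs" in a list of per-frame durations.
# In practice you'd collect the durations with time.perf_counter() around
# whatever present call your app uses (glXSwapBuffers etc.).

def detect_hangs(frame_times, hang_threshold=0.030):
    """Return indices of frames that took longer than hang_threshold
    seconds. The ~50ms stalls described above show up as clear
    outliers against ~6ms frames, so a simple threshold is enough."""
    return [i for i, dt in enumerate(frame_times) if dt > hang_threshold]

# Simulated trace: groups of 15 fast frames (~6ms) separated by a 50ms
# stall, four times per "second" -- roughly the pattern described above.
trace = ([0.006] * 15 + [0.050]) * 4
print(detect_hangs(trace))  # stalls land at the end of each 16-frame group
```

A driver with consistent pacing gives you an empty list here; the fglrx behavior I described gives you evenly spaced outliers, four per second.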
Also, you can't compare the performance of AAA games running through Wine with the performance you'd expect if they were ported natively, especially if the Windows version uses Direct3D. There's just no comparison -- the D3D translation layer is an engine of its own, and if you eliminate that inner platform and use OpenGL directly, the results can be completely, stunningly different (missing features such as FSAA aside; then again, I never use FSAA, and I consider myself a hardcore AAA gamer).
Last edited by allquixotic; 04-03-2012 at 09:07 AM.
VDrift has a no-shaders fallback that is triggered if a few capabilities are missing, such as floating-point textures. The difference is quite noticeable and is logged in the output. The easiest way to make it a fairer benchmark is to force the game to run without shaders, I guess.
Vegastrike is another game that follows this scheme, offering a number of fallbacks and shader options.
Well, if you were really not trying to get people riled up, I will take that back then.
Originally Posted by crazycheese
My friend noticed this micro-lagging on Windows when he was playing Skyrim. He said it only occurs with the latest Catalyst drivers.
Originally Posted by allquixotic