Actually, overall, glamor acceleration feels even slower, and there's ugly tearing everywhere. Using glamor might be easy, but I don't think it is the way to go for great 2D acceleration. IIRC Chris Wilson came to the same conclusion.

2. EXA was designed years ago and does not provide the necessary infrastructure to accelerate more advanced RENDER features without an overhaul.
Thus, you end up with SW fallbacks for certain operations, which means data ping-ponging between GPU and CPU buffers; that almost always ends up slower than pure CPU rendering or pure GPU rendering. You can try the glamor support in git, which should improve things going forward as glamor picks up support for accelerating more and more operations using OpenGL.
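The ping-pong cost described above is easy to see with a toy model. This is a hypothetical sketch (the names `Pixmap`, `render_op` and the cost model are illustrative, not the actual EXA code): each software fallback forces a copy of the pixmap from VRAM into system RAM before the CPU can touch it, and the next accelerated operation forces a copy back, so a mixed pipeline pays bus transfers that a pure-CPU or pure-GPU pipeline never pays.

```python
# Toy model of pixmap migration under EXA-style software fallbacks.
# Names and behavior are illustrative only, not the real driver's.

class Pixmap:
    def __init__(self):
        self.location = "vram"   # where the current copy of the pixels lives
        self.transfers = 0       # count of copies between VRAM and system RAM

    def migrate(self, where):
        # Only pay a bus transfer when the pixmap actually has to move.
        if self.location != where:
            self.location = where
            self.transfers += 1

def render_op(pixmap, accelerated):
    if accelerated:
        pixmap.migrate("vram")   # the GPU operates on the copy in VRAM
    else:
        pixmap.migrate("ram")    # a CPU fallback needs it in system RAM

def run(ops):
    """ops is a list of booleans: True = accelerated op, False = SW fallback."""
    p = Pixmap()
    for accelerated in ops:
        render_op(p, accelerated)
    return p.transfers

print(run([True] * 8))           # pure GPU: 0 transfers
print(run([False] * 8))          # pure CPU: 1 transfer, then it stays in RAM
print(run([True, False] * 4))    # alternating fallbacks: 7 transfers
```

The point of the model: it is not the fallback itself that hurts, it is the alternation. Eight operations done entirely on either side cost at most one migration, while the same eight operations interleaved ping-pong the data on nearly every step.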
Intel has already shown that it is possible to accelerate RENDER well and without (slow) fallbacks.
Gee, I didn't want to turn bitter. As I wrote, I explicitly prefer AMD hardware because of said OSS policy. And I wanted to let AMD know about it as an encouragement. It's just sad that the nouveau guys can do an equally good job without any support from nVidia.
Actually, I just know it is a KMS/EXA thing; UMS/EXA doesn't have all these problems/slowness.
Last edited by dungeon; 07-17-2012 at 04:38 AM.
Last edited by brent; 07-17-2012 at 05:15 AM.
The main problem with performance is that there aren't many developers working on making the driver go faster. There are what seem to be obvious problem areas for investigation (e.g., Warsow performance is out of line), and generally finding a good fix for the slow cases makes other apps run a bit faster as well. Our guess was that 60-70% performance on average could be reached with the anticipated number of community developers available to work on the driver, and the r300 stack seems to have already reached that point. We expected that level of 3D performance combined with better integration with compositors would provide a pretty good user experience for all but gaming on the very lowest end hardware, and I think that is still the case.
One thing I didn't expect is the degree of focus on higher levels of GL support -- our expectation had been that there would be more interest in raising performance than in raising the GL level (so yes, I expected to see a faster GL 2.1 driver at this point, running at 60-70% of fglrx performance, rather than a GL 3-ish driver with 40-50% of fglrx performance).
I didn't say 250 years, and obviously if I thought it would take 250 years I would be doing something that provided more immediate gratification, like planting walnut trees or something. The relationship between "usability" and effort is quite non-linear. My question was why anyone was expecting performance and feature parity quickly and easily in the first place? The developers certainly aren't saying that; we certainly aren't saying that, and the few people who *were* saying it didn't have enough development background to be credible sources.
When you talk about feature parity, what are you actually talking about? UVD was carved out at the start, power management has been discussed and has been made a priority recently, performance is largely independent of documentation, and GL level is continuing to progress pretty quickly (and is already pretty close to the limit of what your hardware will support anyways) -- what else is there?
Last edited by bridgman; 07-17-2012 at 07:31 AM.