Stability can be an even more important concern (ruling out UXA), as is providing support for the newest chips.
I would be very interested in hearing from others who experience problems running with "greedy" enabled, to help inform our decision.
The Intel graphics driver now has a proper memory manager in the form of the Graphics Execution Manager.
Well, these results match my experience with Intel performance using EXA and EXA greedy almost exactly.
I'm working on the application Marble (http://edu.kde.org/marble). Marble is a virtual globe that uses 2D graphics and software rendering, so it works without any 3D hardware dependency. As a result, Marble creates pixmaps all the time and puts them onto the X server. This works perfectly fine on Mac and on Windows. Unfortunately, on X11 without the greedy migration heuristic this is very slow, and the application is at the edge of being usable. With "EXA greedy" turned on, the framerate on X11 is decent and the application works smoothly.
For me as an application developer the results of this test are pretty clear:
For pixmap-heavy use cases, EXA greedy is the best option most of the time. In theory, UXA and plain EXA are sometimes superior for other use cases.
In practice, that latter advantage is useless most of the time: if my pixmap support is fast but text or gradients are slow, then in 90% of cases I can program around it easily by caching onto a pixmap (that's what Qt offers in its QGraphicsView framework, for example, by specifying a matching cache mode on each graphics item). So slow text and gradients don't matter that much to me, since one can work around them.
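The Qt cache mode mentioned above is set per item. A minimal sketch (the scene and rectangle item are just placeholders for whatever an application actually draws):

```cpp
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGraphicsRectItem>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGraphicsScene scene;
    QGraphicsRectItem *item = scene.addRect(0, 0, 200, 100);

    // Render the item once into a pixmap and reuse that pixmap on
    // subsequent repaints, instead of re-running the (possibly slow)
    // text/gradient paint code every frame.
    item->setCacheMode(QGraphicsItem::DeviceCoordinateCache);

    QGraphicsView view(&scene);
    view.show();
    return app.exec();
}
```

QGraphicsItem also offers ItemCoordinateCache, which survives transformations at the cost of scaling artifacts; DeviceCoordinateCache gives exact rendering as long as the item isn't rotated or scaled.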
However, if displaying pixmaps is slow and text or gradient support is fast, then I can't do anything to improve the situation.
That's why we recommend EXA greedy to Marble users who experience slow performance with the Intel X.Org drivers.
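For reference, the migration heuristic is selected in the Device section of xorg.conf; a minimal sketch (the Identifier string is just an example):

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "AccelMethod"        "EXA"
    Option     "MigrationHeuristic" "greedy"
EndSection
```

The MigrationHeuristic option only applies with AccelMethod "EXA"; the other accepted values are "always" (the default) and "smart".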
At least in my daily use, 2D performance with the "smart" heuristic is the most balanced and a lot faster than the default "always". But I'm using the radeon driver (r200 and r500).
I have noticed in practice that greedy sometimes performs badly. Usually, when a user thinks about 2D performance, it isn't important whether an already-fast operation gets some 20% faster; it is important that nothing is very slow. In my practical testing, greedy has shown some bad slowdowns (especially with Firefox).
Always (the default) is even worse than greedy because it has horrible performance bottlenecks (Wine 2D and Firefox scrolling/resizing).
Smart, in contrast, provides balanced performance: it might lose a bit on average for operations that aren't problematic under greedy or always, but it doesn't show any visible slowness in specific operations.
Ubuntu 9.04 is shipping with the 2.6 series, and 2.6.3 is still the latest *stable* release.
And there's serious performance degradation with it. I've got Jaunty on my Eee PC 701; previously, with eeeBuntu NBR 2.0, it could actually play Caster with the settings trimmed back to minimal values. Now... heh... it's down into single-digit framerates with no change other than the OS version and driver.
I've made a .deb package for driver version 2.7.0. It is based on the latest Ubuntu package (I've included the quirks patches) and compiles cleanly against libdrm 2.4.5 (normally 2.4.6 is required). I've disabled KMS because Ubuntu 9.04 doesn't ship a KMS-capable kernel.
On my Dell Latitude D505 (855GM chipset) the driver works stably (both EXA and UXA). At the moment there are only packages for the 32-bit version of Ubuntu, because I don't own a 64-bit-capable machine and I don't have the slightest idea how to cross-compile a package.
Here are the links to the 32-bit packages, the source code, and my patch for the source code: