Sounds like the blur is being done in software, which would be very slow. If it's a GL 2.0 function being used in blur that would make sense; the open drivers should pick up GL 2.0 support once Gallium3D is up and running (Gallium3D was integrated into mesa master recently, which is *great* news).
For the other apps it also sounds like they are taking SW fallbacks; there's a way to disable that but I don't remember the details.
I need to talk to the Google folks; Google is such a big advocate of open source, but it seems like they only test their apps on binary drivers, not on the open source ones.
I'm not sure what the latest problem with Google Earth is; will ask around.
Is that a Flash video or something? I would expect video playback to be faster on the open drivers than on fglrx these days; this should be fixable today.
Just for the record: setting the kernel parameter "nomodeset" has solved a number of issues for me:
Google-Earth doesn't work with the open source driver. => solved!
When DPMS mode is enabled and the screen goes to sleep it will not wake up from it anymore. => opengl screensaver issue with kms enabled => solved!
Pretty much everything else I do with this driver is at least 50% slower than with the fglrx driver. => solved!
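For the record, this is roughly how the parameter is set on a GRUB legacy system; the kernel version and file paths below are just examples and will differ per distribution:

```shell
# /boot/grub/menu.lst (GRUB legacy) -- append "nomodeset" to the kernel line,
# then reboot. On GRUB 2 it goes in GRUB_CMDLINE_LINUX in /etc/default/grub.
kernel /boot/vmlinuz-2.6.29 root=/dev/sda1 ro quiet nomodeset
```

You can check whether it took effect after boot with `cat /proc/cmdline`.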
But the compiz alpha blur effect still isn't working as expected. Without the *blur* effect, though, it is much more usable now: Xvideo plays fine (also windowed) and everything is a lot "snappier".
Also, I read on the Fedora Project wiki that the radeon open source driver will support KMS and DRI2 in the upcoming Fedora 11 release for the older r100, r200 and r300 cards.
So, it seems there still is some future for my *old* hardware here.
(II) Module exa: vendor="X.Org Foundation"
	compiled for 1.5.3, module version = 2.4.0
	ABI class: X.Org Video Driver, version 4.1
(**) RADEON(0): Rage Theatre Crystal frequency was specified as 29.50 MHz
(**) RADEON(0): Rage Theatre tuner port was specified as 0
(**) RADEON(0): Rage Theatre composite port was specified as 2
(**) RADEON(0): Rage Theatre SVideo port was specified as 6
(**) RADEON(0): Tuner type was specified as 0
(**) RADEON(0): Rage Theatre microcode path was specified as /usr/lib/xorg/modules/multimedia/ativmc20.cod
(**) RADEON(0): Rage Theatre microcode type was specified as binary
(==) RADEON(0): Assuming overlay scaler buffer width is 1920
(II) RADEON(0): Cannot access BIOS or it is not valid.
	If your card is TV-in capable you will need to specify options RageTheatreCrystal, RageTheatreTunerPort,
	RageTheatreSVideoPort and TunerType in /etc/X11/xorg.conf.
(!!) RADEON(0): For information on using the multimedia capabilities
	of this adapter, please see http://gatos.sf.net.
With previous versions of the driver this was no problem.
Can somebody help me?
This is also solved for me by setting the *nomodeset* parameter.
There is microcode (a "binary blob") in the Radeon driver in the Linux kernel. Because of this, according to the Free Software Definition, the open source Radeon drivers are non-free software.
Has the information about what this microcode does been released by ATI? Is there anything stopping ATI from releasing it?
The microcode appears to only be used to initialize the Radeon command processor. With proper documentation for the microcode, the open source Radeon drivers could be 100% free software. This would allow free software users the opportunity to use many powerful Radeon video cards for 3D graphics in GNU/Linux.
This is one of the few places where I suspect the Free Software Definition is being misapplied, or at least being used in a way which brings unintended consequences. I can't believe that the definition was written to deliberately penalize hardware vendors who load microcode in the driver rather than autoloading it from ROM or permanently storing the microcode on-chip, but that is what is happening right now.
We are documenting all of the programming information required to write 100% free, highly functional and performant drivers. We are not documenting the internal functioning of the hardware, and the microcode images are part of that internal hardware function. No GPU vendor allows modification of the internal microcode, whether it is stored permanently on the chip or loaded at power-up. If I were being told that all microcode was evil and that you were not going to use any chips which run internal microcode unless you had source for that microcode, I could understand; but that is not the case here.
The argument, as I understand it, is "since your microcode image is loaded at power-up by the driver then the microcode needs to meet all the rules we apply to the rest of the driver code, even though it is not driver code, is never executed by the CPU, and once it is loaded it is NO DIFFERENT from microcode which had been permanently stored in the chip or autoloaded from ROM".
Honestly I don't see why a microcode image loaded at power-up is any less free than the same microcode stored permanently in the chip. Neither one can be read or understood, neither one allows learning or changing or redistributing the changes, yet apparently one is Good and one is Evil. I can't believe that was Mr. Stallman's intention.
AFAIK most of the older Radeon GPUs can use the 3D engine *without* the command processor, albeit more slowly, since there is a command FIFO which can accept command packets without requiring that the microcode images be loaded. I have mentioned this to a couple of developers but so far there hasn't been much interest. Only the 6xx-and-up parts absolutely require the command processor microcode for 3D.