However, "rovclock -i" shows lots of different frequencies when I run it multiple times, up to about 600+ MHz (close to its maximum frequency), so I'm not sure whether radeon_pm_info is failing to detect the changed frequency or rovclock is wrong.
Don't use rovclock. It only supported early Radeons (r1xx-r3xx), and even then it didn't properly handle all the PLL dividers. On newer Radeons it's just reading garbage.
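For what it's worth, a sketch of reading the driver's own clock report instead of rovclock; this assumes debugfs is mounted at /sys/kernel/debug and the radeon card is DRI device 0, and reading it usually needs root:

```shell
# Read the radeon driver's own idea of the current clocks from debugfs.
# Assumption: debugfs mounted at /sys/kernel/debug, radeon is DRI device 0.
cat /sys/kernel/debug/dri/0/radeon_pm_info 2>/dev/null \
    || echo "radeon_pm_info not found (is debugfs mounted? is card 0 a radeon?)"
```

Unlike rovclock, this asks the driver directly rather than poking the PLL registers from userspace.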
One more bit of info: Steam is detecting only 268.44 MB of VRAM, when in theory it should report 512 MB. This could account for some of the performance issues, especially since I run games at 1920x1080. This thread discusses the issue: http://steamcommunity.com/app/221410...8532588748333/
EDIT: Never mind, probably just an issue with the way Steam detects available RAM, according to that thread.
OpenGL does not provide a standard way for apps to query the amount of memory available. There have been several proposals, but nothing has come of it so far. Apps end up having to do their own hacks to guess how much memory is available.
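One example of those per-vendor hacks: some drivers expose vendor-specific extensions (GL_ATI_meminfo on AMD, GL_NVX_gpu_memory_info on NVIDIA) that apps can probe. A quick sketch to see whether your driver advertises either one, assuming glxinfo (from mesa-utils or similar) is installed:

```shell
# Count how many vendor meminfo extensions the GL driver advertises.
# Assumption: glxinfo is installed; if it isn't, grep sees empty input
# and the count is simply 0.
found=$(glxinfo 2>/dev/null | grep -cE 'GL_ATI_meminfo|GL_NVX_gpu_memory_info')
if [ "$found" -gt 0 ]; then
    echo "vendor meminfo extension available"
else
    echo "no vendor meminfo extension reported"
fi
```

If neither shows up, an app has no portable way to know the real VRAM size, which is why Steam's number can be off.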
Cool, I'll try it out. What does it do? And thanks for the troubleshooting help in general, all of you.
If a request to change the clock exceeds the so-called "default clock", it is simply set to the default clock instead, which is quite low on APUs. This limit is probably in place to make sure the thermal specification is never exceeded, even without active thermal management. Desktop GPUs usually have the highest clock as the default clock, so there is no such problem with them.
In my experience, power consumption and heat only increase slightly with this hack, and if the device has suitable cooling, there won't be a problem. AFAIK, in theory the APU's design requires active thermal management though, i.e. the driver has to monitor the temperature and reduce the clock if it gets too hot. Since that hasn't been implemented yet, the driver stays on the safe side and doesn't allow high clocks.
I'd *love* a driver option in the vanilla kernel that allows users to use the full clock range at their own risk.
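For reference, the stock (unpatched) radeon interface for this is the sysfs profile method; a minimal sketch, assuming the radeon card is card0 and you're root. The point above still applies: on APUs even the "high" profile is clamped to the default clock unless the kernel is patched.

```shell
# Sketch of the stock radeon sysfs power-management interface.
# Assumption: radeon card is card0; writes need root. On APUs, even
# "high" is clamped to the default clock without a patched kernel.
set_radeon_profile() {
    case "$1" in
        default|auto|low|mid|high) ;;   # profiles the radeon driver accepts
        *) echo "unknown profile: $1" >&2; return 1 ;;
    esac
    echo profile > /sys/class/drm/card0/device/power_method &&
    echo "$1"    > /sys/class/drm/card0/device/power_profile
}
# e.g. set_radeon_profile high
```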
r600g default = 21.5fps.
r600g + optional optimizations in mesa master via an environment variable + recompiled kernel to remove PM limitations = 64.5 fps.
Windows default = 73.5 fps.
That's not bad at all. We just need to get some of these optional/experimental things into shape and enabled by default now.
Yeah, pretty much. I'd need to run a few more benchmarks on both Windows 8 and Linux and make sure the settings were exactly the same, but I don't expect there'd be much variation other than a couple of fps here and there.
Although that's only true for Source games. Serious Sam 3, for instance, ran awfully and looked graphically off, and Brutal Legend tended to lock up after a bit of play. That could be true on Windows as well, though; I haven't tested yet.
I will say that, going by memory of the performance I was getting when FGLRX was still working, the open driver is likely superior. For example, loading a save of Last Remnant running under Wine gives me 22 FPS with R600 SB + the high profile, whereas I was getting something like 11-14 FPS with FGLRX. Portal runs about the same as Lost Coast, mostly slightly above 60 fps, whereas it was more like 30-40 with FGLRX, even when using the discrete card (which ran even worse than the integrated card, BTW).
Overall, at least in a best-case scenario, the open source driver looks very good. It just needs better power management and support for some more extensions, I think.