It's incredible how fast the HD 4000 is, especially compared to the dead-slow A8-3870.
I still don't understand whether Michael set the AMD APUs to the high power profile; if not, the benchmarks are simply useless.
Originally Posted by russofris
It would be interesting to check where it is supposedly highly optimized, because in my tests:
opengl + wine = black magic and voodoo as to whether it renders correctly
xvba (UVD) = almost everything I threw at it either crashes it (taking half of X down with it) or has a zillion render artifacts (the Gallium MPEG-2 acceleration is flawless for me via VDPAU, excellent work there)
opencl = never got it working, but the NVIDIA blob does just fine (OK, it could be that my code hates fglrx)
2d accel = slow/crappy/crashy/full of render errors (but a bit better than before)
xv = crappiness at its highest (the new option helps, but it kills GL performance, so...)
pm = the only good thing so far (<-- maybe what Michael means)
3d performance in native Linux games = slower than Windows GL and a lot slower than DX, but faster than FOSS; so not exactly "highly optimized" in the broad sense of the word, but OK, it is faster than Mesa for the time being
browser GPU accel (WebGL) = capped at 60 FPS, and Mesa hits 60 too in all the tests I've done with the Mozilla and Chromium examples
browser GPU accel (render) = a browser crash festival; at least Chromium 18 and Firefox Aurora hate it, though Opera Next seems to hate it less (r600g shows some crashes from time to time too, but it is way more usable)
Flash accel = blacklisted, because no matter what, I have never seen Flash use anything other than the software renderer (r600g with HTML5 is a pleasure, though, so DIE FLASH), and when you go fullscreen there is a high chance that fglrx kills X or hard-locks with a kernel panic, LOL
So there's no choice but to laugh as hard as I can.
BTW, I got these running in Wine with r600g:
Codemasters GRID (racing game; disable blur, since it hurts performance)
StarCraft 2: Wings of Liberty
at 1366x768 on my 4850 X2, and the frame rate is playable if you skip AA and go to Ultra. Amazingly, the render is perfect, crystal clear, so I think r600g is mature enough to start benchmarking Wine games. I'll try some games I know have in-game benchmarks later and see how much juice I can get out of r600g.
Why? Catalyst has excellent 3D performance compared to the open-source radeon driver. Have you ever seen a changelog for Catalyst? There are often a bunch of per-app optimizations.
Originally Posted by russofris
I suggest doing the /sys interface in C, then doing a GUI in something more high-level like python.
Originally Posted by Veerappan
In fact, I have a python-Qt front-end for changing power profiles and dynpm on radeons, which I never really sanitised enough for a release. It calls a minimal C program (which needs root privileges to write to /sys, so it is kept very minimal) and lets you switch between dynpm and the different profiles for two different gfx cards. It might give you an easy start into GUI stuff.
If you're interested, I can send it to you, let me know.
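The helper the posters describe is a minimal C program, but the logic is simple enough to sketch; here is a hedged Python version of the same idea, assuming the radeon driver's sysfs layout (the node paths and the `set_profile`/`profile_path` names are mine, not from the thread):

```python
# Sketch of the privileged profile setter described above. Assumes the
# radeon driver's sysfs layout; in the thread this is a minimal C program.
import sys

# Profile names the radeon "profile" power method accepts.
VALID_PROFILES = {"default", "auto", "low", "mid", "high"}

def profile_path(card):
    """sysfs node the radeon driver exposes for the power profile."""
    return "/sys/class/drm/card{}/device/power_profile".format(card)

def set_profile(card, profile):
    """Write a profile name to the card's node. Needs root to succeed."""
    if profile not in VALID_PROFILES:
        raise ValueError("unknown profile: " + profile)
    with open(profile_path(card), "w") as node:
        node.write(profile)

if __name__ == "__main__" and len(sys.argv) == 3:
    set_profile(int(sys.argv[1]), sys.argv[2])
```

Keeping the privileged part this small is the point: the GUI runs unprivileged and only this setter ever needs root.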
Am I the only one who would have liked to see a video playback comparison? I don't game on IGPs, at least not for long (maybe some TeamNations Forever)... but I do watch movies/TV on my desktop/laptop. I have a working MythTV setup, but currently only with nVidia cards inside. I would have liked to know whether I could go a different route with the HD 4000 in the Ivy Bridge chips.
I wouldn't mind taking a look at what you've got, as you might have come up with something better than what I currently have. I'll PM you my email address. My motivation for this was to replace the shell scripts I had previously written to do the same thing. I've currently written the back-end to discover an arbitrary number of cards exporting nodes under /sys/class/drm/card[\d+]/, then probe as safely as possible for the needed sysfs nodes (power_method and power_profile). The user-facing GUI would enumerate those cards and let the user configure power profiles independently for each individual card as desired.
Originally Posted by pingufunkybeat
It does make sense to split a separate executable off from the GUI, since you need root privileges to write to the nodes, and you also need root to READ the temperature values (they're under /sys/kernel/debug/... *grumble*). Reading the profile values works as an unprivileged user, but a small program just for setting the values makes sense, as it minimizes the attack surface.
It has moderate performance, but like I said, it is much slower than its Windows counterpart and DX. Yes, it is faster than Gallium, but it is also a lot less stable and more error-prone than Gallium. Hence, not "highly optimized". LOL
Originally Posted by DanL
BTW, there is no such thing as per-app optimizations; it's either fixing crappy app code at the driver level or fixing nasty errors in the driver. Use Google and find out what those "app optimizations" really mean (your eyes will weep tears of blood).
Another one: "but Catalyst supports GL X.x.x faster than Gallium". OK, yes, they expose the API version very quickly, but normally it is not even usable until a bunch of releases later, apps most likely need a pile of workarounds that aren't needed with NVIDIA or Gallium, and performance is usually very subpar compared to NVIDIA (on same-class hardware) until a bunch of releases later. So again, not exactly "highly optimized" (ask Unigine how pleasant fglrx is to work with).
So we don't dispute that it gives more FPS than Gallium in benchmarks; we laugh at the fact that someone calls that bloody mess of a crashy driver "highly optimized".
I'm surprised how close AMD and Intel are in some of the tests. I thought AMD was much faster.
In windows, the Llano chips seem to usually beat the Intel ones fairly easily in graphics workloads, but it seems either the Intel Linux drivers are much better than the Intel Windows drivers (very possible), or the Catalyst drivers in Linux are slower than in Windows.
Originally Posted by mikkl