
Thread: Radeon Gallium3D Still Long Shot From Catalyst

  1. #11
    Join Date
    Nov 2009
    Location
    Italy
    Posts
    909

    Default

    Quote Originally Posted by pingufunkybeat View Post
    With a more complex game at high resolution (Xonotic @1080p), the OSS drivers are at 60-65% of performance. This is a really good number. With HiZ and some shader compiler optimisation, it should reach 75%, which is really close to the best that can be reasonably expected from the open stack. At 75% performance, the OSS drivers become a real option for everyone.
    I would have agreed if we were nearing 75% of catalyst in Unigine Heaven.
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)

  2. #12
    Join Date
    Jan 2008
    Posts
    297

    Default

    Quote Originally Posted by log0 View Post
    Btw I think one could actually use API traces of games as benchmarks. This would additionally ensure that the same call paths are executed, and that no fall-backs or workarounds for specific hardware are taken.
    Regrettably, this doesn't work. When you profile an apitrace replay, you find that a huge portion of the profile is simply apitrace parsing the multigigabyte trace file.

  3. #13
    Join Date
    Oct 2009
    Posts
    2,081

    Default

    Quote Originally Posted by Qaridarium View Post
    LOL you always can buy a faster card but you can't buy a open catalyst
    Sure you can, it just costs about $10 bazallion.

  4. #14
    Join Date
    Sep 2006
    Posts
    714

    Default

    I think it's very promising. The Xonotic benchmarks are very pleasing.

    My guess from the benchmarks is that there is still some stuff falling back to software that is killing performance in certain cases. With some optimization to applications and the missing pieces in the drivers filled in, we are golden. Once open source gets within about 70-80% of proprietary, I'd call it a success.

  5. #15
    Join Date
    Jul 2010
    Posts
    475

    Default

    Quote Originally Posted by mattst88 View Post
    Regrettably, this doesn't work. When you profile an apitrace replay, you find that a huge portion of the profile is simply apitrace parsing the multigigabyte trace file.
    Hmm, I've got a couple of traces from games and my own stuff (20-70 fps, 100-400 MB), and they take about the same time.

    Just did a quick run with vdrift: about 2 min, 130 MB trace. Frame rate without tracing is about 22 fps, with tracing 17 fps, retracing 15 fps (68% of original fps). Are my results atypical?

    As I see it, the slowdown would be the same for all benchmarked cards and we are interested in the relative performance only.
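    For anyone wanting to reproduce this kind of comparison, here is a minimal sketch of the capture/replay workflow, assuming apitrace and its glretrace replayer are installed and using vdrift as the example game (the fps numbers are just the ones quoted above, not fresh measurements):

    ```shell
    # Capture an OpenGL trace of a short run, if apitrace is available
    # (writes vdrift.trace in the current directory)
    if command -v apitrace >/dev/null; then
        apitrace trace --api gl -o vdrift.trace vdrift

        # Replay in benchmark mode to measure raw replay speed
        glretrace --benchmark vdrift.trace
    fi

    # Express retrace fps relative to the untraced run (numbers from above)
    native_fps=22
    retrace_fps=15
    awk -v n="$native_fps" -v r="$retrace_fps" \
        'BEGIN { printf "retrace at %.0f%% of native\n", 100 * r / n }'
    ```

    The same relative-performance computation applies to any pair of cards replaying the same trace, which is the point being argued: as long as the replay overhead is constant across cards, the ratios stay meaningful.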

  6. #16
    Join Date
    Sep 2008
    Posts
    989

    Default

    Buy lots of RAM and store the apitrace in a Snappy- or LZ4-compressed ramdisk. That should provide for a faster load time for the apitrace...
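    A rough sketch of that idea, with hypothetical paths and assuming the lz4 command-line tool is installed. Note that plain tmpfs is uncompressed, so here the trace is stored LZ4-compressed and only unpacked in RAM right before replay; a zram device would give transparent compression instead:

    ```shell
    # Create a ramdisk (tmpfs; needs root, size is a guess for a large trace)
    sudo mkdir -p /mnt/traceram
    sudo mount -t tmpfs -o size=8G tmpfs /mnt/traceram

    # Keep the multi-gigabyte trace compressed to save RAM...
    lz4 -9 game.trace /mnt/traceram/game.trace.lz4

    # ...and unpack it in RAM just before benchmarking, so glretrace
    # reads the trace at memory speed instead of disk speed
    lz4 -d /mnt/traceram/game.trace.lz4 /mnt/traceram/game.trace
    glretrace --benchmark /mnt/traceram/game.trace
    ```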

  7. #17
    Join Date
    Mar 2010
    Posts
    158

    Default

    Quote Originally Posted by drag View Post
    I think it's very promising. The Xonotic benchmarks are very pleasing.

    My guess from the benchmarks is that there is still some stuff falling back to software that is killing performance in certain cases. With some optimization to applications and the missing pieces in the drivers filled in, we are golden. Once open source gets within about 70-80% of proprietary, I'd call it a success.
    Indeed. The thing about open source drivers is that they can be debugged. It is possible to find out where they are slow, and then further optimise those parts of the code.

    After adding HiZ and doing some further chasing down of performance bottlenecks in the open source code, performance can be expected to reach perhaps 80% of the closed binary drivers. Since almost no-one needs 200 fps, and the difference between 160 fps and 200 fps is all but imperceptible anyway, the performance issue with open source drivers will essentially be solved.

  8. #18
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,033

    Default

    Bridgman, given that GCN moved to hardware scheduling, I assume the lack of an advanced shader compiler in Mesa becomes less of a bottleneck. How would you estimate the effect of that move?

    E.g. do you see GCN cards getting to 80% of Catalyst where earlier generations get 70%, etc.?

  9. #19
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,402

    Default

    Yeah, I don't have any real numbers but from a pure shader compiler POV my guess is that half the gap between open source and proprietary driver might go away with GCN.

    For compute the impact will probably be even greater (since graphics is naturally short-vector work while compute is naturally scalar). We're also picking up some compiler improvements at the same time by using LLVM, so it could get interesting.

    The bigger question is how much of the performance delta today comes from the shader compiler rather than things like HyperZ, since the impact of both increases with display resolution.
    Last edited by bridgman; 03-24-2012 at 12:03 PM.

  10. #20
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,033

    Default

    I also think the opposite will happen with nouveau on Kepler, since they removed hardware scheduling there. Half the fps on a newer-gen card, on a shader-heavy workload?
