
Thread: Radeon 3D Performance: Gallium3D vs. Classic Mesa

  1. #1
    Join Date
    Jan 2007
    Posts
    13,418

    Default Radeon 3D Performance: Gallium3D vs. Classic Mesa

    Phoronix: Radeon 3D Performance: Gallium3D vs. Classic Mesa

    Gallium3D, the graphics driver architecture started by Tungsten Graphics to overhaul the hardware driver support in Mesa, has been around for a few years, but it is finally getting close to appearing on more desktop systems. Now that the Nouveau DRM code is in the mainline Linux kernel and its main 3D driver is Gallium3D-based, we will hopefully see it adopted by more distributions soon -- it's already being flipped on in Fedora 13. On the ATI side, the "r300g" driver that provides Gallium3D support for R300 through R500 hardware (up through the Radeon X1000 series) is also being battered into surprisingly good shape. To see where the Radeon Gallium3D support stands for these older ATI graphics cards, we have run a set of tests comparing OpenGL performance under the latest Mesa 7.9-devel code with the Gallium3D driver against the classic Mesa DRI driver.

    http://www.phoronix.com/vr.php?view=14690

  2. #2
    Join Date
    Sep 2009
    Posts
    33

    Default

    Probably better to be at a constant fps than climbing the highest hill, as this would be best for power saving on laptops and for smoothness. What's the point of rendering frames that the screen's refresh rate can't even display? 60fps should be the cap in most gfx driver settings.

    Anyone think otherwise?

  3. #3
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by 0e8h
    Probably better to be at a constant fps than climbing the highest hill, as this would be best for power saving on laptops and for smoothness. What's the point of rendering frames that the screen's refresh rate can't even display? 60fps should be the cap in most gfx driver settings.

    Anyone think otherwise?
    Well, there was that one guy I got into an argument with some weeks back on this very topic... special people are a fact of life, I guess.

    When it comes to benchmarking, you usually have vsync off to see how fast the engine/driver/level/whatever can actually run. When you're actually playing a game, though, you almost certainly want it on for a variety of reasons, including the ones you listed.

    The problem with having vsync on, from a benchmarking perspective, is that you can't see how far past the monitor refresh rate (be it 60Hz, 75Hz, or whatever) the system could actually go. There's an important distinction between just barely managing 60Hz and being able to manage 100Hz. If the game is just barely hitting 60Hz, then a more complex scene or some other load on the system will cause it to drop below 60Hz, which is problematic for a number of reasons. If the game is capable of running at 100Hz, however, then we know that unexpected spikes in scene complexity are far less likely to drop the frame rate below the desired level.

    Remember that with vsync, barely missing 60Hz does not mean that you run at 59Hz; it means that you run at 30Hz. Each frame takes just long enough to miss the first vsync and ends up waiting for the second, halving the rate of frame updates. While it's possible a game could turn off vsync dynamically when the frame rate drops low enough, the result may end up even more jarring, since the game would then periodically have tearing and an uneven frame rate.
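
    To make that concrete, here's a minimal C sketch of the effect (my own toy numbers, not measurements from any driver): with vsync, the effective frame time is the render time rounded up to the next multiple of the ~16.7ms refresh interval.

    Code:
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double refresh_ms = 1000.0 / 60.0;   /* one 60Hz vblank interval */
        const double render_ms[] = { 10.0, 16.0, 17.0, 34.0 };

        for (int i = 0; i < 4; i++) {
            /* the frame waits for however many vblanks it takes to cover
             * its render time, so display time rounds UP to a multiple
             * of the refresh interval */
            double shown_ms = ceil(render_ms[i] / refresh_ms) * refresh_ms;
            printf("render %4.1f ms -> shown every %4.1f ms (%2.0f fps)\n",
                   render_ms[i], shown_ms, 1000.0 / shown_ms);
        }
        return 0;   /* note: 17ms render -> 33.3ms shown, i.e. 30fps, not 59 */
    }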

    There are other problems with missing the 60Hz mark, too, separate from the monitor refresh itself. Many game physics simulations rely on fixed intervals for simulation updates (especially games where complex environmental physics actually matter to gameplay, which is more and more of them these days). Many games use a 1/60th second interval to match the lowest common denominator of monitor refresh rates, so dropping below 60Hz requires two simulation updates per frame, which just makes things even slower.
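
    For anyone unfamiliar with that pattern, here's a rough sketch of such a fixed-timestep loop in C (names and structure made up for illustration, not from any particular engine):

    Code:
    #include <stdio.h>
    #include <time.h>

    #define DT (1.0 / 60.0)                 /* fixed simulation step */

    static double now(void)                 /* monotonic time in seconds */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static void step_physics(double dt) { (void)dt; /* integrate the world */ }
    static void render_frame(void)      { /* draw the current world */ }

    int main(void)
    {
        double accumulator = 0.0, last = now();

        for (int frame = 0; frame < 600; frame++) {  /* short demo run */
            double t = now();
            accumulator += t - last;
            last = t;

            /* consume whole 1/60s steps; a frame that took >16.7ms forces
             * two or more physics steps here, making it slower still */
            int steps = 0;
            while (accumulator >= DT) {
                step_physics(DT);
                accumulator -= DT;
                steps++;
            }
            if (steps > 1)
                printf("slow frame: %d physics steps before render\n", steps);
            render_frame();
        }
        return 0;
    }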

    I'm not sure if the seemingly fixed frame rate on the Gallium driver is intentional or not (the numbers do look like vsync behavior on a 60Hz display), but if it is, there should be a way to disable it for benchmarking purposes.

  4. #4
    Join Date
    Jan 2009
    Posts
    88

    Default

    Quote Originally Posted by 0e8h
    Probably better to be at a constant fps than climbing the highest hill, as this would be best for power saving on laptops and for smoothness. What's the point of rendering frames that the screen's refresh rate can't even display? 60fps should be the cap in most gfx driver settings.

    Anyone think otherwise?
    Easy. Input lag. 1 second/60 frames = 16ms. 1/120 = 8ms. The difference is very noticeable, ask any decent musician or pro player.

    Why should I wait 16ms for my shot to fire when I can wait 8ms... I'll hear it and it'll get sent to the server sooner, even if I don't see it that fast.
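
    Spelled out in C, since it's just the arithmetic from above (with one extra decimal):

    Code:
    #include <stdio.h>

    int main(void)
    {
        const double fps[] = { 60.0, 120.0 };

        for (int i = 0; i < 2; i++)   /* frame time = worst-case wait for
                                       * your input to land in a frame */
            printf("%3.0f fps -> up to %.1f ms per frame\n",
                   fps[i], 1000.0 / fps[i]);
        return 0;   /* 60 fps -> 16.7 ms, 120 fps -> 8.3 ms */
    }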

  5. #5
    Join Date
    Aug 2007
    Posts
    153

    Default

    As Michael said, the constant framerate is probably due to syscall overhead. Dave pushed a handful of things that should amortize that a bit, and there are still more optimizations that could be done.
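
    For anyone wondering what amortizing that looks like: the rough idea is to pay the fixed per-syscall cost once per batch of commands rather than once per draw. Here's a toy C illustration (entirely hypothetical; this is not the actual radeon/Gallium submission code):

    Code:
    #include <stdio.h>
    #include <string.h>

    #define BATCH_WORDS 4096

    struct cmd_buf {
        unsigned words[BATCH_WORDS];
        int used;
    };

    static int submissions;                 /* how many "kernel entries" we paid for */

    static void flush(struct cmd_buf *b)    /* stand-in for the real submission ioctl */
    {
        if (b->used == 0)
            return;
        submissions++;                      /* one fixed-cost syscall per batch */
        b->used = 0;
    }

    static void emit(struct cmd_buf *b, const unsigned *cmd, int n)
    {
        if (b->used + n > BATCH_WORDS)      /* buffer full: pay the cost now */
            flush(b);
        memcpy(&b->words[b->used], cmd, n * sizeof *cmd);
        b->used += n;
    }

    int main(void)
    {
        struct cmd_buf b = { .used = 0 };
        unsigned draw[16] = { 0 };          /* pretend 16-word draw command */

        for (int i = 0; i < 10000; i++)
            emit(&b, draw, 16);
        flush(&b);

        /* ~40 submissions instead of 10000 one-per-draw syscalls */
        printf("%d submissions for 10000 draws\n", submissions);
        return 0;
    }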

  6. #6
    Join Date
    Jun 2009
    Posts
    28

    Default

    Thanks for providing some developer insight rather than just the fuck ton of numbers these benchmarks usually are. This actually made for an interesting read.

    I have one suggestion, though: could you pick colours that are easier to distinguish? I have a very difficult time figuring out which line is which driver. It could be because I'm dichromatic (red/green colour blind). Maybe make one line a bright colour and the other a darker one.

  7. #7
    Join Date
    Nov 2007
    Location
    Die trolls, die!
    Posts
    525

    Default

    Thanks for the article. It was a good read. I liked the insight it provided along with the numbers, which cleared up many questions I had about the performance.

  8. #8
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Hey, that was pleasantly unexpected (the results).

    When Gallium3D gets optimized (somewhere in the distant future, right?), will it then be faster than classic Mesa?

  9. #9
    Join Date
    Jul 2007
    Posts
    428

    Default And the Universe will explode for your pleasure :-)

    Well, "celestia" at any rate:

    Code:
    [drm:radeon_fence_wait] *ERROR* fence(f04020a0:0x00081CC5) 504ms timeout going to reset GPU
    [drm:r300_ga_reset] *ERROR* VAP & CP still busy (RBBM_STATUS=0x84000140)
    [drm] GA reset succeed (RBBM_STATUS=0x00000140)
    [drm] radeon: cp idle (0x10000000)
    [drm] radeon: ring at 0x00000000C8000000
    [drm] ring test succeeded in 0 usecs
    [drm] GPU reset succeed (RBBM_STATUS=0x00000140)
    [drm:radeon_fence_wait] *ERROR* fence(f04020a0:0x00081CC5) 512ms timeout
    [drm:radeon_fence_wait] *ERROR* last signaled fence(0x00081CC5)
    [drm:radeon_fence_wait] *ERROR* fence(f5967900:0x00081CD2) 508ms timeout
    [drm:radeon_fence_wait] *ERROR* last signaled fence(0x00081CD5)
    [drm:radeon_fence_wait] *ERROR* fence(f059d0a0:0x00081CE9) 504ms timeout going to reset GPU
    [drm:r300_ga_reset] *ERROR* VAP & CP still busy (RBBM_STATUS=0x84000140)
    [drm] GA reset succeed (RBBM_STATUS=0x00000140)
    [drm] radeon: cp idle (0x10000000)
    [drm] radeon: ring at 0x00000000C8000000
    [drm] ring test succeeded in 0 usecs
    [drm] GPU reset succeed (RBBM_STATUS=0x00000140)
    [drm:radeon_fence_wait] *ERROR* fence(f059d0a0:0x00081CE9) 512ms timeout
    [drm:radeon_fence_wait] *ERROR* last signaled fence(0x00081CE9)
    [drm:radeon_fence_wait] *ERROR* fence(f0bc4900:0x00081CEC) 492ms timeout
    [drm:radeon_fence_wait] *ERROR* last signaled fence(0x00081CF9)
    [drm:radeon_fence_wait] *ERROR* fence(c4c0b260:0x00081D3F) 504ms timeout going to reset GPU
    [drm:r300_ga_reset] *ERROR* VAP & CP still busy (RBBM_STATUS=0x84000140)
    [drm] GA reset succeed (RBBM_STATUS=0x00000140)
    [drm] radeon: cp idle (0x10000000)
    [drm] radeon: ring at 0x00000000C8000000
    [drm] ring test succeeded in 0 usecs
    [drm] GPU reset succeed (RBBM_STATUS=0x00000140)
    [drm:radeon_fence_wait] *ERROR* fence(c4c0b260:0x00081D3F) 512ms timeout
    [drm:radeon_fence_wait] *ERROR* last signaled fence(0x00081D3F)
    [drm:radeon_fence_wait] *ERROR* fence(c6e62900:0x00081D43) 488ms timeout
    [drm:radeon_fence_wait] *ERROR* last signaled fence(0x00081D4F)
    This was with vanilla 2.6.33.1 and a Radeon 9550 (RV350).

  10. #10
    Join Date
    Oct 2008
    Location
    Sweden
    Posts
    983

    Default

    Quote Originally Posted by garytr24
    Easy. Input lag. 1 second/60 frames = 16ms. 1/120 = 8ms. The difference is very noticeable, ask any decent musician or pro player.

    Why should I wait 16ms for my shot to fire when I can wait 8ms... I'll hear it and it'll get sent to the server sooner, even if I don't see it that fast.
    I'm guessing the number of pro players here is pretty small.
