
Thread: ATI's Gallium3D Driver Is Still Playing Catch-Up

  1. #31
    Join Date
    Oct 2008
    Posts
    3,125

    Default

    Quote Originally Posted by bridgman View Post
    The back end of LLVM didn't seem to be a good fit for architectures where a single instruction word included multiple instructions, which is the case for 3xx-5xx (vector + scalar) and for 6xx+ (up to 5 scalar instructions).

    As MostAwesomeDude said, the existing shader compiler in the 300 and 300g driver seems pretty good - might not be as good as the one in fglrx but my guess is that it's pretty close.

    Right now there are no signs that the shader compiler is the bottleneck. The performance graphs indicate that the driver is CPU limited quite a bit of the time, and as airlied said there are also some GPU optimizations still to be done.

    LLVM may turn out to be more useful for single-instruction architectures like the Intel and NVidia GPUs, not sure.
    From what I've heard, a real compiler with serious optimization work will be required for decent performance for more modern hardware that can run much more complex shaders. That may not be the case for r500, which is relatively limited. If no one steps up to do the necessary work for VLIW backend support in LLVM, then there's also been talk of just sending gallium TGSI -> LLVM -> TGSI -> and on to the radeon compiler. That would allow serious optimizations to be run on the gallium "bytecode" before returning it for the normal driver to send to the VLIW hardware.
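To make the proposed round-trip concrete, here is a minimal sketch of the idea in Python. Every name is invented for illustration; this is not the actual Mesa/Gallium or LLVM API, just the shape of "lower TGSI to a generic IR, optimize, translate back, then hand off to the driver's own VLIW backend":

```python
# Hypothetical sketch of the TGSI -> LLVM -> TGSI round-trip described
# above. All function names are invented for illustration.

def tgsi_to_llvm(tgsi_tokens):
    """Lower Gallium TGSI 'bytecode' into a generic IR (stand-in for LLVM IR)."""
    return [("op", tok) for tok in tgsi_tokens]

def optimize(llvm_ir):
    """Run target-independent optimizations (here: trivial dead-code removal)."""
    return [instr for instr in llvm_ir if instr[1] != "NOP"]

def llvm_to_tgsi(llvm_ir):
    """Translate the optimized IR back to TGSI for the radeon compiler."""
    return [tok for _, tok in llvm_ir]

def compile_shader(tgsi_tokens):
    # TGSI -> LLVM -> optimize -> TGSI -> (driver's VLIW backend takes over)
    return llvm_to_tgsi(optimize(tgsi_to_llvm(tgsi_tokens)))

print(compile_shader(["MOV", "NOP", "MUL", "NOP", "END"]))
# NOPs stripped: ['MOV', 'MUL', 'END']
```

The point of the scheme is that the generic optimizer never has to know about VLIW instruction packing; only the final radeon backend does.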

  2. #32
    Join Date
    Sep 2009
    Posts
    16

    Default

When using Gallium3D, Compiz crashes when I resize windows heavily.

Greetings

  3. #33
    Join Date
    Mar 2010
    Posts
    27

    Default

Sorry if this is kind of a stupid question:
are the Mesa/Gallium3D drivers trying to do the same thing as radeon, radeonhd and so on?
If yes: why are there so many projects trying to do pretty much the same thing?
And what's the difference between the Mesa/Gallium3D stuff and radeon(hd)?

  4. #34
    Join Date
    May 2007
    Posts
    231

    Default

    Quote Originally Posted by EvilTwin View Post
Sorry if this is kind of a stupid question:
are the Mesa/Gallium3D drivers trying to do the same thing as radeon, radeonhd and so on?
If yes: why are there so many projects trying to do pretty much the same thing?
And what's the difference between the Mesa/Gallium3D stuff and radeon(hd)?
DDX driver: radeon | radeonhd
OpenGL driver: radeon classic or gallium

A DDX driver only accelerates X11 rendering; it doesn't provide GL acceleration of any kind. The OpenGL driver provides OpenGL acceleration on top of a DDX driver, though with Gallium a gallium driver can also replace the DDX.

So no, Mesa/Gallium isn't a duplicated effort.
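For example, which DDX is in use is normally selected in the Device section of xorg.conf (a minimal sketch; BusID and other options omitted):

```
Section "Device"
    Identifier "ATI Card"
    Driver     "radeon"      # or "radeonhd" -- this picks the DDX
EndSection
```

The OpenGL side (classic Mesa vs. gallium) is chosen by which Mesa driver is installed, not by this file.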

  5. #35
    Join Date
    Mar 2010
    Posts
    27

    Default

Thanks!
Again, one less thing I don't know.

  6. #36
    Join Date
    Jul 2007
    Posts
    446

    Default I haven't noticed any improvement with ColorTiling enabled

    Quote Originally Posted by airlied View Post
    I think the final speeds up in order of when they'll get done look like won't be driver code optimisations as much as gpu feature usage:

    1. Color tiling - need to enable by default and fix regressions
I am running F12 with KMS, the xorg-x11-ati driver and Mesa from git, and have just enabled ColorTiling. I can't say that it has made any difference (at all) to Celestia. Celestia still feels "speedy" under r300c, and sluggish under r300g. And I have made sure that I'm using the "OpenGL vertex program" rendering path in both cases.

    This is with my RV350 (Radeon 9550).
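For anyone wanting to try the same thing: color tiling is toggled through the radeon DDX driver option in xorg.conf (assuming the standard radeon driver option; see `man radeon` for your version):

```
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    Option     "ColorTiling" "on"
EndSection
```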

  7. #37
    Join Date
    Aug 2008
    Location
    Tokyo, Japan
    Posts
    36

    Default

    Quote Originally Posted by MostAwesomeDude View Post
    The reason that we avoided stipples is a combination of not many apps using it and the fact that we don't have it working in HW yet. I'll put more effort towards it and see if we can get something working soon.
Meanwhile, the draw module would offer at least an interim solution. The same goes for some other less popular OpenGL features; it should be possible to achieve correctness without any software rasterization (see e.g. the svga driver).

  8. #38
    Join Date
    Mar 2010
    Posts
    5

    Default

    Quote Originally Posted by phoronix View Post
    Phoronix: ATI's Gallium3D Driver Is Still Playing Catch-Up

    Yesterday we delivered benchmarks showing how the open-source ATI Radeon graphics driver stack in Ubuntu 10.04 is comparing to older releases of the proprietary ATI Catalyst Linux driver. Sadly, the latest open-source ATI driver still is no match even for a two or four-year-old proprietary driver from ATI/AMD, but that is with the classic Mesa DRI driver. To yesterday's results we have now added in our results from ATI's Gallium3D (R300g) driver using a Mesa 7.9-devel Git snapshot from yesterday to see how this runs against the older Catalyst drivers.

    http://www.phoronix.com/vr.php?view=14757
About the Warsow benchmark:

Are you sure the game is run the exact same way? The newer fglrx has more OpenGL support (2.0 vs. 1.x?). Warsow might enable more graphics features because the driver exports OpenGL 2.0, or even use a completely different render pipeline that requires OpenGL 2.0.

This would of course skew the benchmarking.

  9. #39
    Join Date
    Oct 2009
    Location
    West Jordan, Utah, USA
    Posts
    53

    Default blender

    Quote Originally Posted by MostAwesomeDude View Post
    I do Blender. I'm kind of fail at it, but I can do some simple stuff.

    The reason that we avoided stipples is a combination of not many apps using it and the fact that we don't have it working in HW yet. I'll put more effort towards it and see if we can get something working soon.

    As far as large vert counts go, we do just fine on that front; if you've got a real-world problem there, let us know.
Since Blender supports Python extensions, some ambitious coder could begin to write some interesting synthetic benchmarks using Blender...

You might learn as much about the strengths and weaknesses of Blender's own data handling and rendering as about the driver, but that would be kind of win-win too.

I wonder what that might look like.
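Roughly, such a benchmark would be a timing harness around a mesh-heavy workload. The sketch below uses a plain-Python stand-in workload so it runs anywhere; inside Blender the `make_vertices` step would instead drive the `bpy` API (the harness shape is an assumption, not an existing Blender benchmark):

```python
import time

def make_vertices(n):
    """Stand-in workload: build n vertex tuples (would be bpy calls in Blender)."""
    return [(i * 0.1, i * 0.2, i * 0.3) for i in range(n)]

def bench(workload, n):
    """Time a workload and return (item count, elapsed seconds)."""
    start = time.perf_counter()
    items = workload(n)
    elapsed = time.perf_counter() - start
    return len(items), elapsed

count, secs = bench(make_vertices, 100_000)
print(f"{count} verts in {secs:.4f}s")
```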
