
Thread: r500 kms performance issues

  1. #11
    Join Date
    Jul 2007
    Posts
    429

    Default Hmm, every gallium test I've done has shown it to be slower.

    Quote Originally Posted by marek View Post
    You might try the R300 Gallium3D driver, it's generally faster than the classic one and has many more features (the feature set is similar to fglrx).
    I've got an AGP RV350 card and a dual P4 Northwood PC, and for simple loads like celestia and OpenGL xscreensavers I am finding that classic Mesa is still beating Gallium hands down. That's for both "performance" and "correctness"; obviously Gallium beats classic Mesa on "features" ;-).

    For example, try right-clicking on the Earth in celestia and rotating it: it's wonderfully responsive with classic Mesa, but drags horribly with Gallium.

    Gallium renders the stars wrongly, too.

    And don't even think of trying to play World of Warcraft using Gallium, although it can be "persuaded" to play more-or-less correctly under classic Mesa if you're prepared to hack it around slightly:
    Code:
    diff --git a/src/mesa/drivers/dri/r300/r300_cmdbuf.c b/src/mesa/drivers/dri/r300
    index c40802a..7f009d9 100644
    --- a/src/mesa/drivers/dri/r300/r300_cmdbuf.c
    +++ b/src/mesa/drivers/dri/r300/r300_cmdbuf.c
    @@ -452,7 +452,7 @@ static void emit_zb_offset(GLcontext *ctx, struct radeon_sta
            uint32_t dw = atom->check(ctx, atom);
     
            rrb = radeon_get_depthbuffer(&r300->radeon);
    -       if (!rrb)
    +       if ((rrb == NULL) || (rrb->cpp == 0))
                    return;
     
            zbpitch = (rrb->pitch / rrb->cpp);
    diff --git a/src/mesa/drivers/dri/radeon/radeon_common.c b/src/mesa/drivers/dri/
    index 13f1f06..e00c995 100644
    --- a/src/mesa/drivers/dri/radeon/radeon_common.c
    +++ b/src/mesa/drivers/dri/radeon/radeon_common.c
    @@ -1126,7 +1126,7 @@ void radeonFlush(GLcontext *ctx)
                    rcommonFlushCmdBuf(radeon, __FUNCTION__);
     
     flush_front:
    -       if ((ctx->DrawBuffer->Name == 0) && radeon->front_buffer_dirty) {
    +       if ((ctx->DrawBuffer != NULL) && (ctx->DrawBuffer->Name == 0) && radeon-
                    __DRIscreen *const screen = radeon->radeonScreen->driScreen;
     
                    if (screen->dri2.loader && (screen->dri2.loader->base.version >=

  2. #12
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,577

    Default

    Was the second part of that diff horizontally truncated?

  3. #13
    Join Date
    Jul 2007
    Posts
    429

    Default Possibly - I'll try again...

    Quote Originally Posted by nanonyme View Post
    Was the second part of that diff horizontally truncated?
    Code:
    diff --git a/src/mesa/drivers/dri/r300/r300_cmdbuf.c b/src/mesa/drivers/dri/r300/r300_cmdbuf.c
    index c40802a..7f009d9 100644
    --- a/src/mesa/drivers/dri/r300/r300_cmdbuf.c
    +++ b/src/mesa/drivers/dri/r300/r300_cmdbuf.c
    @@ -452,7 +452,7 @@ static void emit_zb_offset(GLcontext *ctx, struct radeon_state_atom * atom)
            uint32_t dw = atom->check(ctx, atom);
     
            rrb = radeon_get_depthbuffer(&r300->radeon);
    -       if (!rrb)
    +       if ((rrb == NULL) || (rrb->cpp == 0))
                    return;
     
            zbpitch = (rrb->pitch / rrb->cpp);
    diff --git a/src/mesa/drivers/dri/radeon/radeon_common.c b/src/mesa/drivers/dri/radeon/radeon_common.c
    index 13f1f06..e00c995 100644
    --- a/src/mesa/drivers/dri/radeon/radeon_common.c
    +++ b/src/mesa/drivers/dri/radeon/radeon_common.c
    @@ -1126,7 +1126,7 @@ void radeonFlush(GLcontext *ctx)
                    rcommonFlushCmdBuf(radeon, __FUNCTION__);
     
     flush_front:
    -       if ((ctx->DrawBuffer->Name == 0) && radeon->front_buffer_dirty) {
    +       if ((ctx->DrawBuffer != NULL) && (ctx->DrawBuffer->Name == 0) && radeon->front_buffer_dirty) {
                    __DRIscreen *const screen = radeon->radeonScreen->driScreen;
     
                    if (screen->dri2.loader && (screen->dri2.loader->base.version >= 2)
    I don't know if either is "correct" in any sense other than that they stop Mesa core-dumping when I play WoW.

  4. #14
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,577

    Default

    Both of them make sense anyway: division-by-zero protection and NULL-dereference protection. Did you check that both are needed for it to work, and not just one?

  5. #15
    Join Date
    Jul 2007
    Posts
    429

    Default Yes, but I still think they paper over deeper problems

    Quote Originally Posted by nanonyme View Post
    Both of them make sense anyway: division-by-zero protection and NULL-dereference protection. Did you check that both are needed for it to work, and not just one?
    The "division by zero" protection produces lots of "no rrb" messages on my console log, which makes me suspect that it should really be trying to divide by something non-zero instead. (Some "state" is probably not being set.)

    The "NULL protection" affects the context's clean-up path.

    These bugs have already been raised in FDO's bugzilla as #27199 and #27141 respectively.

  6. #16

    Default

    Quote Originally Posted by marek View Post
    1) KMS is slower because color tiling in DDX is disabled by default. You need to enable it in xorg.conf, see "man radeon". The man page is for UMS so it lies sometimes.
    Why is it disabled? Does it have some known bugs?
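    For reference, enabling it the way marek suggests looks roughly like this in xorg.conf (a sketch: the `Identifier` value is arbitrary, and "ColorTiling" is the option name documented in `man radeon`):

```
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    Option     "ColorTiling" "on"
EndSection
```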

  7. #17
    Join Date
    Jan 2009
    Posts
    607

    Default

    Quote Originally Posted by chrisr View Post
    For example, try right-clicking on the Earth in celestia and rotating it: it's wonderfully responsive with classic Mesa, but drags horribly with Gallium.
    It's absolutely smooth with Gallium here. Are you sure your glxinfo says "Gallium 0.4 on RV350"? If you got "softpipe", you're not using the driver.

    Quote Originally Posted by chrisr View Post
    Gallium renders the stars wrongly, too.
    This is a known issue.

    Quote Originally Posted by chrisr View Post
    And don't even think of trying to play World of Warcraft using Gallium
    Well, the only way to know whether it works is to try it out and see.

  8. #18
    Join Date
    Jan 2009
    Posts
    607

    Default

    Quote Originally Posted by oibaf View Post
    Why is it disabled? Does it have some known bugs?
    Not that I know of. I guess we should enable it by default.

  9. #19
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,577

    Default

    FWIW, it seems to be on by default in Fedora 13 and in the current git head of xf86-video-ati.

  10. #20
    Join Date
    Jul 2007
    Posts
    429

    Default I'm QUITE sure I'm using Gallium with celestia

    Quote Originally Posted by marek View Post
    It's absolutely smooth with Gallium here. Are you sure your glxinfo says "Gallium 0.4 on RV350"?
    Code:
    OpenGL vendor string: X.Org R300 Project
    OpenGL renderer string: Gallium 0.4 on RV350
    OpenGL version string: 2.1 Mesa 7.9-devel
    OpenGL shading language version string: 1.20
    FWIW, my card is AGP with PCI IDs 1002:4153.

    I suspect that celestia is suffering from FDO bug #27297 here, because CPU usage goes through the roof with Gallium: a load average of 1.0, vs ~0.2 with Classic. And that's with me just sitting here watching it.
