
Thread: Open-Source ATI R600/700 Mesa 3D Performance

  1. #101
    Join Date
    Dec 2007
    Posts
    2,404

    Default

IIRC, gnome-shell uses a fair amount of glReadPixels, which isn't currently accelerated. However, it should be possible to use the new blit code to accelerate it.

  2. #102
    Join Date
    Mar 2010
    Posts
    8

    Default

    Quote Originally Posted by agd5f View Post
IIRC, gnome-shell uses a fair amount of glReadPixels, which isn't currently accelerated. However, it should be possible to use the new blit code to accelerate it.
    Right. Does that explain why it works with DRI2 and not with DRI? Coding for 3D isn't my cup of tea. Also, would this be a priority, given that gnome-shell will become the default GNOME option soon?

  3. #103
    Join Date
    Aug 2009
    Location
    south east
    Posts
    344

    Default

Put 1000 wallpaper images in /usr/share/backgrounds and open up the desktop preferences. You'll be waiting a while as it loops through its load/scale algorithm.

Just saying, Gnome's got slag in the welds.

  4. #104
    Join Date
    Mar 2010
    Posts
    8

    Default

    Quote Originally Posted by squirrl View Post
Put 1000 wallpaper images in /usr/share/backgrounds and open up the desktop preferences. You'll be waiting a while as it loops through its load/scale algorithm.

    Just saying, Gnome's got slag in the welds.
True, but that doesn't really have much to do with the lack of driver support for something that is about to become a standard feature. That's just the GNOME guys being lazy and not caching thumbnails.
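As an aside, the thumbnail caching the post above alludes to can be sketched roughly like this. This is a minimal illustration, not GNOME's actual code; the function names, the cache directory, and the mtime-based invalidation are all assumptions:

```python
import hashlib
import os

def thumbnail_path(image_path, cache_dir):
    """Derive a stable cache filename from the source image path."""
    key = hashlib.md5(image_path.encode()).hexdigest()
    return os.path.join(cache_dir, key + ".png")

def get_thumbnail(image_path, scale, cache_dir="/tmp/thumb-cache"):
    """Return a cached thumbnail if it is at least as new as the source
    image; otherwise run the expensive load/scale step once and cache
    the result.  `scale` is a callable that takes the source path and
    returns the scaled image bytes."""
    os.makedirs(cache_dir, exist_ok=True)
    cached = thumbnail_path(image_path, cache_dir)
    if (os.path.exists(cached)
            and os.path.getmtime(cached) >= os.path.getmtime(image_path)):
        return cached              # cache hit: no load/scale work at all
    data = scale(image_path)       # cache miss: do the slow work once
    with open(cached, "wb") as f:
        f.write(data)
    return cached
```

With a scheme like this, opening the preferences dialog a second time costs one stat per wallpaper instead of a full decode and rescale.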

  5. #105
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by agd5f View Post
IIRC, gnome-shell uses a fair amount of glReadPixels, which isn't currently accelerated. However, it should be possible to use the new blit code to accelerate it.
    Is it doing this purely for picking? I recall that being part of the code in Clutter when I looked at it some time back.

    If so, I really hope they just fix this soon. There are much faster ways of doing this. They take a little more work, but obviously if performance matters (it should) the work is worth it. Using the GPU for picking is a neat trick, but it's one of the ones that belongs more in academia than in real-world code, at least on current architectures. Point-to-polygon collision is not particularly difficult nor expensive (even with more advanced collision culling algorithms, which themselves are only really necessary if the 2D scene has a large number of clickable regions). If it's possible in a high-end real-time simulation, it's possible in a low-level 2.5D UI framework. I'm sure it seemed easier and cheaper to just use the GPU trick, but if it's being a problem... fix it.

    At the very least have a software-only fallback for systems where GPU picking is obviously too slow, or GPU memory is too precious for any extraneous FBOs, or cases where the pickable objects are few enough that the raw I/O and GPU context switch overhead of GPU picking swamps the simple transformations and collision detection algorithm execution time. The vast majority of useful 2D elements are rectangles, which even with basic non-crazy transformations end up being trapezoids, and anything more complex than that is probably not something that needs to be (or even should be) clickable anyway. Likewise, pixel-perfect picking is totally unnecessary; any UI that actually requires that is a UI I don't ever want to have to use.

... and if the glReadPixels is being used for something else legitimate, ignore me.
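The software fallback argued for above, point-to-polygon collision against the transformed bounds of each actor, can be sketched like this. This is a minimal illustration under my own assumptions (names like `pick` and the actor list layout are hypothetical, not Clutter's API); it handles the rectangles-become-trapezoids case via a generic convex-polygon test:

```python
def point_in_convex_poly(px, py, poly):
    """Test whether (px, py) falls inside a convex polygon given as a
    list of (x, y) vertices in consistent winding order.  The point is
    inside iff the cross product of each edge with the edge-to-point
    vector has the same sign for every edge."""
    sign = 0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

def pick(px, py, actors):
    """Return the topmost actor containing the point.  `actors` is a
    list of (name, polygon) pairs in back-to-front paint order, so we
    walk it in reverse to honor stacking."""
    for name, poly in reversed(actors):
        if point_in_convex_poly(px, py, poly):
            return name
    return None
```

Even naively, this is a handful of multiplications per actor on the CPU, with no FBO, no context switch, and no GPU readback stall.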

And it no longer matters who is right or wrong; the point is that it really isn't relevant.
Thanks for stepping in; never mind, I stopped replying to him some time ago. So long as the correct information is there, so that other visitors to the thread don't see his rambling and take it as truth (which, sadly, is probably what happened to him on some other forum in the past, and is why letting misinformation go uncorrected is irresponsible to the community as a whole), I'm happy. Well, less irritated than before, anyway.

  6. #106
    Join Date
    Mar 2010
    Posts
    8

    Default

From what I read, they were using the color value to determine what you're clicking on (I think it was alpha).
    It's a neat trick when it's accelerated, especially on small hardware like netbooks, but when it isn't accelerated it causes really weird things to happen. I still don't think that's the cause of the exceedingly poor performance under UMS, though. I think it could be falling back to software rasterization; I'm not 100% sure that's possible, but that's what it felt like. Anybody know more about the Clutter API and how it chooses renderers?

Anyway, I'm looking forward to 7.9. I'm running 7.8 git builds and they rock so hard it's not funny.

  7. #107
    Join Date
    Jul 2009
    Posts
    416

    Default

    Just saw this commit today:
    radeon/r200/r600: enable HW accelerated gl(Read/Copy/Draw)Pixels
    http://cgit.freedesktop.org/mesa/mes...a09fad369706cf

    I'm just wondering what this means. I saw another commit yesterday that implemented the gl*Pixels functions, but they weren't enabled in that commit.

    Will this dramatically increase performance across the board, or increase compatibility? Or is it just something specific to certain edge cases?

  8. #108
    Join Date
    Nov 2009
    Posts
    379

    Default

I find it funny that Radeon users always live in the future.

When we had 2.6.32 with the 6.12.1 driver, we were waiting for 2.6.33 to get KMS working. Now that 2.6.33 and 6.12.5 are released, we're waiting for 2.6.34 and 6.13 to get DRI2 support.

Anyway, it's a good thing! I also get to see all the improvements, from not being able to properly render images in 2.6.31 to, maybe, proper 3D support and CrossFire in the future.

  9. #109
    Join Date
    Oct 2007
    Posts
    1,325

    Default

    Well, I know glxgears isn't the best benchmark, but DRI2/KMS is now close to the old UMS fps number in Debian sidux with drm/xf86-ati/mesa from git, using the latest 2.6.33 kernel. Previously, KMS wasn't throttling my CPU at all, and now it throttles a bit and the glxgears score is much closer to the UMS days (instead of being half the speed). I think the devs eliminated at least one of the bottlenecks in the recent commits - good work, folks!

    BTW, RV710/RadeonHD 4550 here for those curious.
