
Thread: AMD R600g Performance Patches Yield Mixed Results

  1. #11
    Join Date
    Jul 2012
    Posts
    627

    Default

Even with the Xonotic results, it's clear that he is on the right track; he "only" needs to figure out what is happening with Xonotic and fix it....

I bet that solving the Xonotic problem in the driver itself will also solve the problem with the several other games affected by this patch....

  2. #12
    Join Date
    Dec 2007
    Posts
    2,395

    Default

    Quote Originally Posted by tmikov View Post
The closed blob's performance advantage is not due to tweaks like this. I have an extremely simple test case which renders a single static rectangular texture, and the open source driver is half the speed. This is the simplest fundamental operation, and alas we are slower than we should be. Until fundamental problems like that are addressed, tweaks here and there for this and that game are not likely to have the expected result.
Make sure you have 2D tiling enabled; otherwise you won't be fully utilizing your memory bandwidth. It's been made the default as of Mesa 9.0 and xf86-video-ati git master. Note that the EGL paths do not properly handle tiling yet.
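(For anyone wanting to check this: on older stacks where 2D tiling is not yet the default, color tiling can be toggled via the radeon driver's ColorTiling options in xorg.conf, as documented in the radeon(4) man page; whether 2D tiling actually takes effect depends on GPU generation and kernel support.)

```
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    Option     "ColorTiling"   "on"
    Option     "ColorTiling2D" "on"
EndSection
```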

  3. #13
    Join Date
    Oct 2012
    Posts
    17

    Default

    Quote Originally Posted by agd5f View Post
Make sure you have 2D tiling enabled; otherwise you won't be fully utilizing your memory bandwidth. It's been made the default as of Mesa 9.0 and xf86-video-ati git master. Note that the EGL paths do not properly handle tiling yet.
    Tiling is enabled, though I don't see a difference when I enable 2D tiling vs 1D.

About EGL: I have applied a simple patch which enables tiling of the frame buffer. With that, it matches the performance of running under X11. In both cases I get 130 FPS, while the blob is at about 220. (So the blob is not actually double the speed as I said earlier, sorry, but it is significantly faster.)

  4. #14
    Join Date
    Jul 2009
    Posts
    9

    Default

Well, guessing from what was changed, I'd say the huge difference appears when the game is complex enough to run out of memory on the graphics card. Since they changed "VRAM|GTT" to just "VRAM", that seems quite likely to be the issue. There should be some code to detect when the card gets close to running out of VRAM and switch those workloads back from "VRAM" to "VRAM|GTT" relocations, while keeping VRAM-only placement for workloads that need less video memory. Another solution would be to somehow monitor which resources are accessed more often and which less often, and place the high-access-count resources in VRAM and the rest in GTT.
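The fallback idea described above can be sketched roughly as follows. This is a hypothetical illustration, not actual radeon driver code: the domain flags, struct, and threshold are made up for the example, and a real implementation would live in the kernel's memory manager rather than a helper like this.

```c
/* Hypothetical sketch of the suggested heuristic: prefer VRAM-only
 * placement, but widen the allowed domains to VRAM|GTT once VRAM
 * pressure crosses a threshold, so the kernel can spill buffers to
 * system memory instead of thrashing evictions. */
#include <stdbool.h>
#include <stdint.h>

#define DOMAIN_VRAM 0x1u
#define DOMAIN_GTT  0x2u

struct vram_state {
    uint64_t total; /* total VRAM in bytes */
    uint64_t used;  /* currently allocated VRAM in bytes */
};

/* Choose placement domains for a new buffer allocation. */
static uint32_t choose_domains(const struct vram_state *vs, uint64_t alloc_size)
{
    /* If this allocation would push VRAM usage past ~90% of the
     * card's memory, allow spilling into GTT as well. */
    bool vram_pressure = (vs->used + alloc_size) * 10 > vs->total * 9;
    return vram_pressure ? (DOMAIN_VRAM | DOMAIN_GTT) : DOMAIN_VRAM;
}
```

The access-frequency variant mentioned at the end would instead track per-buffer usage counters and migrate cold buffers to GTT, which is more precise but needs bookkeeping on every command submission.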

  5. #15
    Join Date
    Oct 2012
    Posts
    294

    Default

    Quote Originally Posted by xception View Post
Well, guessing from what was changed, I'd say the huge difference appears when the game is complex enough to run out of memory on the graphics card. Since they changed "VRAM|GTT" to just "VRAM", that seems quite likely to be the issue. There should be some code to detect when the card gets close to running out of VRAM and switch those workloads back from "VRAM" to "VRAM|GTT" relocations, while keeping VRAM-only placement for workloads that need less video memory. Another solution would be to somehow monitor which resources are accessed more often and which less often, and place the high-access-count resources in VRAM and the rest in GTT.
Wouldn't it be better to always keep it as VRAM|GTT but distribute between the two more intelligently?

  6. #16
    Join Date
    Dec 2009
    Posts
    492

    Default

    Quote Originally Posted by agd5f View Post
Make sure you have 2D tiling enabled; otherwise you won't be fully utilizing your memory bandwidth. It's been made the default as of Mesa 9.0 and xf86-video-ati git master. Note that the EGL paths do not properly handle tiling yet.
    Like, seriously? You need the full bandwidth of a card to display a single texture?!?

  7. #17
    Join Date
    Oct 2008
    Posts
    3,174

    Default

    Quote Originally Posted by bug77 View Post
    Like, seriously? You need the full bandwidth of a card to display a single texture?!?
    That depends on how many fps you want to render. More bandwidth will always give you higher fps.

  8. #18
    Join Date
    Oct 2008
    Posts
    3,174

    Default

    Quote Originally Posted by xception View Post
Well, guessing from what was changed, I'd say the huge difference appears when the game is complex enough to run out of memory on the graphics card. Since they changed "VRAM|GTT" to just "VRAM", that seems quite likely to be the issue. There should be some code to detect when the card gets close to running out of VRAM and switch those workloads back from "VRAM" to "VRAM|GTT" relocations, while keeping VRAM-only placement for workloads that need less video memory. Another solution would be to somehow monitor which resources are accessed more often and which less often, and place the high-access-count resources in VRAM and the rest in GTT.
    That was my guess. The high quality setting is probably using larger textures and running out of memory on the card.

  9. #19
    Join Date
    Dec 2007
    Posts
    2,395

    Default

    Quote Originally Posted by bug77 View Post
    Like, seriously? You need the full bandwidth of a card to display a single texture?!?
    Only if you want maximum performance.

  10. #20
    Join Date
    Dec 2009
    Posts
    492

    Default

    Quote Originally Posted by agd5f View Post
    Only if you want maximum performance.
The guy said "single static rectangular texture". Even if rendered at 1000 fps, does this need any memory bandwidth past the initial upload?
