
Thread: AMD Radeon HD 6000 Series Open-Source Driver Becomes More Competitive

  1. #11

    Default

    Quote Originally Posted by krasnoglaz View Post
    I don't understand why test target for drivers are decade old shaderless games or opensource relatively light games like Xonotic. Why not Team Fortress 2 and Dota 2?
    See: http://www.phoronix.com/scan.php?pag...tem&px=MTQxMzY

  2. #12
    Join Date
    Oct 2008
    Posts
    143

    Default

    Quote Originally Posted by krasnoglaz View Post
    I don't understand why test target for drivers are decade old shaderless games or opensource relatively light games like Xonotic. Why not Team Fortress 2 and Dota 2?
    Yes please, absolutely no reason not to try the source benchmark imo.

  3. #13
    Join Date
    Jun 2013
    Posts
    210

    Default

    more Team Fortress 2 benchmarks please

  4. #14
    Join Date
    Dec 2008
    Location
    San Bernardino, CA
    Posts
    231

    Default

    Very interesting article. Would be very interested to see how the HD4000 (r600) compares vs. Catalyst. Lots of people are still running HD3000/4000 hardware.

  5. #15

    Default

    Quote Originally Posted by verde View Post
    more Team Fortress 2 benchmarks please
    See: http://www.phoronix.com/scan.php?pag...tem&px=MTQxMzY

  6. #16
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,209

    Default

    Quote Originally Posted by Michael View Post
    I can't believe you seriously had to repeat yourself twice on the same page of this thread... shows how much people really pay attention.


    Anyways, it's pretty exciting to see these test results. I find it interesting how in terms of GPU performance, it forms a sort of sine wave, where the very low-end cards and the very high-end cards perform the worst. I get the impression the devs focus the most on the mainstream GPUs, since the low-end GPUs aren't good for gaming, and if you want your money's worth for the high-end parts, you're better off using Catalyst.

  7. #17
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,523

    Default

    Quote Originally Posted by Michael View Post
    No, tests are always done with it disabled, as can be seen from the system logs.
    Huh, then the triangle test result is a bit odd, I thought disabling it improved the results in that test a lot?..

  8. #18
    Join Date
    Oct 2008
    Posts
    3,036

    Default

    Quote Originally Posted by Michael View Post
    Since Michael won't provide what everyone wants to see, can somebody here on the forums run the tests on Steam or WINE apps?

    Also, YNOR600SB?

  9. #19
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Quote Originally Posted by schmidtbag View Post
    Anyways, it's pretty exciting to see these test results. I find it interesting how in terms of GPU performance, it forms a sort of sine wave, where the very low-end cards and the very high-end cards perform the worst. I get the impression the devs focus the most on the mainstream GPUs, since the low-end GPUs aren't good for gaming, and if you want your money's worth for the high-end parts, you're better off using Catalyst.
    It's not so much about focusing on mid-range GPUs, it's just that the mid-range GPUs have the least need for hand-tweaking optimization.

    Low end parts tend to run into memory bandwidth and "tiny shader core" bottlenecks (requiring a lot of complex heuristics), high end parts are so fast that they often get CPU limited before they get GPU limited (requiring a lot of tuning to reduce CPU overhead in the driver), while midrange parts tend to be more balanced and less likely to get badly bottlenecked in a single area.
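    The bottleneck breakdown above can be sketched in a few lines. This is purely an illustration of the idea, not driver code; the function name, the timing numbers, and the 15% tolerance are all invented for the example. Given per-frame CPU and GPU times, a frame is GPU-limited when the GPU side clearly dominates (the low-end case), CPU-limited when driver overhead dominates (the high-end case), and balanced otherwise (the mid-range case):

    ```python
    # Hypothetical sketch of the low/mid/high-end bottleneck argument.
    # All names, timings, and the tolerance value are invented for
    # illustration; this is not actual radeon driver code.

    def classify_bottleneck(cpu_ms, gpu_ms, tolerance=0.15):
        """Say which side limits the frame, given per-frame timings in ms."""
        if gpu_ms > cpu_ms * (1 + tolerance):
            return "GPU-limited"   # low-end parts: bandwidth/shader bound
        if cpu_ms > gpu_ms * (1 + tolerance):
            return "CPU-limited"   # high-end parts: driver overhead bound
        return "balanced"          # mid-range: neither side dominates

    # Invented per-frame timings (CPU ms, GPU ms) for each GPU class:
    frames = {
        "low-end":   (4.0, 22.0),  # GPU can't keep up with the CPU
        "mid-range": (6.0, 6.5),   # roughly balanced
        "high-end":  (9.0, 3.0),   # CPU/driver overhead dominates
    }
    for tier, (cpu, gpu) in frames.items():
        print(tier, classify_bottleneck(cpu, gpu))
    ```

    Only the balanced case needs no special-case tuning, which is why the mid-range parts look best relative to Catalyst in these results.
    
    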

  10. #20
    Join Date
    Aug 2012
    Location
    Pennsylvania, United States
    Posts
    1,860

    Default

    Quote Originally Posted by bridgman View Post
    It's not so much about focusing on mid-range GPUs, it's just that the mid-range GPUs have the least need for hand-tweaking optimization.

    Low end parts tend to run into memory bandwidth and "tiny shader core" bottlenecks (requiring a lot of complex heuristics), high end parts are so fast that they often get CPU limited before they get GPU limited (requiring a lot of tuning to reduce CPU overhead in the driver), while midrange parts tend to be more balanced and less likely to get badly bottlenecked in a single area.
    Is radeon then going to become a mess of ifs and IFDEFs, bridgman? All that hand-tuning to get every little ounce of performance out of every card, or are the devs thinking that it's best to keep the code as clean as possible and just go for the 'middle of the road, good for most but not perfect for all' approach?
