An Interesting Difference Between AMD & NVIDIA Linux Drivers When Comparing System Usage

  • #1

    Phoronix: An Interesting Difference Between AMD & NVIDIA Linux Drivers When Comparing System Usage

    When running the tests recently for the NVIDIA Linux Driver 2015 Year-in-Review and How AMD's Proprietary Linux Driver Evolved In 2015, I also ran some extra tests comparing the AMD Radeon Software 15.12 and NVIDIA 358.16 proprietary drivers when looking at their CPU usage, memory consumption, and other system sensors...


  • #2
    Originally posted by phoronix
    In looking at the GPU usage as exposed by each of the respective driver's interfaces, it appears that the Radeon GPU usage bounces around a heck of a lot more than the NVIDIA drivers.
    How did you reach that conclusion? To my eye it bounces around somewhat more, but it's game-dependent. The NVIDIA GPU usage bounces around at times as well, and the AMD GPU usage also stays stable for longer stretches. To back that conclusion up, you would need to do some more analysis on your data; one way to quantify the "bounciness" is sketched below. I do agree that the frequencies/states bounce around a lot more, though.

    In the end, though, I don't really understand what this means: are the AMD drivers just not as good as the NVIDIA drivers at keeping their GPUs busy all the time? Or, conversely, are the AMD drivers better at reclocking? Does this mean that one or the other is more prone to burn out? The numbers! What do they mean!?

    Also, for easier viewing, you could have considered rescaling the horizontal axis so that each benchmark takes up the same amount of space (with an explanation that the AMD benchmarks still lasted longer).
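    A minimal sketch of the kind of analysis meant above, assuming the sensor logs were exported as one-column text files of usage samples (the file names and format are made up for illustration). It computes a simple variability metric per run and plots every run on a normalized time axis, so runs of different lengths take up the same horizontal space:

        # Quantify how much each driver's reported GPU usage "bounces around"
        # instead of eyeballing the plots. File names are hypothetical.
        import numpy as np
        import matplotlib.pyplot as plt

        runs = {
            "amd": "amd_gpu_usage.csv",        # one usage sample (%) per line
            "nvidia": "nvidia_gpu_usage.csv",
        }

        fig, ax = plt.subplots()
        for label, path in runs.items():
            usage = np.loadtxt(path)               # GPU usage samples in percent
            t = np.linspace(0.0, 1.0, len(usage))  # normalized 0..1 time axis
            ax.plot(t, usage, label=label)
            # Standard deviation and mean sample-to-sample change serve as
            # rough "bounciness" metrics.
            jitter = np.abs(np.diff(usage)).mean()
            print(f"{label}: std={usage.std():.1f}%, mean |delta|={jitter:.1f}%")

        ax.set_xlabel("normalized benchmark time")
        ax.set_ylabel("GPU usage (%)")
        ax.legend()
        plt.show()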


  • #3
    This kind of thing should really be measured with open-source utilities, where it can be verified that the same code path is used on both GPUs. Games tend to do things differently, using very different extensions and shaders on different brands. What if the two code paths simply don't have the same memory/CPU usage by nature?


  • #4
    lvlark, without more intensive study it's hard to say which is better. The AMD driver may be better because it is "racing to finish, and down-clocking" faster, thus using less energy while getting the work done just as quickly. Or it could just be "wack." eydee is also 100% on target that the tools used could be following very different code paths, with dramatic behavioral impact; that alone could be the entire issue. A rough back-of-the-envelope comparison of the two strategies is sketched below.
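    To make the "racing to finish" idea concrete, here is a toy calculation with invented power and timing numbers (none of this is measured data): total energy is power integrated over time, so a GPU that burns more power but finishes each frame sooner and then idles can come out ahead of one running steadily at a middle clock.

        # Toy race-to-idle comparison; all numbers are invented for illustration.
        def frame_energy(busy_power_w, busy_time_s, idle_power_w, idle_time_s):
            """Energy in joules spent over one frame interval."""
            return busy_power_w * busy_time_s + idle_power_w * idle_time_s

        frame_budget = 1 / 60  # about 16.7 ms per frame at 60 FPS

        # Strategy A: race to idle -- high clocks, short busy period, then idle.
        race = frame_energy(200.0, 0.006, 20.0, frame_budget - 0.006)

        # Strategy B: steady middle clock -- lower power, busy the whole frame.
        steady = frame_energy(120.0, frame_budget, 0.0, 0.0)

        print(f"race-to-idle: {race:.2f} J/frame, steady clock: {steady:.2f} J/frame")
        # With these made-up numbers race-to-idle wins (~1.41 J vs 2.00 J), but
        # flip the numbers and the steady strategy wins; only measurement can tell.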


  • #5
    On Windows 10 my R9 290 never shows such an erratic clock speed while gaming. It stays at the maximum frequency unless I hit a loading screen or the like.

    So these results could be very revealing. Alternatively, they could be the result of thermal throttling, since Michael uses the original reference-model cooler (which is inadequate for Hawaii). But a driver issue seems more likely, since the Fury got similar results. One quick way to check for throttling is sketched below.
    Last edited by humbug; 06 January 2016, 09:58 AM.
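    A minimal sketch of how one could check the thermal-throttling theory with the Catalyst driver: poll the temperature and the current clocks together while a benchmark runs, and see whether clock drops line up with high temperatures. It assumes fglrx's aticonfig utility is available; since the output format varies between driver versions, the raw text is simply logged with a timestamp.

        # Poll GPU temperature and clocks once per second (Ctrl-C to stop).
        import subprocess
        import time

        while True:
            temp = subprocess.run(["aticonfig", "--odgt"],
                                  capture_output=True, text=True).stdout.strip()
            clocks = subprocess.run(["aticonfig", "--odgc"],
                                    capture_output=True, text=True).stdout.strip()
            print(f"[{time.strftime('%H:%M:%S')}]\n{temp}\n{clocks}\n")
            time.sleep(1)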


  • #6
    It would be nice to have a plot of CPU usage normalized by the current frame rate. That would give a little more insight into the efficiency of the CPU part of the driver, although some frame-rate-independent constant overhead would still remain. Something along the lines of the sketch below.
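    A minimal sketch of that normalization, assuming CPU usage and frame rate were sampled at the same moments and exported to a two-column CSV (the file name and format are made up). Dividing the sampled CPU usage by the concurrent frame rate gives an approximate "CPU % per frame" efficiency curve:

        # Normalize driver CPU usage by frame rate; lower is more efficient.
        import numpy as np
        import matplotlib.pyplot as plt

        # Hypothetical aligned samples: column 0 = CPU usage (%), column 1 = FPS.
        data = np.loadtxt("driver_samples.csv", delimiter=",")
        cpu, fps = data[:, 0], data[:, 1]

        per_frame = cpu / np.maximum(fps, 1.0)  # avoid dividing by ~0 on load screens

        plt.plot(per_frame)
        plt.xlabel("sample")
        plt.ylabel("CPU usage per frame (% / FPS)")
        plt.show()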


  • #7
    Originally posted by c0derbear
    lvlark, without more intensive study it's hard to say which is better. The AMD driver may be better because it is "racing to finish, and down-clocking" faster, thus using less energy while getting the work done just as quickly. Or it could just be "wack." eydee is also 100% on target that the tools used could be following very different code paths, with dramatic behavioral impact; that alone could be the entire issue.
    That's pretty much what I thought. With proprietary drivers, I imagine it's nigh impossible to work out where those differences come from. Still, it would be nice to hear some thoughts on the significance of these differences, without it turning into an AMD vs NVIDIA flamewar.


  • #8
    I was about to say roughly the same things.

    The conclusion is no real conclusion, and the article says little about the origin of the discrepancies, if there are any at all. The different code paths could also account for the varying memory footprint; I doubt that a driver alone could produce such different memory usage, so the application has to be doing something differently.

    This article reads like the first 10% of something that needs a lot more research before any definitive conclusion can be drawn.


  • #9
    Originally posted by lvlark

    That's pretty much what I thought. With proprietary drivers, I imagine it's nigh impossible to work out where those differences come from. Still, it would be nice to hear some thoughts on the significance of these differences, without it turning into an AMD vs NVIDIA flamewar.
    If you want to split hairs, yes, it's very difficult to determine.
    As I see it, the red lines are way jumpier, and that's the experience for the end user.


  • #10
    Originally posted by phoronix
    Phoronix: An Interesting Difference Between AMD & NVIDIA Linux Drivers When Comparing System Usage

    When running the tests recently for the NVIDIA Linux Driver 2015 Year-in-Review and How AMD's Proprietary Linux Driver Evolved In 2015, I also ran some extra tests comparing the AMD Radeon Software 15.12 and NVIDIA 358.16 proprietary drivers when looking at their CPU usage, memory consumption, and other system sensors...

    http://www.phoronix.com/scan.php?pag...M-Binary-Blobs
    It would be very informative to see the difference between an AMD GPU fixed at its maximum clocks and the same AMD GPU with normal reclocking.

    NVIDIA's utilities let the user pin the GPU to its maximum performance level, but as far as I know that functionality is missing from AMD's Radeon Software. Something like the sketch below should work on the NVIDIA side.
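    A minimal sketch of pinning an NVIDIA GPU to its highest performance level from a script, using the driver's nvidia-settings utility. Setting GPUPowerMizerMode=1 requests "Prefer Maximum Performance"; whether it truly locks the top clock can depend on the GPU and driver version, so treat this as a starting point rather than a guarantee:

        # Request maximum-performance clocks from the NVIDIA driver, then
        # read the current clocks back to confirm the effect.
        import subprocess

        GPU = "[gpu:0]"  # first GPU; adjust for multi-GPU systems

        subprocess.run(["nvidia-settings", "-a", f"{GPU}/GPUPowerMizerMode=1"],
                       check=True)

        out = subprocess.run(["nvidia-settings", "-q", f"{GPU}/GPUCurrentClockFreqs"],
                             capture_output=True, text=True, check=True)
        print(out.stdout.strip())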
