Fresh 10-Way GeForce Linux Benchmarks With The NVIDIA 367.18 Driver

  • Fresh 10-Way GeForce Linux Benchmarks With The NVIDIA 367.18 Driver

    Phoronix: Fresh 10-Way GeForce Linux Benchmarks With The NVIDIA 367.18 Driver

    In prepping for our forthcoming GeForce GTX 1070 and GTX 1080 Linux benchmarking, I've been running fresh rounds of benchmarks on my large assortment of GPUs, beginning with the GeForce hardware supported by the NVIDIA 367.18 beta driver. Here are the first of those benchmarks with the ten Maxwell/Kepler GPUs I've tested thus far...


  • #2
    Wow... a big step up from my 960 to a 970 in terms of frame rate. In the previous benchmarks, I thought the 960 and 970 were much closer together in performance.



    • #3
      Originally posted by GraysonPeddie View Post
      Wow... a big step up from my 960 to a 970 in terms of frame rate. In the previous benchmarks, I thought the 960 and 970 were much closer together in performance.
      Yep, the 970/980 have roughly 2x the hardware complement of the 950/960.



      • #4
        Originally posted by siavashserver
        It's kinda weird to hear that from an AMD representative
        “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

        - Sun Tzu, The Art of War

        Originally posted by siavashserver
        In my opinion, the 970 being 2 times faster than the 960 is because the 960s tested by Phoronix and other sources are usually 2GB versions hitting the video memory limit. A 970 is almost 1.5 times faster than a 4GB 960, and the 980/Titan series are almost 2 times faster, as mentioned.
        There are a number of tests comparing the 2GB and 4GB versions of both the GTX 960 and the R9 380, and in general you don't see much difference in average frame rates. On Windows you see a big jump going to 4GB of VRAM with Assassin's Creed, but that's about it, although having 4GB definitely helps with minimum frame rate because the game doesn't have to swap textures as much (or at all) during gameplay.
        Last edited by bridgman; 31 May 2016, 01:48 AM.



        • #5
          Why do you think Dota 2 with Vulkan at 4K does not run on the GTX 960? I would only buy 4GB models. Look at the full results link.
          Last edited by Kano; 31 May 2016, 02:01 AM.



          • #6
            Originally posted by siavashserver
            It's kinda weird to hear that from an AMD representative

            In my opinion, the 970 being 2 times faster than the 960 is because the 960s tested by Phoronix and other sources are usually 2GB versions hitting the video memory limit. A 970 is almost 1.5 times faster than a 4GB 960, and the 980/Titan series are almost 2 times faster, as mentioned.
            In Heaven at 1080p you don't even hit the 1GB limit. Even if you did, it would cause swapping and fluctuation, not a generally lower frame rate. It's more of a GPU thing: if additional features like AA and tessellation are enabled, they can choke weaker GPUs badly, resulting in lower frame rates than you would get with simple geometry rendering.
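
            A toy illustration of that distinction, using invented frame times rather than measured data: a GPU-bound bottleneck drags every frame down uniformly, while VRAM swapping shows up as occasional stalls that wreck the minimum frame rate but barely dent the average. A rough sketch in Python:

            ```python
            # Hypothetical frame times in milliseconds -- invented numbers, not benchmark data.
            gpu_bound = [25.0] * 100      # uniformly slow: every frame takes 25 ms
            swap_spikes = [12.5] * 100    # mostly fast frames...
            for i in range(0, 100, 20):
                swap_spikes[i] = 80.0     # ...with a texture-swap stall every 20th frame

            def stats(frame_times_ms):
                avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
                min_fps = 1000.0 / max(frame_times_ms)   # the worst frame sets the minimum FPS
                return avg_fps, min_fps

            print("GPU-bound:   avg %.1f fps, min %.1f fps" % stats(gpu_bound))    # avg 40.0, min 40.0
            print("swap spikes: avg %.1f fps, min %.1f fps" % stats(swap_spikes))  # avg ~63.0, min 12.5
            ```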



            • #7
              Originally posted by siavashserver
              It's kinda weird to hear that from an AMD representative

              In my opinion, the 970 being 2 times faster than the 960 is because the 960s tested by Phoronix and other sources are usually 2GB versions hitting the video memory limit. A 970 is almost 1.5 times faster than a 4GB 960, and the 980/Titan series are almost 2 times faster, as mentioned.
              Just checked both prices: the 960 is almost half the price of the 970... isn't that reason enough?



              • #8
                Originally posted by siavashserver
                318.99/216.99 = 1.47
                Yeah, and after some more discounts and two or three more years it might even reach 1.3 )) Nice.
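
                Taking the thread's numbers at face value (the 1.47 price ratio just quoted, and the earlier "almost 1.5 times faster than a 4GB 960" claim, neither verified here), performance per dollar comes out roughly even:

                ```python
                # Figures as quoted in the thread -- not independently verified.
                price_970, price_960 = 318.99, 216.99
                price_ratio = price_970 / price_960   # ~1.47
                perf_ratio = 1.5                      # "almost 1.5 times faster" claim from earlier posts
                print("price ratio: %.2f" % price_ratio)
                print("perf/dollar, 970 relative to 960: %.2f" % (perf_ratio / price_ratio))  # ~1.02, near parity
                ```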



                • #9
                  Originally posted by bridgman View Post
                  “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

                  - Sun Tzu, The Art of War

                  There are a number of tests comparing the 2GB and 4GB versions of both the GTX 960 and the R9 380, and in general you don't see much difference in average frame rates. On Windows you see a big jump going to 4GB of VRAM with Assassin's Creed, but that's about it, although having 4GB definitely helps with minimum frame rate because the game doesn't have to swap textures as much (or at all) during gameplay.
                  On Linux the texture swapping is more expensive, and I'm guessing it adds frame latencies that you really don't notice in "average FPS" stats. There are quite a few games that are memory intensive and gain from having more VRAM; it's actually the biggest bottleneck on my 2GB 770, and games run fine if I tone down the settings that increase memory usage. That's what made me hold back from upgrading to cards that are twice as expensive, since otherwise they only give small performance improvements that aren't worth the price.



                  • #10
                    Yep, agree completely... I think we are saying the same thing.

                    My point was that you don't tend to see linear scaling between memory size and average FPS, which is what was being claimed. You tend to see less-than-linear scaling for average FPS (if the memory management is decent), and more-than-linear scaling for minimum FPS when the change in available memory crosses over the size of the app's working set.

                    It's obviously more complicated than that, but I was trying not to write a three-page post.
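
                    A minimal sketch of that scaling argument; the working-set size and FPS figures below are invented purely to illustrate the crossover, not benchmark data:

                    ```python
                    # Toy model -- all numbers are made up to illustrate the described behaviour.
                    WORKING_SET_GB = 3.0   # hypothetical size of the app's resident textures/buffers

                    def toy_fps(vram_gb):
                        swapping = vram_gb < WORKING_SET_GB   # below the working set, textures must swap
                        avg_fps = 55.0 if swapping else 60.0  # average barely moves: sub-linear scaling
                        min_fps = 15.0 if swapping else 50.0  # minimum jumps once swapping stops
                        return avg_fps, min_fps

                    for vram in (2.0, 4.0, 8.0):
                        avg, low = toy_fps(vram)
                        print("%g GB VRAM: avg %g fps, min %g fps" % (vram, avg, low))
                    # 2 GB: avg 55 fps, min 15  (working set doesn't fit -> swap stalls)
                    # 4 GB: avg 60 fps, min 50  (VRAM crosses the working set -> big minimum-FPS jump)
                    # 8 GB: avg 60 fps, min 50  (memory beyond the working set buys nothing extra)
                    ```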

