
NVIDIA GeForce RTX 2080 Ti To GTX 980 Ti TensorFlow Benchmarks With ResNet-50, AlexNet, GoogLeNet, Inception, VGG-16



    Phoronix: NVIDIA GeForce RTX 2080 Ti To GTX 980 Ti TensorFlow Benchmarks With ResNet-50, AlexNet, GoogLeNet, Inception, VGG-16

    For those curious about TensorFlow performance on the newly-released GeForce RTX 2080 series, kicking off this week of Linux benchmarking is a look at the Maxwell, Pascal, and Turing graphics cards in my possession, testing the NGC TensorFlow instance on CUDA 10.0 with the 410.57 Linux driver atop Ubuntu and exploring the performance of various models. Besides the raw performance, the performance-per-Watt and performance-per-dollar are also provided.
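    As a rough illustration of how those two value metrics fall out of the measured throughput, the sketch below simply divides images/second by card price and by average power draw. The throughput, price, and power figures are placeholders, not the article's measured results.

# Toy illustration of performance-per-dollar and performance-per-Watt.
# All numbers below are placeholders, not the article's measured results.
cards = {
    # name: (images_per_sec, price_usd, avg_power_watts)
    "GTX 1080 Ti": (200.0, 700.0, 230.0),
    "RTX 2080 Ti": (330.0, 1200.0, 260.0),
}

for name, (ips, price, watts) in cards.items():
    print(f"{name}: {ips / price:.3f} images/sec per dollar, "
          f"{ips / watts:.2f} images/sec per Watt")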


  • #2
    You know the 2080Ti is priced poorly when it can be as much as 65% faster than a 1080Ti and yet still loses out in performance-per-dollar.

    • #3
      It is not priced poorly. It not only leapfrogs the performance of the previous Nvidia generation, which is still faster than the competition, but also offers new features (tensor cores, ray tracing) that the competition won't get to for at least a year. It reminds me of the blow that Nvidia dealt to the 3dfx Voodoo with its first GeForce, which took hardware acceleration to a whole new level. You have to take into account that these advancements did not come cheap. What we are seeing is the capitalization of the investment that Nvidia has been making in the AI sector for some time now.
      Last edited by zoomblab; 08 October 2018, 09:45 AM.

      • #4
        Originally posted by zoomblab View Post
        It is not priced poorly. It not only leapfrogs the performance of the previous Nvidia generation, which is still faster than the competition, but also offers new features (tensor cores, ray tracing) that the competition won't get to for at least a year. It reminds me of the blow that Nvidia dealt to the 3dfx Voodoo with its first GeForce, which took hardware acceleration to a whole new level. You have to take into account that these advancements did not come cheap. What we are seeing is the capitalization of the investment that Nvidia has been making in the AI sector for some time now.
        Let's be fair: in everything except AI it's a modest bump of 30% or thereabouts, about the minimum you can bump it up and have anyone bat an eye. If it had been priced the same as the 1080 Ti I'd say OK, I can respect that... however it isn't, it doesn't have open source drivers, and in general it's a wonky architecture of diverse coprocessors rather than a single accelerator.

        • #5
          I kinda agree about Turing not being priced poorly; if anything, it's named poorly.

          Take into consideration the die sizes: the 2080 Ti is 754 mm², nearly the size of the Titan V, while the 2080 is 545 mm², bigger even than the 1080 Ti, which is just 471 mm². On die size alone the Founders Edition pricing can already be justified, and the 'regular' MSRP is even a slight price reduction. If Nvidia had named the 2080 Ti 'RTX Titan' instead, then people wouldn't have balked at the $1200 price.

          Of course, the problem now is that if the 2080 had instead been named something like 'RTX 1180 Ti', there would be little to no performance uplift over Pascal, because despite slight improvements in shader IPC, a good chunk of the silicon went into implementing the RT and tensor cores. It'd be a nominal 10% bump, but with ray tracing.

          OTOH now was the best time for Nvidia to do this, because if AMD was competitive they couldn't afford to be wasting die space chasing 'novelty' features.

          • #6
            Michael, thanks for those last charts of performance-per-dollar. Those were the real takeaway from this article. Yeah, it's faster, but it's *more expensive* than it is faster. And that is, as cd88 points out, at the tasks it's supposed to be faster at.

            • #7
              Originally posted by zoomblab View Post
              ... What we are seeing is the capitalization of the investment that Nvidia has been making in the AI sector for some time now.
              aka... milking it.

              Well, I'm ready to call Nvidia on its discrete graphics monopoly and stop this monopolistic bundling of AI/CUDA with graphics. If the equivalent graphics increase is 30% (1080 Ti vs. 2080 Ti), are you getting $200-300+ worth of tensor cores? It's debatable.
              Last edited by audir8; 19 October 2018, 02:00 PM.

              • #8
                Originally posted by phoronix View Post
                The TensorFlow models used were tested at FP16 to see the performance impact of the tensor cores on the new RTX 2080 Ti
                No, they clearly didn't. Turing has both packed fp16 and Tensor cores. It makes sense that packed fp16 would be about 2x as fast as the GTX 1080 Ti, which must be emulating fp16 with fp32. It's pretty clear you either weren't using the tensor cores, or your batch sizes were too small to see the full effect.

                Here's the kind of difference that the Tensor cores should make:

                https://www.anandtech.com/show/12673...ng-deep-dive/8

                Edit: The issue is clearly the batch size. See below.
                Last edited by coder; 09 October 2018, 12:59 AM.
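
                For anyone wanting to see the batch-size effect for themselves, here is a minimal sketch, assuming a reasonably recent TensorFlow 2.x install with tf.keras rather than the NGC container setup used in the article: it times ResNet-50 training throughput under mixed precision at a few batch sizes, since small batches tend to leave the tensor cores underutilized.

# Minimal sketch: ResNet-50 training throughput vs. batch size under mixed
# precision (fp16 compute, fp32 master weights). Assumes TensorFlow 2.x and a
# GPU with tensor cores; numbers are only meant for relative comparison.
import time
import numpy as np
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

def images_per_second(batch_size, steps=20):
    model = tf.keras.applications.ResNet50(weights=None)
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
    # Random data is fine here; we only care about compute throughput.
    x = np.random.rand(batch_size, 224, 224, 3).astype("float32")
    y = np.random.randint(0, 1000, size=batch_size)
    model.fit(x, y, batch_size=batch_size, epochs=2, verbose=0)  # warm-up
    start = time.time()
    model.fit(x, y, batch_size=batch_size, epochs=steps, verbose=0)
    return batch_size * steps / (time.time() - start)

for bs in (16, 32, 64, 128):
    print(f"batch {bs}: {images_per_second(bs):.1f} images/sec")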

                • #9
                  Originally posted by schmidtbag View Post
                  You know the 2080Ti is priced poorly when it can be as much as 65% faster than a 1080Ti and yet still loses out in performance-per-dollar.
                  Hold your assessment of its value until he has some proper benchmarks of the tensor cores. It won't even be close.

                  • #10
                    Originally posted by jihadjoe View Post
                     Take into consideration the die sizes: the 2080 Ti is 754 mm², nearly the size of the Titan V, while the 2080 is 545 mm², bigger even than the 1080 Ti, which is just 471 mm².
                    What's your point? Die sizes often increase from one GPU generation to the next. Usually, the pattern is that they shrink when a new manufacturing node is utilized, then they creep up over the following generations as that node matures. Turing does both, but some people claim that 12 nm should be seen as more of a refinement of 16 nm than as a proper node change.

                    Originally posted by jihadjoe View Post
                     If Nvidia had named the 2080 Ti 'RTX Titan' instead, then people wouldn't have balked at the $1200 price.
                     But they can't, because what would they call the actual RTX Titan card? Compare the specs of the 2080 Ti to the Quadro RTX 6000 to see just how much they're holding back. That should eventually become their next Titan card (except with only 12 GB).

                    Originally posted by jihadjoe View Post
                    now was the best time for Nvidia to do this, because if AMD was competitive they couldn't afford to be wasting die space chasing 'novelty' features.
                    Exactly. I see this as an attempt by Nvidia to capitalize on their graphics lead, by establishing new areas of leadership. When AMD eventually catches them on raw shader and rasterization performance, they will already be way ahead on ray tracing performance and on techniques involving deep learning (DLSS, for instance).

                     What they did makes pretty good sense from a business perspective. They certainly did risk damaging their standing with gamers. But that resentment might start to die down when the Pascal inventory finally dissipates and the holiday sales arrive. If not then, maybe upon the arrival of competitive offerings from AMD.

                    I wonder, if they had it to do over, would they put only the Tensor cores in Turing and sit on the RT cores for one more generation?
