NVIDIA RTX 30 Series OpenCL / CUDA / OptiX Compute + Rendering Benchmarks

  • NVIDIA RTX 30 Series OpenCL / CUDA / OptiX Compute + Rendering Benchmarks

    Phoronix: NVIDIA RTX 30 Series OpenCL / CUDA / OptiX Compute + Rendering Benchmarks

    Recently we received from NVIDIA the rest of the RTX 30 series line-up, covering cards we haven't previously been able to benchmark under Linux, so it's been a busy month of Ampere benchmarking for these additional cards and of re-testing the existing parts. Coming up next week is a large NVIDIA vs. AMD Radeon Linux gaming benchmark comparison, while today's article is an extensive look at GPU compute performance for the complete RTX 20 and RTX 30 series line-up under Linux, with tests spanning OpenCL, Vulkan, CUDA, and OptiX RTX across a variety of compute and rendering workloads.


  • #2
    Such a huge gap between 3060 and 3060 Ti.



    • #3
      Originally posted by schmidtbag View Post
      Such a huge gap between 3060 and 3060 Ti.
      Probably caused by the latest driver's mining detection:

      NVIDIA also announced that they will be limiting the mining hash rate of the forthcoming GeForce RTX 3060. When the card launches next week, the drivers will artificially limit performance when Ethereum cryptocurrency mining is detected, cutting it by roughly 50%.
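
      Purely as an illustration of that 50% figure, here is a minimal sketch of the arithmetic being described; it is not NVIDIA's actual driver logic, and the function name and sample hash rate are made up:

      # Illustration only: a hypothetical 50% cap applied when mining is detected.
      # This is NOT NVIDIA's driver code, just the arithmetic from the quote above.
      def effective_hashrate(raw_mhs: float, mining_detected: bool) -> float:
          """Return throughput after the assumed 50% limiter kicks in."""
          return raw_mhs * 0.5 if mining_detected else raw_mhs

      print(effective_hashrate(50.0, mining_detected=True))   # 25.0 MH/s
      print(effective_hashrate(50.0, mining_detected=False))  # 50.0 MH/s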



      • #4
        I'm so glad I bought my two XFX Radeon RX 580 GTS XXX Edition GPUs last August for $170 each, as they now start at $600. An RTX 3060 starts at over $1,000, if you can get one at all. In fact nearly all GPU prices have simply become silly, so I'd hold on to whatever I have until they become sane again.



        • #5
          Originally posted by schmidtbag View Post
          Such a huge gap between 3060 and 3060 Ti.
          From a quick glance at the specs one could argue that a 3060 is roughly 3/4 of a 3060 Ti (#cores, memory bandwidth), and that is what the scores show as well (161/219 = 0.73).

          EDIT - seems like gaming scores scale about the same way. I didn't expect that at first but then I noticed that #ROPs and #TMUs differ even more than #cores and #bandwidth.
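
          A quick back-of-the-envelope check of that ratio, as a minimal sketch; the spec values below (shader counts, memory bandwidth in GB/s) are the commonly published figures and should be treated as assumptions:

          # Rough sanity check of the 3060 vs 3060 Ti gap from published specs.
          specs = {
              "RTX 3060":    {"shaders": 3584, "bandwidth": 360},
              "RTX 3060 Ti": {"shaders": 4864, "bandwidth": 448},
          }

          shader_ratio = specs["RTX 3060"]["shaders"] / specs["RTX 3060 Ti"]["shaders"]
          bw_ratio     = specs["RTX 3060"]["bandwidth"] / specs["RTX 3060 Ti"]["bandwidth"]
          score_ratio  = 161 / 219  # the scores quoted above

          print(f"shader ratio:    {shader_ratio:.3f}")  # ~0.737
          print(f"bandwidth ratio: {bw_ratio:.3f}")      # ~0.804
          print(f"score ratio:     {score_ratio:.3f}")   # ~0.735
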
          Last edited by bridgman; 15 April 2021, 09:35 PM.



          • #6
            The way things are going, I won't be able to buy a mid-tier shitbox GPU until 2025. Even Microcenter is raising the MSRP on these and on CPUs. Their sold-out-in-30-seconds "sale" price is now a made-up MSRP. It's a Mad Max hellscape in computerland.

            I appreciate the benchmarks, but I won't be looking at them and caring for a few years yet.



            • #7
              This is why, as I argued over on the thread about Nvidia's Grace SoC (after noting that Nvidia is partnering with Mediatek to bring RTX 30 Mobile GPUs to Mediatek's SoCs), the PCIe slot paradigm is increasingly archaic and not needed for 98% of global personal computing needs.

              4 reasons.

              1: There is no use in having PCIe slots if you can't afford to put a GPU card in there that will at least give you gaming performance equal to or exceeding a PS5.

              2: You can get better Cyberpunk 2077 quality streaming to a Chromebook through Google's Stadia than on a high-end PC or even a PS5 running locally off disk. This has been verified. So off-board GPUs streaming rendered data are now as performant as having a local GPU in a PCIe slot.

              3: Everything, including GPU cores, will be integrated onto Tiles (Intel) and Chiplets (AMD) and lashed together through either CXL or Infinity Fabric interconnects directly to HBM on die and/or to memory pools either on board or in the SiP where the Tiles or Chiplets reside. In fact there is some speculation that AMD has this in mind for a Zen 4 variant where GPU compute units would be spread across all chiplet units and lashed together through IF links in a zero-copy, cache-coherent manner. So, for instance, say you want 2,048 RDNA or CDNA cores in your 32-core CPU chiplet SiP: spread 64 GPU cores per CPU core, lash them to each CPU via IF with HBM and a pool of RAM, and call it a day (see the rough sketch after this list).

              4: dGPUs are NOT for us anymore. They're for Big Data, coin mining, streaming game services, banking, etc. PC gaming is not where the money is. Gaming is now firmly in the hands of phones, consoles, and streaming, all platforms with ZERO local PCIe slots. The Age of Cheap Performant dGPUs is over.
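
              The arithmetic in point 3, written out; the layout below is purely my own speculation restated as numbers, not anything AMD has announced, and the 8-cores-per-chiplet figure is just the typical Zen CCD size used as an assumption:

              # Speculative arithmetic for point 3; all figures are hypothetical.
              cpu_cores_total   = 32   # CPU cores in the whole SiP
              cores_per_chiplet = 8    # assumed Zen-style CCD size
              gpu_cores_per_cpu = 64   # RDNA/CDNA cores attached per CPU core

              chiplets        = cpu_cores_total // cores_per_chiplet   # 4
              gpu_cores_total = cpu_cores_total * gpu_cores_per_cpu    # 2048
              gpu_per_chiplet = gpu_cores_total // chiplets            # 512

              print(f"{chiplets} chiplets, {gpu_cores_total} GPU cores total, {gpu_per_chiplet} per chiplet")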



              • #8
                Originally posted by Jumbotron View Post
                This is why, as I argued over on the thread about Nvidia's Grace SoC, the PCIe slot paradigm is increasingly archaic and not needed for 98% of global personal computing needs. [...]
                I'd point out that we've reached the point where self-built PCs are dying out. The average end user currently can't afford to build one. Yes, the main barrier right now is the GPU, but don't be surprised if, with rising demand, memory (both RAM and SSDs) and CPU prices skyrocket as well. That means most production will go to OEMs, which can buy in bulk at decent prices, so the only systems customers will be able to buy are from the large OEMs.



                • #9
                  GPU prices are crazy. I bought the ASUS 5700 XT Strix about 1.5 years ago for around €530; now it's unavailable and used cards on eBay go for around €1,000.



                  • #10
                    $200 cards are selling for $1000. It's absurd and won't get any better until 2023. Whatever you have now, you better hope it lasts.
