PCI Express 6.0 Announced For Release In 2021 With 64 GT/s Transfer Rates

  • PCI Express 6.0 Announced For Release In 2021 With 64 GT/s Transfer Rates

    Phoronix: PCI Express 6.0 Announced For Release In 2021 With 64 GT/s Transfer Rates

    While PCI Express 4.0 has up to this point only been found in a few systems like Talos' POWER9 platforms, and is coming soon with the new AMD graphics cards and chipsets, the PCI SIG today announced PCI Express 6.0...


  • #2
    I think they should probably hit the pause button on doubling the bandwidth every generation. PCIe 4.0 already requires extra board layers to implement properly, making motherboards far more expensive than in the past. I can only imagine what will happen with 5.0 and 6.0.



    • #3
      Originally posted by betam4x View Post
      I think they should probably hit the pause button on doubling the bandwidth every generation. PCIe 4.0 already requires extra board layers to implement properly, making motherboards far more expensive than in the past. I can only imagine what will happen with 5.0 and 6.0.
      That's mostly bunk mobo marketing phooey... PCIe 3.1 can run across a 25ft cable with virtually no performance loss. It is likely that the same is true for PCIe 4.0... if anything they are just tightening up the margins on vendors so they can get the performance out of the silicon that is already there.

      Slow I/O has been the primary bottleneck of most computers for ages. PCIe bandwidth isn't a problem for GPUs these days because developers know this and work hard to minimize CPU-GPU chatter... that said, a quadrupling of bandwidth will relax that and enable developers to do things they couldn't do before. An x16 GPU on PCIe 6.0 will have 128 GB/s of bandwidth to the system... at speeds that high you can probably just leave large amounts of graphics resources sitting on an SSD instead of having to preload or lazy-load them as needed.

      Realistically, if you extrapolate some seat-of-the-pants numbers: the going rate for 6-layer ATX boards in a run of 700 is about $9.30 each, and if each extra pair of layers adds roughly 20% to the cost, a ~26-layer board works out to $9.30 × 1.2^10 ≈ $58 in low volumes (quick sketch of the math at the end of this post)... and motherboards are made by the thousands and tens of thousands. That's being extreme of course, as AM4 motherboards typically have only 4-6 layers and HEDT boards might have 10-15 layers.

      The MSI MEG X570 board only has 8 layers... this is nothing cost-wise, as that board should cost something like $11-15 as a baseline, and since it probably has beefed-up copper and such, maybe $20?
      Last edited by cb88; 18 June 2019, 06:39 PM.
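
      For what it's worth, here is the quick sketch referenced above of that layer-count cost extrapolation. The ~$9.30 baseline for a 6-layer ATX board and the 20%-per-extra-layer-pair scaling are cb88's seat-of-the-pants figures, not published pricing:
      Code:
# Rough sketch of the layer-count cost extrapolation above.
# Assumptions (seat-of-the-pants figures from the post, not real quotes):
#   - 6-layer ATX board in a ~700-board run: ~$9.30 each
#   - each additional pair of layers adds ~20% to the bare-board cost
BASE_LAYERS = 6
BASE_COST = 9.30               # USD per board, low volume
FACTOR_PER_EXTRA_PAIR = 1.20   # +20% per extra pair of layers

def board_cost(layers: int) -> float:
    """Estimated bare-board cost for a given layer count."""
    extra_pairs = (layers - BASE_LAYERS) // 2
    return BASE_COST * FACTOR_PER_EXTRA_PAIR ** extra_pairs

for layers in (6, 8, 12, 16, 26):
    print(f"{layers:2d} layers: ~${board_cost(layers):5.2f}")

# 26 layers works out to 9.30 * 1.2**10, roughly $58 per board in low
# volume, which is still small next to the retail price of a high-end
# motherboard; that is the point being made in the post.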



      • #4
        Originally posted by cb88 View Post

        That's mostly bunk mobo marketing phooey... PCIe 3.1 can run across a 25ft cable with virtually no performance loss. It is likely that the same is true for PCIe 4.0... if anything they are just tightening up the margins on vendors so they can get the performance out of the silicon that is already there.

        Slow I/O has been the primary bottleneck of most computers for ages. PCIe bandwidth isn't a problem for GPUs these days because developers know this and work hard to minimize CPU-GPU chatter... that said, a quadrupling of bandwidth will relax that and enable developers to do things they couldn't do before. An x16 GPU on PCIe 6.0 will have 128 GB/s of bandwidth to the system... at speeds that high you can probably just leave large amounts of graphics resources sitting on an SSD instead of having to preload or lazy-load them as needed.

        Realistically, if you extrapolate some seat-of-the-pants numbers: the going rate for 6-layer ATX boards in a run of 700 is about $9.30 each, and if each extra pair of layers adds roughly 20% to the cost, a ~26-layer board works out to $9.30 × 1.2^10 ≈ $58 in low volumes... and motherboards are made by the thousands and tens of thousands. That's being extreme of course, as AM4 motherboards typically have only 4-6 layers and HEDT boards might have 10-15 layers.

        The MSI MEG X570 board only has 8 layers... this is nothing cost-wise, as that board should cost something like $11-15 as a baseline, and since it probably has beefed-up copper and such, maybe $20?
        Doubling the frequency can have a profound impact.



        • #5
          How can they design PCIe 6.0 to be backwards compatible with PCIe 5.0 and 4.0 when few vendors have even implemented PCIe 4.0 yet? Who even wants to bother implementing PCIe 4.0 now, knowing it's already obsolete twice over?



          • #6
            Originally posted by microcode View Post

            Doubling the frequency can have a profound impact.
            This isn't a doubling of the frequency, it's a doubling of the bandwidth. Anandtech has a good article on this: https://www.anandtech.com/show/14559...o-land-in-2021


            Originally posted by chroma View Post
            How can they design PCIe 6.0 to be backwards compatible with PCIe 5.0 and 4.0 when few vendors have even implemented PCIe 4.0 yet? Who even wants to bother implementing PCIe 4.0 now, knowing it's already obsolete twice over?
            Because it's NOT obsolete. The PCIe 5.0 spec was only recently ratified, and the hardware/chip makers are all part of that process and know the timelines. They're committed to PCIe 4.0 because otherwise they'd be left behind for another 2-4 years while everyone else implements it this year/next year.



            • #7
              Originally posted by microcode View Post

              Doubling the frequency can have a profound impact.
              I must have missed the part that mentioned clock frequency. Right now, unless you have access to the specification, which is members-only, all I've seen is a bunch of best-case numbers thrown out by PCI-SIG's marketing department.

              Also, there's not a linear relationship between performance and clock speed in most applications. Just because PCI-SIG advertises doubling the throughput doesn't mean they've gotten there by doubling the clock frequency on the entire bus.

              But you're right, doubling a line frequency can have a profound impact on the underlying materials and on the E&M and thermal problems in an electrical system. It also doesn't follow that the OP's estimates are correct, because once you change the electrical requirements, the materials and layouts that work on today's boards don't necessarily carry over to future ones. There's a fundamental difference between the materials in a top-of-the-line server board and an enthusiast or OEM board. The top-end board is likely to use gold-plated contacts and heavier, higher-grade copper and dielectrics throughout, and will generally perform better electrically. As you go down in price, the higher-grade materials are replaced with cheaper ones where electrically and thermally feasible, and sometimes even where it's not. You can guess which board is more likely to come close to the theoretical maximums in the PCI-SIG specs, all else being equal. Past a certain point the cheaper materials simply can't meet the electrical and thermal requirements, and more expensive ones become mandatory, with correspondingly different price characteristics.
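
              Not something spelled out in the post above, but one concrete way to put a number on that "profound impact" of higher line frequencies: skin depth in copper shrinks with frequency, so the signal current crowds into an ever-thinner surface layer of the trace and losses rise. A rough back-of-the-envelope sketch, using textbook copper values and treating the NRZ fundamental as roughly half the transfer rate:
              Code:
# Rough illustration (not from the post): copper skin depth vs. frequency.
# delta = sqrt(rho / (pi * f * mu0 * mu_r))
import math

RHO_CU = 1.68e-8           # ohm*m, copper resistivity at room temperature
MU0 = 4 * math.pi * 1e-7   # H/m, vacuum permeability
MU_R = 1.0                 # copper is essentially non-magnetic

def skin_depth_um(freq_hz: float) -> float:
    """Skin depth in micrometres at the given frequency."""
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU0 * MU_R)) * 1e6

# NRZ fundamental is ~half the transfer rate: ~4 GHz for PCIe 3.0 (8 GT/s),
# ~8 GHz for PCIe 4.0 (16 GT/s), ~16 GHz for PCIe 5.0 (32 GT/s).
for label, f_hz in [("PCIe 3.0 (~4 GHz)", 4e9),
                    ("PCIe 4.0 (~8 GHz)", 8e9),
                    ("PCIe 5.0 (~16 GHz)", 16e9)]:
    print(f"{label}: skin depth ~ {skin_depth_um(f_hz):.2f} um")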



              • #8
                Originally posted by stormcrow View Post

                I must have missed the part that mentioned clock frequency. Right now, unless you have access to the specification, which is members-only, all I've seen is a bunch of best-case numbers thrown out by PCI-SIG's marketing department.

                Also, there's not a linear relationship between performance and clock speed in most applications. Just because PCI-SIG advertises doubling the throughput doesn't mean they've gotten there by doubling the clock frequency on the entire bus.

                But you're right, doubling a line frequency can have a profound impact on the underlying materials and on the E&M and thermal problems in an electrical system. It also doesn't follow that the OP's estimates are correct, because once you change the electrical requirements, the materials and layouts that work on today's boards don't necessarily carry over to future ones. There's a fundamental difference between the materials in a top-of-the-line server board and an enthusiast or OEM board. The top-end board is likely to use gold-plated contacts and heavier, higher-grade copper and dielectrics throughout, and will generally perform better electrically. As you go down in price, the higher-grade materials are replaced with cheaper ones where electrically and thermally feasible, and sometimes even where it's not. You can guess which board is more likely to come close to the theoretical maximums in the PCI-SIG specs, all else being equal. Past a certain point the cheaper materials simply can't meet the electrical and thermal requirements, and more expensive ones become mandatory, with correspondingly different price characteristics.
                From PCIe 5.0 to 6.0, according to https://www.anandtech.com/show/14559...o-land-in-2021 it's not a frequency doubling, it's a switch to multi-level (PAM4) signaling. Like MLC SSD cells, a single voltage sample takes one of four levels, each encoding a different 2-bit value.
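
                As a rough sketch of the arithmetic behind that: PCIe 6.0 reaches 64 GT/s per lane by sending 2 bits per PAM4 symbol, so the symbol rate stays at roughly the same ~32 GBaud as PCIe 5.0's NRZ signaling instead of the line rate doubling again. The encoding-efficiency figures below are the usual published ones, and PCIe 6.0's FLIT/FEC overhead is ignored for simplicity:
                Code:
# Per-direction x16 bandwidth, back-of-the-envelope.
# PCIe 6.0's FLIT/FEC overhead is ignored, so its number is the raw
# 128 GB/s figure that gets quoted in headlines.
GENS = [
    # (generation, GT/s per lane, encoding efficiency, signaling)
    ("3.0", 8,  128 / 130, "NRZ, 1 bit/symbol"),
    ("4.0", 16, 128 / 130, "NRZ, 1 bit/symbol"),
    ("5.0", 32, 128 / 130, "NRZ, 1 bit/symbol"),
    ("6.0", 64, 1.0,       "PAM4, 2 bits/symbol"),
]
LANES = 16

for gen, gt_per_lane, efficiency, signaling in GENS:
    gbytes = gt_per_lane * LANES * efficiency / 8   # GT/s -> GB/s
    print(f"PCIe {gen}: x16 ~ {gbytes:5.1f} GB/s per direction ({signaling})")

# 64 GT/s at 2 bits per symbol is ~32 GBaud, i.e. the same symbol rate as
# PCIe 5.0 at 32 GT/s NRZ; bandwidth doubles without doubling the line
# frequency again.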



                • #9
                  Originally posted by Drizzt321 View Post

                  From PCIe 5.0 to 6.0, according to https://www.anandtech.com/show/14559...o-land-in-2021 it's not a frequency doubling, it's a switch to multi-level (PAM4) signaling. Like MLC SSD cells, a single voltage sample takes one of four levels, each encoding a different 2-bit value.
                  Which means you are going to need additional shielding to prevent EMI, because otherwise the slightest bit of interference can cause errors. That's likely also why they're including error correction. As for the 25 ft cable mentioned earlier, I've only seen that done in a Linus Tech Tips video, and only once, with a GPU. GPUs don't saturate the PCIe bus. Try doing that with an NVMe drive that saturates a PCIe x4 interface, or with a card that is sensitive to latency and interference. Even Thunderbolt, a high-speed interface made for EXTERNAL use, is subject to interference.
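
                  To put a number on that noise-margin worry (this is a generic PAM4 property, not something from the PCI-SIG announcement): squeezing four levels into the same voltage swing leaves each eye roughly a third of the NRZ eye height, about a 9.5 dB penalty that equalization and the added error correction have to claw back:
                  Code:
# Generic PAM4-vs-NRZ eye-height comparison (not PCIe-specific numbers).
import math

# With the same peak-to-peak swing, PAM4 packs 4 levels (3 eyes) into the
# space NRZ uses for a single eye, so each eye is ~1/3 the height.
eye_ratio = 1 / 3
penalty_db = 20 * math.log10(1 / eye_ratio)
print(f"PAM4 eye height vs NRZ: {eye_ratio:.2f}x (~{penalty_db:.1f} dB less margin)")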



                  • #10
                    Originally posted by Drizzt321 View Post
                    Because it's NOT obsolete. The PCIe 5.0 spec was only recently ratified, and the hardware/chip makers are all part of that process and know the timelines. They're committed to PCIe 4.0 because otherwise they'd be left behind for another 2-4 years while everyone else implements it this year/next year.
                    I suppose it's a matter of perspective. I'll be happy to see any of these hit the market, but with three generations announced, I naturally want the fastest of the three and would generally wait for it, yet PCIe 6.0 is not going to be available any time soon. The other thing is that the experience of implementing PCIe 4.0 is not going to inform the standard for PCIe 5.0, nor will real-world PCIe 5.0 experience inform the standard for PCIe 6.0. This seems like a strange way to run a railroad, but it's still better than what's happened with USB 3.0, 3.1, and 3.2 nomenclature. At least the PCI-SIG is bothering to increment the major rev number, so it's moderately less confusing to casual consumers. c__c

