PCI Express 4.0 Is Ready, PCI Express 5.0 In 2019

  • PCI Express 4.0 Is Ready, PCI Express 5.0 In 2019

    Phoronix: PCI Express 4.0 Is Ready, PCI Express 5.0 In 2019

    The PCI-SIG has announced the finalized PCI Express 4.0 specifications and has laid out early details about PCI Express 5.0...


  • #2
    PCI Express 5.0 is slated for release in 2019 and will deliver 32 GT/s of bandwidth. These are big boosts over PCI Express 3.0, which was finalized back in 2010 at 8 GT/s.
    It's still below what Moore's law would imply. Then again, PCIe isn't 100% about computing power, which is what Moore's law is about.
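    To put those transfer rates in perspective, here's a back-of-the-envelope sketch of usable per-lane throughput across generations. The transfer rates and encoding schemes (8b/10b for 1.x/2.0, 128b/130b for 3.0 onward) are the publicly quoted figures; real-world throughput is a bit lower still due to protocol overhead.

    ```python
    # Rough per-lane throughput for each PCIe generation.
    # Transfer rate is in GT/s; 1.x/2.0 use 8b/10b encoding,
    # 3.0 and later use 128b/130b.
    GENERATIONS = {
        "1.0": (2.5, 8 / 10),
        "2.0": (5.0, 8 / 10),
        "3.0": (8.0, 128 / 130),
        "4.0": (16.0, 128 / 130),
        "5.0": (32.0, 128 / 130),
    }

    def lane_gbps(gen: str) -> float:
        """Usable throughput of one lane in GB/s (payload bits / 8)."""
        rate, efficiency = GENERATIONS[gen]
        return rate * efficiency / 8

    for gen in GENERATIONS:
        print(f"PCIe {gen}: {lane_gbps(gen):.2f} GB/s per lane, "
              f"{16 * lane_gbps(gen):.1f} GB/s for x16")
    ```

    So a PCIe 5.0 x16 slot lands at roughly 63 GB/s each way, and each generation since 3.0 exactly doubles the previous one.
    
    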



    • #3
      Yawn, it's getting into "ridiculous overkill" bandwidth territory. I like the fact that this means even smaller connectors/cables can run a GPU properly (currently with x4 lanes of PCIe 3.0 you can game fine), and the fact that there is already a new external PCIe cable standard supported.

      Originally posted by bug77 View Post
      It's still below what Moore's law would imply. Then again, PCIe isn't 100% about computing power which is what Moore's law is about.
      Moore's Law was actually about the number of transistors doubling every two years, so it might still fit. What's the transistor count on the controllers for this stuff?



      • #4
        Originally posted by starshipeleven View Post
        Yawn, it's getting into "ridiculous overkill" bandwidth territory. I like the fact that this means even smaller connectors/cables can run a GPU properly (currently with x4 lanes of PCIe 3.0 you can game fine), and the fact that there is already a new external PCIe cable standard supported.
        Overkill? How do you feed multiple Xeon Phis or GPUs for compute?



        • #5
          Originally posted by Nelson View Post
          Overkill? How do you feed multiple Xeon Phis or GPUs for compute?
          I thought compute workloads weren't so bandwidth-intensive, but I'm no expert. I've only seen mining rigs, password-cracking rigs and similar consumer-grade stuff, where there isn't much bandwidth usage.



          • #6
            I wonder what this changes for latency. Graphics cards are pretty much the only thing that takes advantage of a full-sized (x16) PCIe slot, but I wonder what kind of performance increase they would see with more bandwidth. I had the impression that latency was a bigger issue for graphics cards. I was hoping that there would be an optical PCIe to reduce latency, as I don't want discrete graphics to die, but I read somewhere that PCIe 5.0 will be optical.

            The ExpressCard specification was never updated to go beyond PCIe 2.0, which prevents it from being used for external graphics cards. However, I feel like the ExpressCard form factor would have to be redesigned to be relevant today.



            • #7
              Originally posted by Electric-Gecko View Post
              I was hoping that there would be an optical PCIe to reduce latency, as I don't want discrete graphics to die, but I read somewhere that PCIe 5.0 will be optical.
              You do realize that having to convert from electrical to optical and back adds latency, right? Do you have any solid data to back up the idea that it would actually reduce latency? Optical fibers aren't actually that much faster than normal electrical cable even without the conversion delays, and the cables in question are extremely short to begin with. In fact the need to avoid sharp turns in the cable will likely make the cable even longer than it normally would be, potentially eliminating the benefit of the slightly faster conduction velocity.
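              A quick sanity check on the propagation-delay point, assuming roughly 0.7c signal velocity in copper and about 0.67c in glass fiber (typical textbook figures; exact values vary by material):

              ```python
              # Back-of-the-envelope propagation delay over a short PCIe link.
              C = 299_792_458  # speed of light in vacuum, m/s

              def delay_ns(length_m: float, velocity_factor: float) -> float:
                  """One-way propagation delay in nanoseconds."""
                  return length_m / (velocity_factor * C) * 1e9

              for medium, vf in (("copper", 0.70), ("fiber", 0.67)):
                  print(f"{medium}: {delay_ns(0.3, vf):.2f} ns over 30 cm")
              ```

              Either way the delay over 30 cm is around 1.5 ns, which is noise next to the conversion and protocol latencies, so the medium itself buys you essentially nothing at these distances.
              
              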

              Originally posted by Electric-Gecko View Post
              The Expresscard specification was never updated to go beyond PCIe 2.0, which prevents it from being used for external graphics cards. However, I feel like the expresscard form-factor would have to be redesigned to be relevant today.
              There is no need for expresscard with USB Type-C carrying Thunderbolt.
              Last edited by TheBlackCat; 09 June 2017, 04:59 PM.



              • #8
                Why the sudden increase in release rate?



                • #9
                  Oh, I'm going to love PCIe 5.0 because I'm crazy.

                  I hope one day I can get a laptop that supports IOMMU groups via external PCIe connections like Thunderbolt or OCuLink, so I can get a dockable legacy peripheral solution. I would give it the kitchen sink: dual PCIe 1.0 x16 slots so I can run Windows 9x and XP in a VM at the same time with native GPU passthrough (there are early PCIe cards that have Win 9x drivers), and enough PCIe lanes to have a plethora of legacy cards like a floppy controller, a SCSI card, IDE, SATA, FireWire, LPT, COM, the best Sound Blaster with 9x/XP drivers, an Ageia PhysX card, multiple capture cards for different things, etc. All of that would fit with plenty of bandwidth in just three PCIe 5.0 lanes.

                  Like I said, I'm crazy :P I also hate USB because the bandwidth is never consistent.



                  • #10
                    Originally posted by starshipeleven View Post
                    Yawn, it's getting into the "ridicolous overkill" bandwith. I like the fact that this means even smaller connectors/cables can run a GPU properly (currently with x4 lanes of PCIe 3.0 you can game fine), and the fact that there is already a new external pcie cable standard supported.
                    Yeah, we're not exactly bandwidth starved (outside of special use cases), but it doesn't hurt to have it readily available in the future. What we are starting to be short of is PCIe lanes. Pretty soon all internal storage will be connected to PCIe, external stuff will go Thunderbolt. If not more lanes, we'd need some kind of multiplexer that can shove several slower peripherals onto a single PCIe lane or something like that.
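                    The lane-shortage point is easy to illustrate with a hypothetical build (the device list and lane counts below are illustrative assumptions, not any particular platform; mainstream 2017-era desktop CPUs expose roughly 16 CPU lanes plus some routed through the chipset):

                    ```python
                    # Hypothetical lane budget for a 2017-era desktop platform.
                    CPU_LANES = 16
                    CHIPSET_LANES = 8  # assumption; varies by platform

                    # Illustrative device list with typical link widths.
                    devices = {
                        "GPU (x16 slot)": 16,
                        "NVMe SSD #1": 4,
                        "NVMe SSD #2": 4,
                        "10GbE NIC": 4,
                        "Thunderbolt controller": 4,
                    }

                    wanted = sum(devices.values())
                    available = CPU_LANES + CHIPSET_LANES
                    print(f"Lanes wanted: {wanted}, available: {available}, "
                          f"shortfall: {max(0, wanted - available)}")
                    ```

                    A faster generation is one way out: an SSD that needs x4 of PCIe 3.0 gets the same bandwidth from a single PCIe 5.0 lane, so doubling the link speed effectively stretches the lane budget even without a multiplexer.
                    
                    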

