Raptor Computing Systems Planning To Launch New ATX POWER9 Board With OpenCAPI


  • Raptor Computing Systems Planning To Launch New ATX POWER9 Board With OpenCAPI

    Phoronix: Raptor Computing Systems Planning To Launch New ATX POWER9 Board With OpenCAPI

    In addition to the news out of the OpenPOWER Summit in San Diego that the POWER ISA is going open-source and the OpenPOWER Foundation becoming part of the Linux Foundation, Raptor Computing Systems shared they plan to launch a new standard ATX motherboard next year that will feature OpenCAPI connectivity...


  • #2
    The only thing I am worried about is the OMI-DDIMM format they want to use before embracing HBM to support OpenCAPI performance levels.

    I am not aware of *anyone* using this DIMM format.

    Only D-DDR5 as far as I know has been ratified for OpenCAPI use by JEDEC.

    They say it will be much cheaper to make than DDR4 because OpenCAPI doesn't use a fixed DDR speed to communicate with RAM. It's all dynamic.

    That should make overclocking and RAM tuning interesting.



    According to the presentation, OpenCAPI doesn't give a rip about what kind of RAM it has to work with; you can even mix DDR types and it will regulate it. But I seriously doubt any maker will give you a board that has a mix of DDR types.

    At last year's summit, a session was held on exactly what differential DIMMs (DDIMMs) can bring relative to OpenCAPI requirements.



    It depends on how you look at this. If all of these OpenCAPI interfaces have been set free and there are no royalties involved, technically it should stick until HBM is a bigger reality.



    • #3
      Originally posted by edwaleni View Post

      (bunch of stuff)
      Rarity is just one aspect. Look at the picture of the OMI DDIMM stick; it has an onboard processor, which means this thing isn't going to be affordable right from the get-go.

      I don't mind non-standard interfaces or form factors as long as a) the principal provider sells such replacement hardware at reasonable prices or b) there's a large market out there selling compatible hardware on those non-standard interfaces or form factors. OMI DDIMM looks like it falls into neither camp.



      • #4
        Originally posted by Sonadow View Post
        Rarity is just one aspect. Look at the picture of the OMI DDIMM stick; it has an onboard processor, which means this thing isn't going to be affordable right from the get-go.
        It wasn't that long ago that x86 memory controllers were off-die, in a separately packaged chip on the motherboard called the North Bridge. Clearly the chip in the photo is a memory controller. I can't imagine it would cost more than a few dollars, i.e., in line with what the x86 North Bridge memory controllers used to cost. What makes you think it wouldn't be "affordable"?
        Last edited by torsionbar28; 20 August 2019, 11:29 PM.



        • #5
          Originally posted by edwaleni View Post
          The only thing I am worried about is the OMI-DDIMM format they want to use before embracing HBM to support OpenCAPI performance levels.

          I am not aware of *anyone* using this DIMM format.

          Only D-DDR5 as far as I know has been ratified for OpenCAPI use by JEDEC.
          OpenCAPI has _nothing_ to do with DIMM format.
          I don't know why you're mixing DIMM format into the OpenCAPI discussion.
          You even state that OpenCAPI does not give a flying * about DIMM format.
          OpenCAPI is a bus, much like PCIe. In fact, OpenCAPI and NVLink rely on the same base SerDes on the boards that PCIe does.
          The difference is that the protocol and topology are better suited for high-end data transfers.

          The OMI stick is an OpenCAPI bus device, much like M.2 carries a PCIe bus and NVMe drives are usually PCIe devices.
          There is no JEDEC ratification here. Micron is making a DRAM product to use on OpenCAPI buses.
          It's a DDR4 memory controller on an OpenCAPI bus. My guess is that Micron did not have a DDR5/HBM2 memory controller ready in their IP portfolio.
          Supposedly Micron was to enter the HBM2 market this year, but I don't know if they are ready for it yet.

          I think it's a pipe product. OpenCAPI is just too thin to beat custom DRAM pathways like HBM2 or GDDR5X/6 on GPU boards.
          And you're way better off relying on the CPU DRAM pathway for access than going over the I/O links.
          There isn't much difference between this and a PCIe DRAM board.
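
          A rough back-of-envelope comparison of the two approaches, taking that "memory controller on a serial link" view at face value. The 8 lanes at 25 Gbit/s and the single DDR4-3200 channel below are illustrative assumptions, not figures from the presentation:

          Code:
          def serdes_link_gbytes(lanes, gbit_per_lane):
              # Raw bandwidth in GB/s of a lane-based serial link (PCIe-style SerDes).
              return lanes * gbit_per_lane / 8

          def ddr_channel_gbytes(mega_transfers, bus_bits=64):
              # Peak bandwidth in GB/s of one parallel DDR channel.
              return mega_transfers * (bus_bits // 8) / 1000

          omi_link  = serdes_link_gbytes(lanes=8, gbit_per_lane=25)   # ~25 GB/s
          ddr4_3200 = ddr_channel_gbytes(mega_transfers=3200)         # ~25.6 GB/s

          print(f"8-lane 25G serial link: {omi_link:.1f} GB/s")
          print(f"one DDR4-3200 channel : {ddr4_3200:.1f} GB/s")

          On those assumptions, one OMI-style link and one DDR4 channel land in the same ~25 GB/s ballpark, which fits the point above: the stick relocates the memory controller, it does not make the memory itself any faster.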



          • #6
            I hope that this time the PCIe situation is going to be better for workstations with NVMe RAID.

            With the Blackbird there was only one PCIe x16 slot. With the Talos II Lite, there was one x16 and one x8, but the x8 wasn't open-ended (it would not accept x16 cards) and the x16 slot did not support bifurcation for passive PCIe-to-M.2 adapters.



            So the only option if you wanted a PCIe x16 graphics card and NVMe RAID was a quite expensive carrier card with a PLX switch.
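
            For anyone wondering why bifurcation matters here: each NVMe drive wants its own x4 link, so a passive four-drive adapter needs the x16 slot split into 4x4; without that, a PLX switch has to do the splitting, which is where the cost comes from. A rough lane-count sketch, with the PCIe Gen3 rate and the quad-M.2 layout as illustrative assumptions:

            Code:
            GEN3_GT_PER_LANE = 8.0      # GT/s per PCIe Gen3 lane
            ENCODING = 128 / 130        # 128b/130b line-encoding overhead

            def pcie_gbytes(lanes):
                # Approximate one-direction PCIe Gen3 bandwidth in GB/s.
                return lanes * GEN3_GT_PER_LANE * ENCODING / 8

            per_drive_lanes = 4                  # one NVMe drive wants an x4 link
            card_lanes = 4 * per_drive_lanes     # a passive quad-M.2 card wants 4 x x4

            print(f"x4 NVMe link : {pcie_gbytes(per_drive_lanes):.1f} GB/s")
            print(f"quad-M.2 card: {card_lanes} lanes, i.e. an x16 slot run as 4x4")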



            • #7
              With AMD recently joining (Intel's) CXL for cache-coherent device access, one wonders whether OpenCAPI is dead in the water.



              • #8
                This is one of the multiple reasons that POWER is still a zombie...

                Not following the commodity hardware trend and leading it? It's going to die!



                • #9
                  Originally posted by milkylainen View Post

                  OpenCAPI has _nothing_ to do with DIMM format.
                  I don't know why you're mixing DIMM format into the OpenCAPI discussion.
                  You even state that OpenCAPI does not give a flying * about DIMM format.
                  OpenCAPI is a bus, much like PCIe. In fact, OpenCAPI and NVLink rely on the same base SerDes on the boards that PCIe does.
                  The difference is that the protocol and topology are better suited for high-end data transfers.
                  Because the presentation states that current DDR data rates are fixed and inadequate to support OpenCAPI data rates, which are dynamic.

                  So what they said was if you want to fully exploit OpenCAPI, the memory has to support the dynamic data rates of the bus.

                  IBM presented the OMI-DDIMM as a dynamic memory capable of supporting that bus.

                  JEDEC approved a DIMM that also supports differential data rates.

                  That is the relationship between DIMM types and OpenCAPI.





                  • #10
                    Originally posted by edwaleni View Post

                    Because the presentation states that current DDR data rates are fixed and inadequate to support OpenCAPI data rates, which are dynamic.
                    So what they said was if you want to fully exploit OpenCAPI, the memory has to support the dynamic data rates of the bus.
                    IBM presented the OMI-DDIMM as a dynamic memory capable of supporting that bus.
                    JEDEC approved a DIMM that also supports differential data rates.
                    That is the relationship between DIMM types and OpenCAPI.


                    Your comment does not make much sense to me. There is no "capable of supporting a bus".

                    OpenCAPI is a fixed-rate bus. Much like PCIe, there are "lanes".
                    More lanes (SerDes) provide more bandwidth. There is nothing dynamic or magical about it.
                    In the same sense, DDR rates are rates. Any DDR module can be clocked up to its max setting within the JEDEC-approved speed grades.

                    I think you're confusing the latency and bandwidth provided by the bus with some "ratification".
                    Any kind of storage medium can be served over OpenCAPI just as it can over PCIe. They are unrelated.

                    To fill the OpenCAPI bus you need fast memories or a lot of channels.
                    You can saturate an OpenCAPI bus with DDR3 if you want to. Latency is still going to be latency.
                    There is no way an OpenCAPI bus changes how DDR modules do CAS, RAS, CD etc.

                    There is nothing to it. It is a memory controller (any memory controller) sitting behind a memory-mapped bus.
                    The memory controller sets up the DDR4 modules and talks OpenCAPI over whatever link width is provided.

                    The only thing I can think of is the lost IBM hegemony; they are looking to recover it.
                    In the grander scheme they are probably hoping to insert themselves into the fray again.
                    OMI comes as a part of the "OpenPOWER" effort.
                    Well, good luck with that. 28 GHz+ SerDes are stupidly expensive and stupidly hard to validate,
                    compared to the low-speed, low-power, wide-pin DDR bus.
                    If you think about it, there have already been efforts to replace the DRAM bus with serial protocols.
                    Remember RAMBUS? Expensive, power-hungry and patent trolling, all rolled into one.
                    In the end it was not much faster than the good old DDR bus.

                    And OpenCAPI is still too slow compared to a custom DRAM bus.
                    OpenCAPI will max out at roughly 100 GB/s+, while HBM2 on an NVIDIA card is roughly 1000 GB/s+.
                    People enjoy the cheap JEDEC DIMM standards. If OMI were to get a foothold, DRAM sticks would carry another sticker price altogether.
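
                    A quick sanity check of those two figures, with the link width and pin rates below as illustrative assumptions rather than numbers from any datasheet:

                    Code:
                    def serial_bus_gbytes(lanes, gbit_per_lane):
                        # Raw bandwidth of a lane-based serial bus, in GB/s.
                        return lanes * gbit_per_lane / 8

                    def hbm_gbytes(stacks, bus_bits=1024, gbit_per_pin=2.0):
                        # Aggregate bandwidth of wide, parallel HBM2 stacks, in GB/s.
                        return stacks * bus_bits * gbit_per_pin / 8

                    opencapi = serial_bus_gbytes(lanes=32, gbit_per_lane=25)   # ~100 GB/s
                    hbm2 = hbm_gbytes(stacks=4)                                # ~1024 GB/s

                    print(f"32-lane OpenCAPI link: {opencapi:.0f} GB/s")
                    print(f"four HBM2 stacks     : {hbm2:.0f} GB/s")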

                    I think IBM is failing to see why they lost the high-end hardware scene to Intel, AMD and NVIDIA to begin with.
                    It's because they were ridiculously expensive to deal with.
                    Anything IBM used to mean "add confusion and misery, and add a lot of zeros to the sticker price".
                    This is not going to change anything. People will still count computing power per dollar, as will any datacenter and supercomputer operation.
                    Last edited by milkylainen; 21 August 2019, 03:28 PM.
