Summit Supercomputer Launches With 200 PFLOPS Of Compute Power

    Phoronix: Summit Supercomputer Launches With 200 PFLOPS Of Compute Power

    Oak Ridge National Laboratory has officially launched its "Summit" supercomputer today, which also comes in as the world's fastest...


  • #2
    I have a question. Why does NVIDIA always make extremely powerful hardware, but often let its consumers down?



  • #3
    But does it have RGB?
    Edit: on a more serious note, a fascinating machine. What tools do they have in mind to make one's application scale well? How would it handle failure, and what would the risk be of it catching fire at some point in its lifetime? (If it is looking for some work, please help out lczero.org )
    Last edited by LuukD; 08 June 2018, 03:22 PM.



  • #4
    And can it play Chrys....
    Never mind.



  • #5
    Originally posted by tildearrow View Post
    I have a question. Why does NVIDIA always make extremely powerful hardware, but often let its consumers down?
    Because that's where the money is. While gamers were the initial driving force behind the programmable shader architectures that led to GPGPUs, we're not needed anymore now that big data/AI/deep learning is leading the way. They prefer selling high-margin parts to huge clients over supporting arcane Linux-based consumers "properly". I write that in quotes because nVidia seems to think that a closed driver that "just works" is fine and open-sourcing would be too much effort for relatively little gain.

    Not to mention the majority of nVidia's consumers don't care about desktop Linux.



  • #6
    Originally posted by tildearrow View Post
    I have a question. Why does NVIDIA always make extremely powerful hardware, but often let its consumers down?
    The consumer market has orders of magnitude lower profit margins than servers (compute, HPC, or AI).

    It's the same for Intel, for that matter. Xeons with dozens of cores appeared as soon as they were actually possible at all (a long time ago), while consumer parts were locked at 4 cores (with or without HT) for a decade.



  • #7
    Originally posted by numacross View Post
    I write that in quotes because nVidia seems to think that a closed driver that "just works" is fine and open-sourcing would be too much effort for relatively little gain.
    Technically speaking, that's right. The effort of rewriting the driver as open source would not result in them selling more (Linux consumers are a minority, and a lot of them buy NVIDIA anyway), and they would actually risk losing control of their strict product segmentation and planned obsolescence, which is BAD for sales.



  • #8
    Originally posted by LuukD View Post
    But does it have RGB?
    A fascinating machine. What tools do they have in mind to make one's application scale well? How would it handle failure, and what would the risk be of it catching fire at some point in its lifetime? (If it is looking for some work, please help out lczero.org )
    I would venture that they have a team of people who work to get the best out of every payload that goes through. That means figuring out the exact optimisations required for CUDA and the POWER architecture so that the supercomputer runs at near-100% efficiency.



  • #9
    Originally posted by LuukD View Post
    But does it have RGB?
    Edit: on a more serious note, a fascinating machine. What tools do they have in mind to make one's application scale well? How would it handle failure, and what would the risk be of it catching fire at some point in its lifetime? (If it is looking for some work, please help out lczero.org )
    Most supercomputers of this type (clusters of machines) run some implementation of the Message Passing Interface (MPI) standard. The standard has many implementations, but the biggest these days seem to be MPICH (https://www.mpich.org/) and Open MPI (https://www.open-mpi.org/). For large supercomputers the library is often tuned specifically for the cluster, so what ends up being used may deviate (in implementation) from the base project, but the API should remain the same.

    If you are interested in programming supercomputers, you can set up an MPI library on a set of very cheap computers - like a few Raspberry Pis. It is not as fast as a normal computer, but it is good for learning the programming paradigm. A minimal example is sketched below.



  • #10
    Stock photo, or are those first few racks really completely empty?

