Microsoft Releases An Open-Source Deep Learning Toolkit

  • Microsoft Releases An Open-Source Deep Learning Toolkit

    Phoronix: Microsoft Releases An Open-Source Deep Learning Toolkit

    2015 was filled with many interesting Linux/open-source announcements by Microsoft and it looks like 2016 will not be any different. Today they announced the open-sourcing of a new project...

  • #2
    CNTK was actually already available on CodePlex, they've just moved it to GitHub.

    • #3
      Originally posted by abral View Post
      CNTK was actually already available on CodePlex, they've just moved it to GitHub.
      From what I had read, the CNTK code on CodePlex was under a more restrictive license?
      Michael Larabel
      https://www.michaellarabel.com/

      • #4
        Those benchmarks they're showing, if valid - and I do not grant that they are - are impressive. That would be a good reason for Microsoft to release the code: entice companies and people experimenting with the technology to adopt CNTK, and then steer them toward Microsoft's own hosted GPU services instead of competing ones.

        • #5
          Relevant

          • #6
            I would like to congratulate MS on "fostering" their own technology, and on implementing their own library atop CUDA instead of their own portable C++ AMP.
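            For anyone who hasn't seen it, here is a minimal sketch of what a C++ AMP kernel looks like (a simple vector scale). This is an illustration only, not anything from CNTK, and it builds with MSVC since C++ AMP is a Microsoft extension:

            // Minimal C++ AMP sketch (illustration only, not CNTK code): scale a vector
            // on whatever accelerator the runtime picks. Requires MSVC.
            #include <amp.h>
            #include <vector>

            void scale_on_gpu(std::vector<float>& data, float factor) {
                using namespace concurrency;
                array_view<float, 1> view(static_cast<int>(data.size()), data);
                parallel_for_each(view.extent, [=](index<1> idx) restrict(amp) {
                    view[idx] *= factor;      // runs on the accelerator
                });
                view.synchronize();           // copy results back to the host vector
            }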

            • #7
              Originally posted by Meteorhead View Post
              I would like to congratulate MS on "fostering" their own technology, and on implementing their own library atop CUDA instead of their own portable C++ AMP.
              It's open source. I hate Microsoft as much as anyone does, but it's not like this is yet another proprietary Microsoft tool.

              • #8
                Michael_S And they are indeed misleading. Theano, for one, does support multi-GPU, or at any rate its multi-GPU support will be a lot cleaner soon. I'm also certain those graphs suffer from the usual phenomenon: zero effort goes into optimizing the code for your competitors' frameworks, while all of it goes into tuning your own.

                I would also venture that 8-GPU machines are few and far between, so one can hardly expect mere mortals to tune for them. Eight GTX Titan X's will set you back an absolute fortune.

                • #9
                  Idonotexist I can readily believe Microsoft optimized their own code for the benchmark and was at best lazy, and at worst intentionally inefficient, with the competitors'.

                  As for 8-GPU machines, I imagine they exist in the cloud GPU rental services.

                  • #10
                    No, you misunderstood. That's not one machine with 8 GPUs, that's two machines each with 4 GPUs. CNTK is doing distributed multi-GPU deep learning. That's amazing!

                    So not only is there CUDA in there, there's MPI (which surprises me!).

                    Look, issue #15 shows someone compiling with Open MPI:
                    mpic++ -c Source/CNTK/ModelEditLanguage.cpp -o .build/Source/CNTK/ModelEditLanguage.o -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -DCPUONLY -DUSE_ACML -DNDEBUG -msse3 -std=c++0x -std=c++11 ...


                    Wow, MPI! You only really see MPI in supercomputing clusters.
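                    To make the MPI angle concrete, here is a hedged sketch (mine, not CNTK source) of the data-parallel pattern MPI enables: every rank computes gradients on its slice of the mini-batch, then MPI_Allreduce averages them across machines:

                    // Sketch only, not CNTK code: average per-rank gradients with MPI_Allreduce.
                    #include <mpi.h>
                    #include <cstdio>
                    #include <vector>

                    int main(int argc, char** argv) {
                        MPI_Init(&argc, &argv);
                        int rank = 0, size = 1;
                        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                        MPI_Comm_size(MPI_COMM_WORLD, &size);

                        // Pretend each rank (one per GPU/box) computed gradients for its data slice.
                        std::vector<float> local_grad(4, static_cast<float>(rank + 1));
                        std::vector<float> avg_grad(4, 0.0f);

                        // Sum gradients across all ranks, then divide by the rank count to average.
                        MPI_Allreduce(local_grad.data(), avg_grad.data(),
                                      static_cast<int>(local_grad.size()),
                                      MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
                        for (float& g : avg_grad) g /= static_cast<float>(size);

                        if (rank == 0)
                            std::printf("averaged gradient[0] = %f over %d ranks\n", avg_grad[0], size);

                        MPI_Finalize();
                        return 0;
                    }

                    You can build and run it with the same toolchain as above, e.g. mpic++ -std=c++11 allreduce_sketch.cpp -o allreduce_sketch && mpirun -np 2 ./allreduce_sketch.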

                    In my opinion, there are only three good guys here: Microsoft (CNTK), Google (TensorFlow), and Theano. Those are the only three that are truly trying to support all platforms.

                    The other projects seem extremely hostile toward Windows. It's almost as if they sabotage any attempt to run them there. For example, some of the Google projects that Caffe depends on will not accept contributed code to make those dependencies work on Windows. Caffe itself will not officially support Windows, and adding that support is very difficult because of the differences in how GCC and MSVC handle shared objects (DLLs).
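                    The usual workaround for that GCC/MSVC difference is an export macro in every public header; a rough sketch (the MYLIB_* names are made up for illustration):

                    // MSVC exports nothing from a DLL unless told to; GCC exports everything
                    // from a shared object by default. Hypothetical MYLIB_* names used here.
                    #if defined(_WIN32)
                      #ifdef MYLIB_BUILDING_DLL
                        #define MYLIB_API __declspec(dllexport)   // building the DLL
                      #else
                        #define MYLIB_API __declspec(dllimport)   // consuming the DLL
                      #endif
                    #else
                      #define MYLIB_API __attribute__((visibility("default")))  // GCC/Clang
                    #endif

                    // Every public symbol has to carry the macro, which is a big part of the porting pain.
                    MYLIB_API int mylib_add(int a, int b);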

                    I sincerely hope CNTK, TensorFlow, and Theano put Torch and Caffe to bed.

                    Torch is a runner-up for good behavior. Once you get luarocks running on Windows, you only have to waste half your time hacking some of those packages into working! And that's after studying their Linux install.sh script. You also need an extremely good working knowledge of CMake, plus a good working knowledge of C89/C99, C++, CUDA, Lua, POSIX, and the basic Win32 API. For me, that's about 1-2 days of wasted time. For a graduate student, it could be a month.
