LLVM Has New "parallel-lib" Sub-Project

  • LLVM Has New "parallel-lib" Sub-Project

    Phoronix: LLVM Has New "parallel-lib" Sub-Project

    Parallel-Lib is a new project out of the LLVM group...


  • #2
    LLVM is evil. No way out!
    I am not joking.
    When the distance between the code you wrote and the code that actually executes is so large that even the basic logic has been reshuffled, parallelized and run-time optimized, debugging a program is a real nightmare, provided it's still possible at all.
    Can you trust a dynamic optimizer that will also slip some parallelism in under the hood?
    Think about a 3000-line C/C++ program running 24 hours a day as a server.
    Trivial bugs (around printf(), memcpy and the like) have trivial checks that are usually done by static code analyzers.
    Imagine it crashes and dumps core.
    What can you do with dynamically optimized machine code?
    Delete the core, restart the server. No reliable way to debug it!
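
    Something like this hypothetical sketch shows what I mean (file name made up; assuming a build with clang++ -g -O2):

    // buggy_server.cpp - hypothetical example. In gdb, a core from the
    // -O2 build often shows "total" as <optimized out>, with the loop
    // inlined or vectorized beyond easy recognition.
    #include <cstdio>

    static long sum(const int *data, int n) {
        long total = 0;
        for (int i = 0; i < n; ++i)
            total += data[i];
        return total;
    }

    int main() {
        int values[16] = {0};
        // Off-by-one: reads past the array (undefined behavior).
        std::printf("%ld\n", sum(values, 17));
        return 0;
    }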



    • #3
      Uqbar So what about interpreted languages? I guess those are evil too? Java optimizes code on the fly, yet it seems to somehow work.

      While in principle I agree with you, I think that we're well past that point.
      Hell, even in my basic programming class I stumbled onto a compiler bug; while learning bash I found a bug there too.
      While programming in assembly for another class I found a discrepancy between what the docs said and the actual hardware behavior (that was a real nightmare...).
      Trust in the underlying technology has to start somewhere, unless you're willing to wire up your own CPU and write your own STL, compiler, and everything on top (and even then you're trusting yourself to get it right).



      • #4
        Originally posted by Serafean View Post
        Uqbar So what about interpreted languages? I guess those are evil too? Java optimizes code on the fly, yet it seems to somehow work.
        Correct! Whatever messes with your code logic can make your life much harder, especially if you cannot disable the mess.
        Originally posted by Serafean View Post
        While in principle I agree with you, I think that we're well past that point.
        Hell, even in my basic programming class I stumbled onto a compiler bug; while learning bash I found a bug there too.
        As I said, bugs happen. But if you don't have effective options for debugging, then you're done!
        Originally posted by Serafean View Post
        While programming in assembly for another class I found a discrepancy between what the docs said and the actual hardware behavior (that was a real nightmare...).
        Trust in the underlying technology has to start somewhere, unless you're willing to wire up your own CPU and write your own STL, compiler, and everything on top (and even then you're trusting yourself to get it right).
        Trust in the "underlying layer" is a de facto need, but you also need strong debugging means. You can run gdb with your bash (in theory) and see your script trigger the bug. You could do the same with PHP (well, maybe not with these new optimizers) and Perl. But you cannot really do it with Java! And maybe you'll find some surprises with C and LLVM!



        • #5
          Originally posted by Uqbar View Post
          But you cannot really do it with Java!
          Yet many companies are able to generate massive amounts of cash with Java, which, in this day and age, is the only thing that matters.



          • #6
            Uqbar - An assertion that is not reflected in practice and is not verifiable through any scientific work or papers is, for me, very quickly just FUD. We all know: Fear leads to anger. Anger leads to hate. Hate leads to suffering. ...



            • #7
              Originally posted by Uqbar View Post
              LLVM is evil. No way out!
              I am not joking.
              When the distance between the code you wrote and the code that actually executes is so large that even the basic logic has been reshuffled, parallelized and run-time optimized, debugging a program is a real nightmare, provided it's still possible at all.
              Can you trust a dynamic optimizer that will also slip some parallelism in under the hood?
              Think about a 3000-line C/C++ program running 24 hours a day as a server.
              Trivial bugs (around printf(), memcpy and the like) have trivial checks that are usually done by static code analyzers.
              Imagine it crashes and dumps core.
              What can you do with dynamically optimized machine code?
              Delete the core, restart the server. No reliable way to debug it!
              This isn't exactly how things work.

              It sounds like you're against any kind of optimization. The world without an optimizing compiler is a very slow place, especially given that most people don't understand, or care to understand, the low-level details. Think of a 3000-line C/C++ program running 24 hours a day that isn't optimized by the compiler, written by some guy who was forced to rush it out. That said, you still have to take into account things like cache misses, which the compiler won't fix for you. The toolchain also has to give you enough correlation between the machine code and the source to fix that sort of thing in the higher-level language.
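
              A minimal sketch of the kind of cache problem that stays yours to fix (hypothetical code; by default the optimizer keeps the traversal order you wrote):

              // cache_demo.cpp - hypothetical sketch: both loops compute the
              // same sum, but the column-major loop strides across cache lines
              // and is typically several times slower; -O2 alone won't reorder
              // the traversal for you.
              #include <cstdio>
              #include <vector>

              constexpr int N = 4096;

              int main() {
                  std::vector<int> m(N * N, 1);
                  long sum = 0;

                  // Row-major: sequential access, cache-friendly.
                  for (int i = 0; i < N; ++i)
                      for (int j = 0; j < N; ++j)
                          sum += m[i * N + j];

                  // Column-major: strided access, roughly one miss per element.
                  for (int j = 0; j < N; ++j)
                      for (int i = 0; i < N; ++i)
                          sum += m[i * N + j];

                  std::printf("%ld\n", sum);
                  return 0;
              }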

              This isn't "dynamic optimization", and even if it was, dynamic optimization has various benefits that correlate directly to the code you wrote, such as taking advantages of extensions of the architecture the program is currently running on that couldn't be known at compilation time. This is why JIT can in some cases be a bit faster than completely native code.

              Debugging isn't as complicated as you make it out to be. There are other ways of debugging programs outside of a debugger, and there are various tools that can be provided alongside runtimes such as this one. As a matter of fact, compiler-rt ships several of these. I'd imagine there will be corresponding versions of them for parallel-libs.
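
              For instance, a minimal sketch with AddressSanitizer, whose runtime ships in compiler-rt (file name made up):

              // asan_demo.cpp - build with: clang++ -g -fsanitize=address asan_demo.cpp
              // The compiler-rt runtime reports the heap-buffer-overflow below
              // with a symbolized stack trace, optimized build or not.
              #include <cstring>

              int main() {
                  char *buf = new char[8];
                  std::memcpy(buf, "0123456789", 10);  // writes 2 bytes past the end
                  delete[] buf;
                  return 0;
              }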



              • #8
                Originally posted by Uqbar View Post
                LLVM is evil. No way out!
                I am not joking.
                When the distance between the code you wrote and the code that actually executes is so large that even the basic logic has been reshuffled, parallelized and run-time optimized, debugging a program is a real nightmare, provided it's still possible at all.
                Can you trust a dynamic optimizer that will also slip some parallelism in under the hood?
                Think about a 3000-line C/C++ program running 24 hours a day as a server.
                Trivial bugs (around printf(), memcpy and the like) have trivial checks that are usually done by static code analyzers.
                Imagine it crashes and dumps core.
                What can you do with dynamically optimized machine code?
                Delete the core, restart the server. No reliable way to debug it!
                You do realize that there are different optimization levels that you can use and that LLVM IR is beautiful to work with, right? Take the nonsensical fear mongering back to whatever rock you've been living under.
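
                If you want to see what each level actually does, a quick sketch (hypothetical file name):

                // ir_demo.cpp - emit the IR and compare for yourself:
                //   clang++ -S -emit-llvm -O0 ir_demo.cpp -o demo_O0.ll
                //   clang++ -S -emit-llvm -O2 ir_demo.cpp -o demo_O2.ll
                // At -O0 the loop is spelled out; at -O2 the whole function
                // typically folds to a single "ret i32 4950".
                int sum_to_100() {
                    int total = 0;
                    for (int i = 0; i < 100; ++i)
                        total += i;
                    return total;  // constant-folded at compile time under -O2
                }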



                • #9
                  Originally posted by Uqbar View Post
                  Imagine it crashes and dumps core.
                  What can you do with dynamically optimized machine code?
                  Delete the core, restart the server. No reliable way to debug it!
                  Honestly, in that scenario I would say insufficient stress testing is the bigger issue, not an imagined lack of debugging methodology; yes, imagined, since the problem exists only in your mind. Regardless, anything that runs 24/7 should be thoroughly stress tested before being brought to a production machine.
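
                  A minimal sketch of what that looks like (hypothetical harness; build with -pthread):

                  // stress_demo.cpp - hypothetical pre-production stress test:
                  // hammer the code path from many threads and fail loudly long
                  // before it ever reaches a server.
                  #include <atomic>
                  #include <cassert>
                  #include <thread>
                  #include <vector>

                  std::atomic<long> counter{0};

                  void worker(int iterations) {
                      for (int i = 0; i < iterations; ++i)
                          counter.fetch_add(1, std::memory_order_relaxed);
                  }

                  int main() {
                      const int kThreads = 8, kIters = 1000000;
                      std::vector<std::thread> pool;
                      for (int t = 0; t < kThreads; ++t)
                          pool.emplace_back(worker, kIters);
                      for (auto &th : pool)
                          th.join();
                      // Any lost update here would mean a synchronization bug.
                      assert(counter.load() == static_cast<long>(kThreads) * kIters);
                      return 0;
                  }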

