
GCC 6.2 vs. Clang 3.9 Compiler Performance On Clear Linux With Intel Kaby Lake


  • GCC 6.2 vs. Clang 3.9 Compiler Performance On Clear Linux With Intel Kaby Lake

    Phoronix: GCC 6.2 vs. Clang 3.9 Compiler Performance On Clear Linux With Intel Kaby Lake

    For the latest benchmarking of MSI's Cubi 2 with its Core i5 Kaby Lake CPU, here are some GCC and LLVM Clang compiler benchmarks on Intel's Clear Linux distribution.


  • #2
    Very competitive. GCC compiling faster than Clang is weird.



    • #3
      Originally posted by davidbepo View Post
      Very competitive. GCC compiling faster than Clang is weird.
      Why is that? In the beginning, Clang had almost zero features while GCC was loaded in comparison, yet fanboys kept ranting about compile speed. Clang has since been racking up features by the boatload, while GCC has slowly optimized and added features of its own. And guess what: they are slowly converging.

      This reminds me of the ARM vs. x86 development. Fanboys kept ranting about ARM's amazing low power envelope and how x86 sucked, blah blah blah. As ARM added CPU features, again by the boatload, its power envelope crept up, while Intel kept improving theirs. Now, high-end ARM chips meet their x86 brethren at roughly the same TDP and performance.

      Again, the tradeoff.



      • #4
        Originally posted by milkylainen View Post

        Why is that? In the beginning, Clang had almost zero features... Again, the tradeoff.

        I agree. I see GCC getting faster and faster while also producing better binaries.



        • #5
          I know what I'll use if I'm ever in the mood for some Dense LU Matrix Factorization or Jacobi Successive Over-Relaxation.



          • #6
            "With SciMark 2.0 ... but GCC 6.2 on the other hand was more than twice as fast for monte carlo."
            My guess would be that this is a library issue: the code calls std::random() or something like it, and LLVM's library chooses a slower (but presumably "more random") algorithm than GCC's.
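            To make that concrete, here is a minimal sketch of a SciMark-style Monte Carlo pi estimator (illustrative only, not the actual SciMark code; the rand()-based loop just stands in for whatever generator the benchmark or library supplies). The hot loop is almost nothing but RNG calls, so the generator's speed largely decides the score:

            /* Sketch of a SciMark-style Monte Carlo pi estimator (illustrative,
             * not the actual SciMark source). The loop body is dominated by the
             * random-number generator, so the library's RNG choice sets the pace. */
            #include <stdio.h>
            #include <stdlib.h>

            int main(void) {
                const long n = 100000000;  /* number of random (x, y) samples */
                long hits = 0;
                srand(42);
                for (long i = 0; i < n; i++) {
                    double x = rand() / (double)RAND_MAX;
                    double y = rand() / (double)RAND_MAX;
                    if (x * x + y * y <= 1.0)  /* inside the unit quarter circle */
                        hits++;
                }
                printf("pi ~= %f\n", 4.0 * hits / n);
                return 0;
            }

            Swap in a slower, higher-quality generator and the whole benchmark slows down with it, which is exactly the kind of gap a library difference would produce.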

            As for compile speed, I'm going to suggest that the naive way it's tested here is becoming ever less relevant. Certainly LLVM is most interested in optimizing the compile/debug/compile loop, which means
            (a) the set of "fast" optimizations is the -Og optimizations, not the full set.
            (b) there is a lot of effort put into incremental link, caching earlier results, detecting minimum change sets, etc.
            If some change (to help cache results, for example) speeds up this loop but slows down a build from empty, LLVM will adopt it. I don't know GCC's priorities, but I assume they are the same.

            I'd say there are two interesting data points.
            One is something like:
            - compile+link everything at -Og, make a small change in a header, and see how long the SECOND compile+link takes (see the sketch below).
            This is the "practical" compile time, the one you care about every day.

            The other is
            - compile+link a large project at maximum optimization, full LTO, the works.
            This is mildly interesting as information about what to expect when you are close to ready to ship your large project, but it's of much less day-to-day importance.

            (Both of these, of course, should be run on a realistic developer machine: minimum of a fast SSD, at least 16 GB of RAM, and 4 hyper-threaded cores, so that each build gets to exploit modern hardware as much as it can.)
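            As a concrete version of the first measurement, here is a minimal sketch, assuming a hypothetical Makefile-driven project with a widely included config.h (both names made up for illustration):

            /* Sketch of the "second compile" measurement described above.
             * Assumes a hypothetical Makefile-based project whose sources
             * include config.h; both names are illustrative. */
            #include <stdio.h>
            #include <stdlib.h>
            #include <time.h>

            /* Run a shell command and return its wall-clock time in seconds. */
            static double timed(const char *cmd) {
                struct timespec t0, t1;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                if (system(cmd) != 0) {
                    fprintf(stderr, "command failed: %s\n", cmd);
                    exit(1);
                }
                clock_gettime(CLOCK_MONOTONIC, &t1);
                return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            }

            int main(void) {
                timed("make CFLAGS=-Og");       /* cold build: not the number we care about */
                system("touch config.h");       /* simulate a small edit to a shared header */
                double second = timed("make CFLAGS=-Og");  /* the "practical" compile time */
                printf("incremental compile+link: %.3f s\n", second);
                return 0;
            }

            The second number is the one a developer feels every day, which is why it, rather than the cold build, is what the compilers are actually competing on.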


            More generally, I'm going to make the same comment I have made before. It is VERY hard to believe that these tests are conducted competently and in good faith when half the relevant information is ALWAYS omitted. We always get the full GCC command line (and specialized GCC command lines for different benchmarks), but the LLVM command lines are always something vague and meaningless, with apparently no specialized command line per benchmark.

            It's Michael's site and he can run it any way he likes, but details like that give it substantially less impact in the outside world, because it's hard to tell what's honest benchmarking and what's an attempt to rig the numbers.

