
Thread: Compiler Benchmarks Of GCC, LLVM-GCC, DragonEgg, Clang


  1. #1
    Join Date
    Jan 2007
    Posts
    15,679

    Default Compiler Benchmarks Of GCC, LLVM-GCC, DragonEgg, Clang

    Phoronix: Compiler Benchmarks Of GCC, LLVM-GCC, DragonEgg, Clang

    LLVM 2.8 was released last month with the Clang compiler having feature-complete C++ support, enhancements to the DragonEgg GCC plug-in, a near feature-complete alternative to libstdc++, a drop-in system assembler, ARM code-generation improvements, and many other changes. Given the great interest in the Low-Level Virtual Machine, we have conducted a large LLVM-focused compiler comparison at Phoronix: GCC versions 4.2.1 through 4.6-20101030, GCC 4.5.1 using the DragonEgg 2.8 plug-in, LLVM-GCC with LLVM 2.8 and GCC 4.2, and lastly Clang on LLVM 2.8.

    http://www.phoronix.com/vr.php?view=15422

  2. #2
    Join Date
    Oct 2007
    Location
    Sweden
    Posts
    174

    Default

    What happened there in the GraphicsMagick tests? Do the newer GCCs utilize OpenMP that much better?

  3. #3
    Join Date
    Sep 2006
    Location
    PL
    Posts
    916

    Default

    While LLVM/Clang is not yet decisively faster than GCC, it is continuing to make great progress and does boast a different feature-set and focus from the GNU Compiler Collection.
    This template footer, found in nearly every Phoronix article, sounds like "please disregard this article, it's going to be obsolete soon".

  4. #4
    Join Date
    Jul 2009
    Posts
    261

    Default

    Quite nice.
    It seems LLVM likes C-Ray but not chess.
    What I'd like to see now is a Linux kernel (compiled with these different compilers) performance benchmark.

    A thing that bothers me as well is that "regressions" are being found on Intel CPUs where there is a huge performance improvement on AMD (or vice versa). Maybe one should take this into account when looking for regressions. As far as I remember, there have only been benchmarks on Intel hardware...
    And as the graphs show, there really are some interesting splits in performance with different hardware.

  5. #5
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,811

    Default

    Quote Originally Posted by jakubo View Post
    Quite nice.
    It seems LLVM likes C-Ray but not chess.
    What I'd like to see now is a Linux kernel (compiled with these different compilers) performance benchmark.
    Last I heard, can't be done with llvm.

  6. #6
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,811

    Default

    Sorry, that was ambiguous. As in, compiling the Linux kernel can't be done with LLVM.

  7. #7
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,052

    Default

    I may have missed it, but what optimizations were used for these tests (especially for GCC)?

    Code:
    -march=native -mtune=native -fomit-frame-pointer -O3
    More, less?

  8. #8
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,811

    Default

    -mtune=native is redundant if you're using -march=native (it's implied).
    -fomit-frame-pointer breaks debuggability on x86.
    -O3 can expose compiler bugs and may even slow things down at run-time in some cases, e.g. when the larger generated code hurts instruction-cache locality.
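    For what it's worth, you can check the redundancy yourself. A quick sketch, assuming a GCC installation on x86: dump the target options GCC resolves from -march=native and you'll see it fills in -mtune as well.

    ```shell
    # Show the -march/-mtune values GCC derives from -march=native;
    # both get set, which is why adding -mtune=native is redundant.
    gcc -march=native -Q --help=target | grep -E -- '-m(arch|tune)='
    ```
    
    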

  9. #9
    Join Date
    Oct 2009
    Posts
    845

    Default

    Quote Originally Posted by nanonyme View Post
    -mtune=native is redundant if you're using -march=native (it's implied).
    -fomit-frame-pointer breaks debuggability on x86.
    -O3 can expose compiler bugs and may even slow things down at run-time in some cases, e.g. when the larger generated code hurts instruction-cache locality.
    -O3 has been stable to compile with for ages; I can't recall encountering any program in years that compiles with -O2 but has problems with -O3. I also haven't encountered a case where -O3 is slower than -O2 in ages, so these tests should obviously be done with -O3, especially since that's where most of the new optimizations will end up.
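    Anyone can check this on their own workload. A minimal sketch (toy floating-point loop, placeholder file names bench.c / bench-O2 / bench-O3; real workloads will obviously vary far more than this):

    ```shell
    # Build the same toy program at -O2 and -O3 and time both.
    cat > bench.c <<'EOF'
    #include <stdio.h>

    int main(void) {
        double s = 0.0;
        /* toy loop: partial harmonic sum, dominated by FP division */
        for (long i = 1; i < 50000000; i++)
            s += 1.0 / (double)i;
        printf("%f\n", s);
        return 0;
    }
    EOF
    gcc -O2 -o bench-O2 bench.c
    gcc -O3 -o bench-O3 bench.c
    time ./bench-O2
    time ./bench-O3
    ```

    Whether -O3 wins here says nothing definitive; it just shows how easy it is to spot-check the claim on a given machine.
    
    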

  10. #10
    Join Date
    Jan 2008
    Location
    Have a good day.
    Posts
    678

    Default

    Quote Originally Posted by XorEaxEax View Post
    -O3 has been stable to compile with for ages; I can't recall encountering any program in years that compiles with -O2 but has problems with -O3. I also haven't encountered a case where -O3 is slower than -O2 in ages, so these tests should obviously be done with -O3, especially since that's where most of the new optimizations will end up.
    These benchmarks disagree:

    http://www.linux-mag.com/id/7574/2/
