
Thread: Benchmarks Of GCC 4.5.0 Compiler Performance

  1. #31
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,811

    Default

    Quote Originally Posted by V!NCENT View Post
    Taking derivatives IS a basic math skill...
    I agree. And if we're actually talking about computer science, basic maths (at least here where I study computer science) probably means a very different thing than it does for the majority of people. I'm not saying I'm especially good at it, I'm just saying that basic maths actually means a ginormous amount of maths. It's only basic in the sense that it's not anywhere near the core of CS, just a basic requirement before getting there.

  2. #32
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    To me, basic math skills are at least formulas and functions, trigonometry, calculus, probability, matrices, statistics and statistical analysis, volumetric objects... and everything in between.

    I am sorry that I do not know the English terms, as I am not a native speaker.

    And oh yeah: learning a Texas Instruments TI-84...

  3. #33
    Join Date
    Nov 2008
    Posts
    786

    Default

    Quote Originally Posted by Xake View Post
    This is because much optimization in any code really comes down to knowing how to do as much as possible in as few steps as possible. And sometimes that includes knowing, mathematically, what advantages and disadvantages different sorting algorithms have and under what load.
    Popular opinion seems to be: if you do it at a keyboard, it's CS. If it contains numbers, it's math. But that's simply not true.

    Computability and complexity may look like math because there are numbers, you need a proof for every statement and the whole thing makes your head spin, but it's not. It's CS.

  4. #34
    Join Date
    Sep 2008
    Posts
    130

    Default

    Quote Originally Posted by rohcQaH View Post
    not math nerds. It's computer science.

    Compilers need some linguistics ("how do I parse this stuff?"), a lot of theory of computation (e.g. "prove that these two code snippets produce identical output" and other problems), lots of data structures ("how do I organize the information about the code so it's quick to access and modify, but still easy to handle?", basically the question of picking good IRs), a lot of techniques specific to compilers (that have to be invented, implemented and tested) and a whole bunch of other skills. Basic math is included in the latter, but it is nowhere near the dominant skill.
    Parsing and generating intermediate languages requires an understanding of the formal grammar of the COMPUTER language, which is based in mathematics, not linguistics. And proving that your formal language is consistent is yet again the domain of mathematics.

    The science of computing is math. Any CS program worth its salt will have you taking courses in Discrete and Computational Math.

  5. #35
    Join Date
    Oct 2008
    Posts
    3,248

    Default

    Formal language theory = math. Advanced math. Compilers and the related CS bits are just an example of a practical application of it.

  6. #36
    Join Date
    Oct 2009
    Posts
    845

    Default

    Quote Originally Posted by Smorg View Post
    Yeah, I was going to ask the same. GCC 4.4 also needed -floop-interchange -floop-strip-mine -floop-block to enable Graphite, which I'd suspect would affect benchmarks versus GCC 4.3.
    Has Phoronix ever published the actual compiler optimization flags for their benchmarks? A lot of packages also have different default optimization settings; for instance, p7zip defaults to -O0, which makes people wonder why it's so slow. Also, even when using the same -O level across all packages when comparing to previous versions, it's worth noting that the optimizations activated at the -Ox levels are not the same across all versions, so one version may require you to manually set an optimization that another version automatically enables at the same -O level.

    This of course makes it very hard to benchmark, and I'd say that using -O3 across all tests is the closest thing to being fair while still keeping it practical. This however means that you miss out on a lot of optimizations that have to be manually specified, such as LTO, Graphite etc.
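
    For the record, GCC can print which passes a given -O level enables via -Q --help=optimizers, so the difference between two releases can be checked directly. A rough sketch (the gcc-4.4 / gcc-4.5 binary names are just placeholders for whatever your distribution calls the two versions):

    gcc-4.4 -Q -O2 --help=optimizers > o2-gcc44.txt
    gcc-4.5 -Q -O2 --help=optimizers > o2-gcc45.txt
    diff -u o2-gcc44.txt o2-gcc45.txt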

  7. #37
    Join Date
    Dec 2009
    Posts
    110

    Default

    Quote Originally Posted by XorEaxEax View Post
    Has Phoronix ever published the actual compiler optimization flags for their benchmarks? A lot of packages also have different default optimization settings; for instance, p7zip defaults to -O0, which makes people wonder why it's so slow. Also, even when using the same -O level across all packages when comparing to previous versions, it's worth noting that the optimizations activated at the -Ox levels are not the same across all versions, so one version may require you to manually set an optimization that another version automatically enables at the same -O level.
    You have a good point here. But it is the fact that -O2 contains different optimizations from compiler version to compiler version that actually makes this interesting. So AFAICS:

    gcc -march=i686 -O2 -pipe -o out.bin in.c

    would actually give the best comparison between compiler versions, because what optimization -O2 delivers is more interesting than which optimizations it actually includes.

    Quote Originally Posted by XorEaxEax View Post
    This of course makes it very hard to benchmark, and I'd say that using -O3 across all tests is the closest thing to being fair while still keeping it practical. This however means that you miss out on a lot of optimizations that have to be manually specified, such as LTO, Graphite etc.
    Actually -O2 is more interesting, as that is closer to what distributions would actually use, and -O3 sometimes contains broken optimizations (try out -ftree-vectorize on x86-32 if you do not know what I am talking about).
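
    And if the vectorizer is the pass you suspect, it can be switched off on its own while keeping the rest of -O3, along the lines of (in.c / out.bin being placeholder file names as above):

    gcc -march=i686 -O3 -fno-tree-vectorize -pipe -o out.bin in.c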

  8. #38
    Join Date
    Mar 2010
    Location
    Poland
    Posts
    11

    Default

    Quote Originally Posted by stephane View Post
    Michael L., are you sure you're not affected by this change in your BPE tests?

    http://gcc.gnu.org/gcc-4.5/changes.html

    On x86 targets, code containing floating-point calculations may run significantly slower when compiled with GCC 4.5 in strict C99 conformance mode than they did with earlier GCC versions. This is due to stricter standard conformance of the compiler and can be avoided by using the option -fexcess-precision=fast; also see below. [...]
    The tests were run on an x86-64 system, and in that case GCC uses SSE (not the 387 coprocessor) for floating-point math by default (on 32-bit x86 you'd need to set -mfpmath=sse, which is the default on x86-64). With SSE, -fexcess-precision=standard (which, by the way, is the default only when -std=c99 is used) has no effect, i.e. it behaves the same as -fexcess-precision=fast.
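
    For anyone testing on 32-bit x86, a rough sketch of the two ways around it (file names are placeholders): either force SSE math, which needs at least -msse2 to cover doubles, or keep the x87 unit but drop the strict C99 excess-precision behaviour with the option added in GCC 4.5:

    gcc -std=c99 -O2 -msse2 -mfpmath=sse -o out.bin in.c
    gcc -std=c99 -O2 -fexcess-precision=fast -o out.bin in.c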

  10. #40
    Join Date
    Oct 2008
    Posts
    3,248

    Default

    For the compile speed tests, it would be nice to see a run at -O0, which is what most development is done with, since that's where compilation speed tends to matter most. A regular optimized build would be OK as well, but those tend to happen much less often.
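
    For a rough feel of the difference on any single translation unit, timing the two levels side by side is enough (somefile.c is just a placeholder):

    time gcc -O0 -c somefile.c
    time gcc -O2 -c somefile.c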
