GCC Automatic Parallel Compilation Viability Results Help Up To 3.3x

  • GCC Automatic Parallel Compilation Viability Results Help Up To 3.3x

    Phoronix: GCC Automatic Parallel Compilation Viability Results Help Up To 3.3x

    One of the most interesting projects out of Google Summer of Code 2020 has been the ongoing work for allowing individual code files to be compiled in parallel, building off work last year in addressing GCC parallelization bottlenecks. The final report for GSoC 2020 on this work has been issued...
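
    For readers who want to try it, the work from the GSoC branch is invoked per translation unit rather than per project. The sketch below assumes the flag is named -fparallel-jobs=N, as described in the project's reports; it is experimental and not part of released mainline GCC.

    // big_tu.cpp - a single translation unit with many independent functions,
    // the kind of file the GSoC work can partition across threads within a
    // single compiler invocation.
    //
    // Conventional build (this file keeps only one thread busy):
    //   g++ -O2 -c big_tu.cpp
    // Experimental build on the GSoC branch (flag name assumed from the reports):
    //   g++ -O2 -fparallel-jobs=8 -c big_tu.cpp
    #include <cstddef>
    #include <cstdint>

    // Imagine thousands of functions like these, e.g. machine-generated code.
    std::uint64_t checksum_fnv(const std::uint8_t *p, std::size_t n) {
        std::uint64_t h = 1469598103934665603ull;          // FNV-1a offset basis
        for (std::size_t i = 0; i < n; ++i) { h ^= p[i]; h *= 1099511628211ull; }
        return h;
    }

    std::uint64_t checksum_poly(const std::uint8_t *p, std::size_t n) {
        std::uint64_t h = 0;
        for (std::size_t i = 0; i < n; ++i) h = h * 31 + p[i];
        return h;
    }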


  • #2
    Speeding up the compilation of large source files seems somewhat counterproductive: it encourages large source files, which then lack modularity. Thirty years ago, when I managed a software development group, we forbade any source file larger than 64 KB. We did so simply because it forced an engineer to break his or her sub-project into rational modules and think about the structure of the work.

    An exception could be made for machine-generated source code. However, such files tend not to be recompiled very often.

    I still review my own code that way. If a source file exceeds 64 KB, I dig into it and break it into more logical units.
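
    (As an aside, a throwaway check is enough to flag offenders. The sketch below is illustrative only; the 64 KB threshold, the search root and the extension list are assumptions.)

    // find_big_sources.cpp - list source files above a size threshold.
    #include <cstdint>
    #include <filesystem>
    #include <iostream>

    int main() {
        namespace fs = std::filesystem;
        constexpr std::uintmax_t limit = 64 * 1024;   // the 64 KB rule
        for (const auto &entry : fs::recursive_directory_iterator(".")) {
            if (!entry.is_regular_file()) continue;
            const auto ext = entry.path().extension();
            if (ext != ".c" && ext != ".cc" && ext != ".cpp" && ext != ".h" && ext != ".hpp")
                continue;
            if (entry.file_size() > limit)
                std::cout << entry.path() << ": " << entry.file_size() << " bytes\n";
        }
    }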



    • #3
      Yes, 64K ought to be enough for anybody.
      Last edited by ed31337; 03 September 2020, 01:38 PM.



      • #4
        Originally posted by Clive McCarthy View Post
        Speeding up the compilation of large source files seems somewhat counterproductive: it encourages large source files, which then lack modularity. Thirty years ago, when I managed a software development group, we forbade any source file larger than 64 KB. We did so simply because it forced an engineer to break his or her sub-project into rational modules and think about the structure of the work.

        An exception could be made for machine-generated source code. However, such files tend not to be recompiled very often.

        I still review my own code that way. If a source file exceeds 64 KB, I dig into it and break it into more logical units.
        People have different use cases; what is fine for you may not work for others.

        At my previous job, we had a generated file of a few tens of thousands of lines that initialized a database, with all the data placed and pre-calculated in one structure at compile time (it could not be split).
        The alternative would have been to disregard runtime performance, split the code, and initialize it at runtime, which cost a lot of time at startup (over a minute). We could not afford that, since this was a critical component the entire system needed in order to start working.
        Our build machines, with many cores but low clock speeds, took roughly two hours to compile that file; we had to move it to the developer side only and compile it manually, and it was still very slow.
        This GCC change would have helped a lot (see the sketch below for the trade-off involved).

        It might not be your use case, but other people have to do these kinds of things for reasons you may not care about or understand, and they matter for the optimal functioning of the system.

        Simply put, in our case we cared more about starting quickly, so the system could detect that you had a car crash and call the emergency services, than about someone disliking that the code isn't split up or that the structure/database compiles slowly.
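
        A minimal sketch of that trade-off, with names and sizes invented for illustration (the real table had tens of thousands of entries):

        #include <array>
        #include <cstdint>

        struct Record { std::uint32_t key; std::uint32_t value; };

        // Variant 1: what the generated file did - everything pre-calculated and
        // emitted as one constant initializer. Zero start-up cost, but the single
        // translation unit becomes enormous and very slow to compile.
        constexpr std::array<Record, 4> kTable = {{
            {0, 0}, {1, 7}, {2, 14}, {3, 21},
        }};

        // Variant 2: the rejected alternative - compute the same data at startup.
        // It compiles quickly, but pays the initialization cost on every boot,
        // which was unacceptable for a component the whole system waits on.
        std::array<Record, 4> build_table() {
            std::array<Record, 4> t{};
            for (std::uint32_t i = 0; i < t.size(); ++i) t[i] = {i, i * 7};
            return t;
        }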
        Last edited by Alliancemd; 03 September 2020, 08:23 PM.



        • #5
          Originally posted by Clive McCarthy View Post
          Speeding up the compilation of large source files seems somewhat counterproductive: it encourages large source files, which then lack modularity. Thirty years ago, when I managed a software development group, we forbade any source file larger than 64 KB.
          I take it #include directives pulling in inlines/templates hadn't been invented back then (see the header sketch below).
          Originally posted by Clive McCarthy View Post
          An exception could be made for machine-generated source code. However, such files tend not to be recompiled very often.
          The subject of the article is being tested on GCC's machine-generated files.
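
          To illustrate the point about headers, here is a contrived sketch: even a tiny .cpp file can hand the compiler a very large translation unit once the headers it includes bring in templates and inline code.

          // heavy.hpp - illustrative only: templates and inline code in a header
          // are re-parsed and re-instantiated in every .cpp that includes it.
          #pragma once
          #include <map>
          #include <string>
          #include <vector>

          template <typename K, typename V>
          class Cache {
          public:
              void put(const K &k, const V &v) { items_[k] = v; }
              const V *get(const K &k) const {
                  auto it = items_.find(k);
                  return it == items_.end() ? nullptr : &it->second;
              }
          private:
              std::map<K, V> items_;
          };

          // user.cpp - a handful of lines on disk, yet after preprocessing and
          // template instantiation the compiler sees far more than 64 KB of code:
          //   #include "heavy.hpp"
          //   Cache<std::string, std::vector<int>> g_cache;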



          • #6
            3x on 64 threads is not the most efficient use of resources, and the make jobserver can't help here, because it can't prioritize the first thread of each compiler invocation over the additional threads that invocation spawns.

