Fedora Cleared To Build Python Package With "-O3" Optimizations

  • Fedora Cleared To Build Python Package With "-O3" Optimizations

    Phoronix: Fedora Cleared To Build Python Package With "-O3" Optimizations

    The Fedora Engineering and Steering Committee (FESCo) has signed off on the plans for Fedora 41 to build its Python using the "-O3" compiler optimization level rather than the "-O2" default for Fedora packages in the name of better performance...

  • #2
    I've always thought that going with whatever upstream sets, with -O2 as the fallback O level, would be the best way of doing things. If a program's developers set it to -O3, -Os, or -Ofast, they probably have a reason. If they didn't set it to anything, err on the side of caution.

    To me, building everything with -O2 "because it's safe" seems about as silly as rebuilding the system with -Os "because it's an Android ROM on an ARM processor and the smaller binaries will make better use of the limited cache", only to have the -O2 and -O3 builds still run faster. I don't want to dredge up decade-old nonsense from XDA; I'm just saying this is a good example of why O levels aren't something to set globally on the strength of ancient PC voodoo anecdotes.
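
    As a concrete illustration of where the two levels diverge (a minimal sketch, not from this thread), consider a hot loop like the one below. At -O3, GCC typically unrolls and auto-vectorizes it more aggressively than at -O2, though the exact behaviour depends on the compiler version and target:

      /* sum.c - illustrative hot loop where -O2 and -O3 tend to diverge.
       * Compare the generated assembly:
       *   gcc -O2 -S sum.c -o sum-O2.s
       *   gcc -O3 -S sum.c -o sum-O3.s
       */
      #include <stddef.h>

      long sum(const int *a, size_t n)
      {
          long total = 0;
          for (size_t i = 0; i < n; i++)
              total += a[i];   /* candidate for unrolling/vectorization */
          return total;
      }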

    Whatever the case, it's always good news when systems become faster.

  • #3
    So this is what Fedora has become? Updated software to version XY, changed some build flag, or *gasp* changed some build flag to what everyone else and upstream already use? It's really time to look for another distribution. I can't even remember the last Fedora release that included something I was excited about.

  • #4
    I thought they never cared about performance or the end-user experience, only the debugging experience of "engineers" and the religious, legal, and bureaucratic fanaticism of "GNU", "LL.M", "Committee".

  • #5
    Originally posted by npwx View Post
    So this is what Fedora has become? Updated software to version XY, changed some build flag, or *gasp* changed some build flag to what everyone else and upstream already use? It's really time to look for another distribution. I can't even remember the last Fedora release that included something I was excited about.
    What more are you expecting, exactly? Fedora does releases every 6 months; they can't all be big and exciting. If you want interesting and experimental, go with Fedora Atomic or something.

    I don't even agree with your assessment: replacing zlib with zlib-ng is a pretty big change in Fedora 40, bootable containers are significant, and Fedora 41 will have DNF 5. The "modern C" initiative is boring but useful.

  • #6
    Originally posted by npwx View Post
    So this is what Fedora has become? Updated software to version XY, changed some build flag, or *gasp* changed some build flag to what everyone else and upstream already use? It's really time to look for another distribution. I can't even remember the last Fedora release that included something I was excited about.
    I think it's been almost a decade since Linux distributions stopped getting big new features; there's simply less left to innovate on. The two largest changes I can recall in the last 10 years are Debian and Ubuntu switching to systemd, and Ubuntu switching (back) to GNOME. This is something all software faces once it has existed for long enough.

    There's also an increasing push to do as much work upstream as possible, particularly in Fedora and Arch. That means less and less distro-specific development, which is why many distros' release notes are mostly upstream updates. I'd say this is how it's supposed to work, even – the word Linux *distribution* is there for a reason.

    I remember 2009-2010, when Ubuntu was getting a big new feature in every release, including non-LTS ones.

  • #7
    It is, wait for it, 4% faster overall. Pickle is 16% faster, but the unpickle_list test is 16% slower.
    Maybe they should try "No-Python -- even faster Fedora".

  • #8
    Originally posted by skeevy420 View Post
    I've always thought that going with whatever upstream sets, with -O2 as the fallback O level, would be the best way of doing things. If a program's developers set it to -O3, -Os, or -Ofast, they probably have a reason. If they didn't set it to anything, err on the side of caution.

    To me, building everything with -O2 "because it's safe" seems about as silly as rebuilding the system with -Os "because it's an Android ROM on an ARM processor and the smaller binaries will make better use of the limited cache", only to have the -O2 and -O3 builds still run faster. I don't want to dredge up decade-old nonsense from XDA; I'm just saying this is a good example of why O levels aren't something to set globally on the strength of ancient PC voodoo anecdotes.

    Whatever the case, it's always good news when systems become faster.
    There is no strict definition of what -O3 or -O2 should do, but in general -O3 is where optimizations land that risk performance regressions or blow up compile time and memory use, so it shouldn't be applied blindly. Besides, because everybody uses -O2, -O3 is simply less tested. The plus side is that -O3 exposes bugs, like code that relies on the compiler not reordering operations around atomics (which is why ARM is more affected than x86).
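
    A minimal sketch of that atomics pitfall (illustrative names, not code from any real project): the broken variant often happens to work on x86 at lower optimization levels, and falls apart once the compiler starts reordering or caching the plain accesses; ARM's weaker memory model widens the window further.

      /* Publishing data to another thread via a flag. */
      #include <stdatomic.h>

      int data;                 /* payload (plain, non-atomic) */
      int ready_plain;          /* broken flag: plain int */
      _Atomic int ready_atomic; /* correct flag: C11 atomic */

      /* Broken: nothing stops the compiler from reordering the data
       * store past the flag store, or a reader from caching
       * ready_plain in a register forever. Higher optimization levels
       * make both more likely. */
      void publish_broken(int value)
      {
          data = value;
          ready_plain = 1;
      }

      /* Correct: the release store keeps the data store ordered before
       * the flag; the acquire load on the reader side pairs with it. */
      void publish(int value)
      {
          data = value;
          atomic_store_explicit(&ready_atomic, 1, memory_order_release);
      }

      int consume(void)
      {
          while (!atomic_load_explicit(&ready_atomic, memory_order_acquire))
              ;                 /* spin until the payload is published */
          return data;
      }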

    -Ofast, through -ffast-math, may break reproducibility, which is almost always a bad idea. For typical use, it's better to recommend proven BLAS and HPC libraries with optimised code paths instead.
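
    A minimal sketch of the reproducibility point, assuming GCC: -ffast-math lets the compiler reassociate floating-point arithmetic, so the same source can print different results at different optimization levels.

      /* fpsum.c - same source, possibly different answers.
       * Try:  gcc -O2 fpsum.c && ./a.out      (prints 0)
       *       gcc -Ofast fpsum.c && ./a.out   (may print 1)
       */
      #include <stdio.h>

      double residual(double big, double tiny)
      {
          /* With the arguments used below, strict IEEE evaluation
           * rounds big + tiny back to big, so the result is 0. With
           * -ffast-math the compiler may reassociate this expression
           * to (big - big) + tiny, which is exactly tiny. */
          return (big + tiny) - big;
      }

      int main(void)
      {
          printf("%g\n", residual(1e16, 1.0));
          return 0;
      }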

  • #9
    Originally posted by mb_q View Post
    There is no strict definition of what -O3 or -O2 should do, but in general -O3 is where optimizations land that risk performance regressions or blow up compile time and memory use, so it shouldn't be applied blindly. Besides, because everybody uses -O2, -O3 is simply less tested. The plus side is that -O3 exposes bugs, like code that relies on the compiler not reordering operations around atomics (which is why ARM is more affected than x86).
    There is some semblance of a definition of what they should be doing, and LLVM has a similar page. The problem is that all of it can change between compilers, compiler versions, and even distributions, depending on what they're trying to accomplish, like security hardening. That's actually made me wonder if they're going to come out with hardened O levels, like -HO2 (hydroperoxyl). The juvenile side of me snickers at the thought of O levels and HO levels. If libass pisses people off, they're sure to love -HO3.

    Originally posted by mb_q View Post
    -Ofast, through -ffast-math, may break reproducibility, which is almost always a bad idea. For typical use, it's better to recommend proven BLAS and HPC libraries with optimised code paths instead.
    I don't disagree; however, that's why I said that if a project's developers picked something other than -O2, they probably had a reason for doing so. Intentionally picking compiler levels and options that upstream doesn't use may also break reproducibility.

  • #10
    There's this thing called "undefined behaviour", which compilers are free to exploit to produce better optimizations, and these optimizations come and go as architectures and compilers evolve. An entire category of bugs hides in "optimization-unstable" code, whose behaviour changes depending on the optimizations applied to it. -O3 enables more optimizations -> more probability of hitting such cases.
    This is a very good, albeit a bit dated, read on the subject. It's really fun to read and think about. And do test code with UBSAN.

    This is most often the reason why -O2 is chosen.
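
    A classic sketch of such optimization-unstable code (illustrative, not taken from the linked article): the guard below relies on signed overflow, which is undefined behaviour, so the compiler may assume it cannot happen and delete the check; -fsanitize=undefined catches it at runtime, as the post suggests.

      /* overflow.c - a guard that depends on undefined behaviour. */
      #include <limits.h>

      int can_add_100(int x)
      {
          /* Unstable: signed overflow is UB, so the compiler may treat
           * "x + 100 < x" as always false and remove this check,
           * typically at higher optimization levels. */
          if (x + 100 < x)
              return 0;          /* intended overflow detection */
          return 1;
      }

      int can_add_100_fixed(int x)
      {
          /* Stable: test before doing the arithmetic, no UB involved. */
          return x <= INT_MAX - 100;
      }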
