The Dracut Initramfs Generator Is Slow - Could Be Much Faster As Shown By Distri's Minitrd

  • #1

    Phoronix: The Dracut Initramfs Generator Is Slow - Could Be Much Faster As Shown By Distri's Minitrd

    Dracut, which is used for generating the initramfs image on Linux distributions such as Fedora / RHEL, Debian, openSUSE, and many others, could be much faster...

  • #2
    Arch Linux is considering dracut as well, so I tried it and I must say it is not really convincing. It might cover more edge cases than mkinitcpio, but it feels really slow. Since I always install both the latest released kernel and the LTS kernel, I would not like to switch if it does not perform better.

    • #3
      20x improvement, wow.
      It would be so nice if this were in the next Ubuntu LTS.

      • #4
        initrd generation has been the slowest part of the whole package management infrastructure since forever. Every time I've seen this step during an Ubuntu update I've asked myself why it would take so long.

        • #5
          Originally posted by Danny3
          20x improvement, wow.
          It would be so nice if this were in the next Ubuntu LTS.
          Ubuntu still uses initramfs-tools, so they're quite a bit behind everything.
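
          For context, the initramfs-tools side of that looks roughly like this (a sketch; exact timings obviously vary by machine and module set):

          # Regenerate the initramfs for the currently running kernel
          sudo update-initramfs -u

          # Or regenerate for every installed kernel - the step that drags on
          # during big Ubuntu upgrades
          sudo update-initramfs -u -k all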

          • #6
            Originally posted by R41N3R
            Arch Linux is considering dracut as well, so I tried it and I must say it is not really convincing. It might cover more edge cases than mkinitcpio, but it feels really slow. Since I always install both the latest released kernel and the LTS kernel, I would not like to switch if it does not perform better.
            Is this eventually going to mean that "mkinitcpio -p linux-lts" and /etc/mkinitcpio.conf will be something different in the future? I hope not.

            Personally, I don't really care whether the initramfs generates faster or not. 99.999% of the time I'm building one, it's during a kernel upgrade. It takes around 5 minutes to build all the dkms modules I use, and no initramfs generator is going to speed up the "compiling software" part of the process.
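
            A rough illustration of why the compile step dominates in that workflow; the kernel version and preset name below are just examples:

            # Rebuild all registered out-of-tree (dkms) modules for the new kernel - minutes
            time dkms autoinstall -k 5.4.72-1-lts

            # Regenerate the initramfs afterwards - seconds by comparison
            time mkinitcpio -p linux-lts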

            • #7
              It doesn't surprise me that a lot of performance was left on the table, as the original compressor was only using a single thread (he got it down from 31s to 9s just by switching to a multithreaded compressor). His blog post is insightful and hopefully achieves his goal of inspiring more optimizations. There are also a couple of other areas where single-threaded compression is still used today, e.g. the rpm-pkg and deb-pkg build targets of the kernel, which slow these builds down significantly on modern hardware. This needs to go!
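
              For the deb-pkg case there is at least a crude workaround, assuming a kernel tree new enough to honor the KDEB_COMPRESS environment variable (a sketch, not the proper fix of making the defaults multithreaded):

              # Build the Debian kernel packages, but swap the slow single-threaded
              # xz step for a cheaper compressor on the resulting .deb files
              KDEB_COMPRESS=gzip make -j"$(nproc)" bindeb-pkg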

              • #8
                I wonder if using zstd for compressing the image would result in a significant gain in compression/decompression time?
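
                A minimal sketch of that switch with dracut's compress option, assuming the kernel was built with zstd initramfs support (CONFIG_RD_ZSTD); the thread and level flags are only illustrative:

                # /etc/dracut.conf.d/90-zstd.conf
                compress="zstd -T0 -3"    # multithreaded zstd at a fast level

                # then regenerate the image
                dracut -f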

                • #9
                  Originally posted by skeevy420

                  Is this eventually going to mean that "mkinitcpio -p linux-lts" and /etc/mkinitcpio.conf will be something different in the future? I hope not.

                  Personally, I don't really care whether the initramfs generates faster or not. 99.999% of the time I'm building one, it's during a kernel upgrade. It takes around 5 minutes to build all the dkms modules I use, and no initramfs generator is going to speed up the "compiling software" part of the process.
                  You can rebuild dracut images with just "dracut -f".

                  Fedora is even discussing moving to prebuilt dracut images, and Silverblue already uses them.
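
                  A sketch of that; the kernel version string below is just an example:

                  # Rebuild the image for the currently running kernel
                  dracut -f

                  # Or rebuild for a specific installed kernel
                  dracut -f --kver 5.9.11-100.fc32.x86_64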

                  • #10
                    Originally posted by ms178
                    It doesn't surprise me that a lot of performance was left on the table, as the original compressor was only using a single thread (he got it down from 31s to 9s just by switching to a multithreaded compressor). His blog post is insightful and hopefully achieves his goal of inspiring more optimizations. There are also a couple of other areas where single-threaded compression is still used today, e.g. the rpm-pkg and deb-pkg build targets of the kernel, which slow these builds down significantly on modern hardware. This needs to go!
                    Up until very recently, Arch Linux used single-threaded XZ for package compression (and kernel compression), and it was the weak, slow link in the chain. It still is for AUR users, because it remains the default in makepkg.conf. I use multithreaded and adaptive zstd when building my packages. Another trick is to use ZFS on Linux and set your package directory to use gzip-9; then you don't need to run any compressor in makepkg.conf at all.
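
                    A sketch of what multithreaded, adaptive zstd looks like in /etc/makepkg.conf; the exact flags here are illustrative rather than anyone's actual settings:

                    # /etc/makepkg.conf
                    PKGEXT='.pkg.tar.zst'
                    COMPRESSZST=(zstd -c -T0 --adapt -)    # all cores, adaptive level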
