You could actually decompress all of the packages in parallel regardless of their dependencies, and start installing (in serial) at the lowest-level leaf of the dependency tree that has already been decompressed.
The problem is that most filesystems fall down under heavy parallel load. You get a lot of context switching because the kernel tries to give each decompression process its fair share of CPU time, the filesystem tries to give each process its fair share of IO time, and so on. Look at some of Michael's benchmarks for 64MB random reads/writes with 8 or 16 threads, even on processors that have that many physical threads: throughput drops to a crawl.
Why? Because, assuming the individual archives have relatively low internal fragmentation, you introduce a lot more seeks into the operation when you have many processes doing reads and writes in parallel. There aren't really any good solutions that maintain >90% throughput under this type of load, without being inordinately unfair to one process and just letting that one process run the show for an extended period of time. But our modern stacks are configured for just the opposite scenario because of the demand for "responsive" desktops.
Of course, if you have an SSD, seeks are basically a no-op. Telling an SSD to seek just elicits a reply like "LOL OK" because there are no moving parts involved that need to move before the data can be retrieved. So you can get great parallel performance on an SSD.
Alternatively, if you have an insane amount of RAM (greater than 16GB), you could store all the archives in a RAM disk and decompress them from there -- in parallel. That would be "holy hell" fast, and then during package installation you could just copy the uncompressed data from the ramdisk to the HDD/SSD for long-term storage.
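As a rough sketch of what that could look like with plain shell tools (the mount point, the 8G size, and the archive/output paths below are all assumptions for illustration, not anything a package manager actually does today):
Code:
# Mount a RAM-backed tmpfs and copy the downloaded archives into it.
sudo mkdir -p /mnt/pkg-ram
sudo mount -t tmpfs -o size=8G tmpfs /mnt/pkg-ram
cp /var/cache/pkgs/*.tar.xz /mnt/pkg-ram/

# Decompress everything in parallel; all reads come from RAM, so the
# disk only sees the (mostly sequential) writes of the unpacked data.
mkdir -p /var/tmp/unpacked
for f in /mnt/pkg-ram/*.tar.xz; do
    tar -xJf "$f" -C /var/tmp/unpacked &
done
wait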
Shoot; the packages would probably already be in RAM anyway, because you just downloaded them. If only there were a way to "force" those pages to stay in memory between the network-downloading phase and the unpacking phase, so that you don't have to download the packages, write them to disk, read the compressed data from disk, then write the uncompressed data back to disk. You'd just download them straight to RAM, then write the uncompressed data to disk. That eliminates writing the compressed data to disk and then reading the compressed data from disk. But you need as much RAM as the size of the packages you're downloading.
OK, this is cool. This is cool. We're getting somewhere.
Download each package to a buffer in RAM. As they are downloading, directly wire up Xz (which is a streaming decompressor) to the buffer, so that you are decompressing in parallel with the download. Since a lossless decompressor reading from RAM is going to be many orders of magnitude faster than any internet connection, you're practically guaranteed that the Xz decompressor will be sitting there twiddling its thumbs for minutes on end, occasionally waking up to read a block of data and decode it.
As it decodes, it writes its results to disk. You could even mmap the network buffer itself to make it a zero-copy architecture, so that the data travels: NIC -> buffer in RAM -> Xz reads data from the buffer (zero copy) -> Xz writes the decompressed data to disk.
Then, once Xz writes the decoded data to disk, you just "mv" (re-link) the files to the correct locations.
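A minimal shell sketch of that flow (URL and STAGE are placeholders; this version isn't zero-copy, but it does overlap the download, the decompression, and the write to disk):
Code:
# curl fills the pipe at network speed while xz decodes blocks as they
# arrive and tar writes the unpacked files straight to disk.
URL="https://example.org/pool/somepackage.tar.xz"
STAGE="/var/tmp/stage"
mkdir -p "$STAGE"
curl -sL "$URL" | xz -dc | tar -x -C "$STAGE"

# Afterwards, "installing" is mostly re-linking the unpacked files into place:
# mv "$STAGE"/usr/bin/somebinary /usr/bin/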
Installation of packages would go from taking many minutes to taking exactly as long as it takes to download the packages. And the effect would be better the slower your internet connection is.
I think I'm on to something. The only caveat is that you'd have to have enough dependency information up front to know the correct order to download the packages (because presumably you can't install a package before all of its dependencies are installed, since the install scripts may depend on some other package already being there). But if you have multiple independent, isolatable dependency trees (for example, if you install Eclipse and all of its dependencies and GIMP and all of its dependencies in the same go), you could parallelize those separate trees and use all of your CPU cores while you download multiple files from the network and decode them on the fly, as sketched below.
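Purely as a sketch of that independent-trees idea, reusing the streaming pipeline from above (fetch_and_unpack, the URL variables, and the staging paths are all made up for illustration):
Code:
# One streaming download+decompress pipeline per independent dependency
# tree, so e.g. the Eclipse tree and the GIMP tree each keep a core busy.
fetch_and_unpack() {
    curl -sL "$1" | xz -dc | tar -x -C "$2"
}
mkdir -p /var/tmp/stage/eclipse /var/tmp/stage/gimp
fetch_and_unpack "$ECLIPSE_TREE_URL" /var/tmp/stage/eclipse &
fetch_and_unpack "$GIMP_TREE_URL"    /var/tmp/stage/gimp &
wait  # install order within each tree still has to respect dependencies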
Anyway, even decompressing while downloading would probably also provide some gains by itself.
To put it simply, you are underestimating the information we already have available. You can assume that cheaply and efficiently obtaining the dependency graph for the desired packages is an easy, standard feature of Linux package managers.
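On a Debian/Ubuntu system, for example, the full recursive Depends graph is one command away (gimp is just an example package here):
Code:
# Print the complete recursive Depends graph for a package, ignoring
# Recommends/Suggests and the negative relationship types.
apt-cache depends --recurse \
    --no-recommends --no-suggests \
    --no-conflicts --no-breaks --no-replaces --no-enhances \
    gimp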
As for dpkg not supporting parallel installation, I haven't used Debian in about five years. I'd have expected it to improve in that time, as even back then it was rather annoying to be unable to install something else in another terminal while one apt-get was doing its thing.
(yes, that problem could be solved simply by queuing. But true parallel install should be possible.)
How To Optimize Apt Archives
Add the following line to /etc/fstab:
Code:
tmpfs /var/cache/apt/archives/ tmpfs defaults,noatime 0 0

Add the following line to /etc/rc.local so the partial directory is recreated at each boot:
Code:
mkdir /var/cache/apt/archives/partial

Then make /etc/rc.local executable:
Code:
sudo chmod u+x /etc/rc.local

All deb packages will then be downloaded into the tmpfs. Keep in mind not to do too large an upgrade at one time, so you stay within the RAM you've set aside; 512MB is usually enough for most installs and upgrades.
Close enough, right? Credit for the above commands goes to here
I like the idea of having one big image to put on a USB stick. Could this be an ISO bigger than the 4GB limit -- is there a way to make an 80GB ISO image to put on a USB 3.0 stick? Tons of top-quality free software, all on one stick.
Can't I just use 1 cd?
Debian packages are an ar archive (same as static libraries), containing 2 compressed tarballs: the first has metadata, the second has the package content. I think they want to move from tar.gz to tar.xz for these two tarballs.
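You can poke at this directly with ar and dpkg-deb (foo.deb is a placeholder for any downloaded package):
Code:
# List the members of a .deb: a tiny debian-binary version file plus the
# two tarballs (control.tar.* = metadata, data.tar.* = package contents).
ar t foo.deb

# Extract just the package contents into a directory for inspection.
dpkg-deb -x foo.deb ./foo-contents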
Debian policy requires that anything that can be compressed must be. That includes manpages, fonts, and much of the documentation. Besides that, you have shell scripts and stripped binaries (plus debug symbols for some). There isn't much that's highly compressible.
It's quite possible to install from 1 CD; however, the "Debian operating system" includes every package in main. Hence a 73-CD media set; almost no one wants all of it, but it is available (for those who want to set up a workstation offline, or such).
Last time I tried installing everything (press + over uninstalled in aptitude), there were ~400 conflicts, it would take ~90 GB, and it took nearly 2 minutes to resolve the order. The archives are a lot larger now, so it might be near 200 GB.
The install media they're talking about is for the next Debian stable; that means you might have a DVD or two worth of updates by the next release. And yes, they do offer a disk containing all updates.
A minimal install of Debian is around 300 MB. It will run on i486, with minimum RAM in the 32-64 MB range.
Does anyone know of a distro that does real package management and allows parallel operations?
Systems that don't handle dependencies are irrelevant; I mean something where you can't get a race condition by, say, starting a GNOME install and then (while it's in progress) uninstalling GTK.