However, it doesn't apply to design-phase and compile-time optimisations. For example, Ubuntu is compiled to support many old architectures, back to (I think) the original Pentium, and therefore can't use a lot of optimisations that could be applied on a single-architecture system. Even where optimisations like SSE and SSE2 can be detected and used at runtime, the legacy support code is still dead weight taking up memory. Some distros build for i386; I think Ubuntu doesn't go that far back, only to i586.
This is why a well-tuned Gentoo system SHOULD run faster, load quicker and be lighter on memory usage than a system that is compiled to make sure it will work on every 32-bit x86 architecture you throw at it.
I guess the x86-64 build SHOULD dodge most of this legacy junk, if done properly?
OK, the Phoronix executive editor said (on page 2 of this thread, I think):
In 'stock mode' Ubuntu is compiled for the lowest common denominator (i.e. i586, the original Pentium). Ubuntu is distributed as precompiled binaries, so compiling it with different architecture-specific optimisations would be very much non-stock mode. The test programs may have been compiled specifically on each system, but they rely on libraries, kernel modules and the core of the kernel itself in order to work. Each OS was left in its stock mode, which is why JFS or any other FS wasn't used.
I'm not saying this makes much difference to the outcome of the tests, just that it's something inherently in the advantage of a restricted OS-hardware partnership.
I have personal experience with HFS+ and it _sucks_.
Really. It's utter shit. You don't want it. Out of OS X, Linux, BSD, Windows, whatever... HFS+ is the crustiest and most backward file system out there. It is a FAT32-generation file system with a BSD VFS layered above it to create the illusion of semi-POSIX compatibility and journalling... both come with a big hit in performance.
I can't really stress this enough. It's a slow, old-fashioned FS that is much more likely to corrupt your data than Linux's ext3.
What is more likely happening is that Mac OS is simply lying about file system operations. For example: with Firefox, one of the big performance misfeatures on Linux was that the application called fsync() a hundred times a second. (The Firefox folks are stupidly trying to make their SQLite database ACID-compliant, or some bizarre thing like that.)
With most operating systems, like Windows or OS X, the OS just ignores that sort of thing and lies to the application that it has synced to the hard drive, in order to improve the perception of performance. Linux takes it much more seriously and actually does the syncing, causing the hard drive to thrash around and freezing any application waiting on I/O.
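You can see the cost of a real flush yourself. Here's a rough sketch (the path and sizes are just placeholders) comparing a buffered write against one where GNU dd's conv=fsync flag forces an fsync() before exiting, so the timing includes the actual write to disk:

```shell
# Buffered write: the OS may report success long before the data
# is physically on the disk, so this run can look deceptively fast.
time dd if=/dev/zero of=/tmp/syncdemo.bin bs=1M count=64 2>/dev/null

# conv=fsync makes dd call fsync() on the output file before exiting,
# so the elapsed time includes the real flush to the hard drive.
time dd if=/dev/zero of=/tmp/syncdemo.bin bs=1M count=64 conv=fsync 2>/dev/null

rm -f /tmp/syncdemo.bin
```

On a system that honours fsync, the second run is noticeably slower; on one that lies about syncing, the two numbers come out suspiciously similar.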
So you have to take any file system benchmark with a huge grain of salt.
For example, unless that Bonnie++ benchmark was done correctly, it's completely and 100% misleading.
The deal is that in order for it to accurately determine the performance of the file system, you have to drive the files out of cache and make sure the data is actually written to the drive. Otherwise you're just judging memory I/O, and Bonnie++ is not designed to do that accurately. And since OS X lies about things like 'sync' (thus making it much more likely your data will be corrupted during any power outage or system crash), you can't trust anything the benchmark says otherwise.
This system has 1GB of RAM. When running benchmarks I expect most of the OS is going to be pushed out to cache, and you'll have about 700 megs or more of file system cache to deal with.
So with Bonnie++ you'll have to make sure the benchmarks are using files 1.5 times the amount of RAM. Then you'll get a much more accurate picture of disk and file system performance.
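As a sketch of that rule (the mount point and user here are just placeholders), on Linux you can compute the 1.5x figure from /proc/meminfo and feed it to Bonnie++'s -s (test file size in MB) and -r (RAM size in MB) options. This just prints the invocation it would use:

```shell
# Read physical RAM in MB from /proc/meminfo (Linux-specific).
ram_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

# Test-file size of 1.5x RAM, so reads can't be served from cache.
size_mb=$(( ram_mb * 3 / 2 ))

# -d is the directory to test in, -s the file size, -r the box's RAM,
# -u the user to run as (bonnie++ refuses to run as root otherwise).
echo bonnie++ -d /mnt/test -s "${size_mb}" -r "${ram_mb}" -u nobody
```

On the 1GB box in this test that works out to roughly a 1.5GB test file, which is what forces real disk I/O instead of cache hits.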
In other words... without actually seeing the settings used in the benchmarks, the benchmarks are WORTHLESS and MISLEADING. No doubt about it. Disk and file system benchmarks are insanely hard to get right, and while it's possible to draw conclusions from Bonnie++, it's going to require much more than a simple graph. We need settings and other data output.
Take a look at the Gzip performance benchmark. You're dealing with a 2GB file, so cache performance isn't going to enter into it much. With that, Linux trounces OS X. The CPU is more than fast enough, even on these low-end boxes, to make gzipping a 2GB file an I/O-bound operation... the CPU is capable of compressing much faster than the disk is capable of reading and writing files.
Here is an example of what I am talking about:
I am running Debian Unstable on a Core 2 Duo laptop. I have 2GB of RAM, a 150GB hard drive, and a T7300 2.0GHz dual-core CPU. The GNOME environment, with a couple of terminals and a web browser, actively uses about 256-300 MB of RAM. This leaves about 1.7GB of RAM for application and file system cache.
I have a 701MB AVI file that I am compressing using Gzip. The AVI is already heavily compressed using media-optimized compression (aka MPEG-4), so there is no way in hell Gzip is going to be able to improve on that. It'll just thrash around, which is pretty much a worst-case scenario for this sort of application, using as much CPU as possible.
First run is:
$ time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null
Second run is:
$ time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null
A minute of difference: a 100% improvement in performance in just two runs. The first run had to read the file from disk. The second time the file was already in the FS cache, so the disk was no longer the bottleneck, and the run was effectively a CPU benchmark since it was CPU-bound.
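If you want the cold-cache number back without rebooting, Linux lets you drop the page cache by hand. A rough sketch (the file here is a throwaway stand-in for the AVI above; /proc/sys/vm/drop_caches is Linux-specific and needs root):

```shell
# Create an incompressible throwaway file, standing in for the AVI
# (urandom data, like MPEG-4, gives gzip nothing to squeeze).
dd if=/dev/urandom of=/tmp/cachetest.bin bs=1M count=256 2>/dev/null

# Flush dirty pages, then (as root) drop the clean page cache.
sync
echo 3 > /proc/sys/vm/drop_caches

time gzip -c /tmp/cachetest.bin > /dev/null   # cold cache: disk-bound
time gzip -c /tmp/cachetest.bin > /dev/null   # warm cache: CPU-bound

rm -f /tmp/cachetest.bin
```

Same pattern as my two runs above: the first timing measures the disk, the second measures the CPU.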
So it's very important to include the settings for the benchmarks, especially the FS, because with FS benchmarks the numbers completely change their meaning depending on your settings.
Thanks for that explanation, it makes a lot of sense.
OK, why not try benchmarking Ubuntu running on ReiserFS partitions?
I think that would have made a difference in the results...
Ubuntu loads tons of unnecessary drivers.
Looks like Ubuntu got a serious beat-down. Very interesting comparison (or not?). But even if it came out on top, this still fails to answer the question, which is: has Ubuntu become slower over time?