Fedora v. Ubuntu: A Performance Look
Need better tests...
The issue I have with most of these tests is that they are system/RAM/hard-drive tests more than anything.
There are two main kinds of tests that should be done: RAM used, and boot/load times.
How much RAM is used in a default config once you're at the desktop?
How much RAM is Xorg using once you're at the desktop (yes, it's different for every distro, and it changes the longer it's running)?
How much RAM does Firefox use with a blank page?
How much RAM does OpenOffice Writer use?
How long does it take the system to boot, to get to the login, or to the desktop?
How long does it take to load Firefox, the first time? The second time?
How long does it take to load OpenOffice Writer, the first time? The second time?
I'm sure there are many other things to test for these two types of tests.
If you're going to do a performance test of the distro, test the distro, not the system it's running on.
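Measurements like the ones above are easy to script. A minimal sketch, assuming a Linux box with /proc mounted (the command being timed is a placeholder, not a real benchmark harness):

```python
import subprocess
import time

def meminfo_kb(field):
    """Read one field (e.g. 'MemTotal', 'MemFree') from /proc/meminfo, in kB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

def used_ram_kb():
    # "Used" here means total minus free minus buffers/cache -- the figure
    # people usually mean when asking how much RAM the desktop takes.
    return (meminfo_kb("MemTotal") - meminfo_kb("MemFree")
            - meminfo_kb("Buffers") - meminfo_kb("Cached"))

def launch_time(cmd):
    """Wall-clock seconds to run a command to completion."""
    start = time.time()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.time() - start

if __name__ == "__main__":
    print(f"RAM in use: {used_ram_kb()} kB")
    # First (cold) vs. second (warm) start. 'true' is a stand-in: timing
    # Firefox this way would need the app to exit after rendering its page.
    for attempt in ("first", "second"):
        print(attempt, "run:", round(launch_time(["true"]), 3), "s")
```

Running the same script on each distro's default desktop would at least make the RAM and load-time numbers comparable across installs.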
Lousy, limited, and uninteresting set of tests.
Try gtkperf, for example, and a Firefox benchmark.
I see too many people bitching here for nothing...get a life. Also, testing for any kind of RAM usage is worthless, as unused RAM is wasted performance. You kids need to go back to school and stay there.
All I could think of when reading the benchmarks was this:
Linux is Linux
Didn't matter if it was Red Hat or Ubuntu; both seemed to perform about the same to my eyes.
Personally, I think that's good news. You have two different distributions which use two different Package Management systems, are run by two different companies, and cater to two different markets...
that when placed head to head perform just about the same...
Fedora 7 Test 2 is built with heavy debugging enabled, like older test releases, including in kernel space. Without looking under the hood and comparing components and configurations, the benchmarks cannot be used for any meaningful performance comparison.
I agree with this. There is no mention of the kernel configuration used on the distributions, so it's hard to know what else is going on in the background that would influence the outcome of the tests. I'm not an expert on which tests should be run, like some in this thread claim to be, but I do know that distributions choose many different sets of defaults when compiling their kernel and various other programs. It would be beneficial to go into the different kernel options, and possibly boot parameters, whenever a difference is known to take a lot of memory or be slower than the default another distribution uses. The types of tests and results given in this article could then still be presented, but with more of a cause-and-effect explanation.
Originally Posted by Bradinger
I'm also open to having other tests included in these articles, but I don't know which tests would be good to include. I agree that whatever tests are chosen should compare the software more than the underlying hardware. I'll look around and see if I can find some performance tests that fit this description. One place I'll start is this project:
EDIT: This test stands out after a brief look at the above list:
This program is designed to test system responsiveness by running kernel compilation under a number of different load conditions. It is designed to compare different kernels, not different machines. It uses real workloads you'd expect to find for short periods in everyday machines, but sustains them for the duration of a kernel compile to increase the signal-to-noise ratio.
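The quoted idea, timing the same job under different background loads, can be sketched generically. Everything here (the busy-loop load generator, the placeholder command) is a stand-in for illustration, not the actual benchmark's code:

```python
import multiprocessing
import subprocess
import time

def cpu_load(stop):
    # Busy-loop until told to stop: a crude stand-in for the benchmark's
    # real load generators (memory load, I/O load, etc.).
    while not stop.is_set():
        pass

def time_under_load(cmd, workers):
    """Wall-clock seconds for cmd with `workers` CPU hogs running."""
    stop = multiprocessing.Event()
    procs = [multiprocessing.Process(target=cpu_load, args=(stop,))
             for _ in range(workers)]
    for p in procs:
        p.start()
    try:
        start = time.time()
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return time.time() - start
    finally:
        stop.set()
        for p in procs:
            p.join()

if __name__ == "__main__":
    # A kernel build ('make -j2' in a source tree) would go here;
    # 'sleep 1' is only a placeholder workload.
    for n in (0, 2):
        print(f"{n} hogs: {time_under_load(['sleep', '1'], n):.2f} s")
```

Running the identical compile and loads on each distro's kernel is what makes the comparison about the kernel rather than the hardware.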
Last edited by joshuapurcell; 03-08-2007 at 11:57 AM.
Apples and Oranges
That's what I mean.
The article mentions Fedora 7, yet FC7 Test 2 is not even a release candidate. You cannot jump to conclusions from giving FC7 Test 2 a try. It is a snapshot of the Fedora development tree, built with extra debug options that won't be used in the final release, and it is noticeably slower because of that. Using it for benchmarks is, so to speak, ridiculous.