
Thread: Fedora 10 vs. Ubuntu 8.10 Benchmarks

  1. #11

    Quote Originally Posted by Takla View Post
    I think the more interesting comparison is with the series of tests which showed Ubuntu's performance decline very sharply after 7.04 and recover a little with 8.10. The fact that Fedora 10 and Ubuntu 8.10 are in effect identical performers leads me to wonder if all desktop distributions have suffered a big performance hit after kernel 2.6.15 (the Ubuntu 7.04 kernel).
    Sorry, but you are only the 100,000th person who believes that the tests Phoronix ran with Ubuntu 7.04, 7.10, etc. were correct. There is enough proof that something went wrong during those tests (e.g. my P3-1000MHz gets nearly the same numbers for Ubuntu 8.10 as the tested Core2Duo 1.87GHz, and my Pentium M 1.7GHz gets nearly the same numbers for Ubuntu 8.10 as Ubuntu 7.04 did in the Phoronix test, etc.), so the numbers from the old tests can't be trusted.

    @Michael :

    Now that Ubuntu 8.10 and Fedora 10 are marked stable, this would be a good time to rerun the tests with Ubuntu and Fedora (7.x, 8.x, etc.) on the same hardware as the "Fedora 10 vs. Ubuntu 8.10" test.

  2. #12

    Quote Originally Posted by glasen View Post
    @Michael :

    Now that Ubuntu 8.10 and Fedora 10 are marked stable, this would be a good time to rerun the tests with Ubuntu and Fedora (7.x, 8.x, etc.) on the same hardware as the "Fedora 10 vs. Ubuntu 8.10" test.
    If you wait 20 days, you could include openSUSE 11.1 in that too.

  3. #13

    Very nice. I always wanted to see some 64-bit benchmarks too. Thanks for the review!

  4. #14

    Quote Originally Posted by deanjo View Post
    OS vs. OS comparisons should really be done using the distro's packages anyhow, as that is how most people run those packages. In a hardware comparison test, the same version of the OS should be used and the packages recompiled.
    OS vs. OS comparisons can use the defaults, sure, but comparing performance differences between library versions is even better, so that users know they can get better performance by installing the newer or older libraries.

    Compilation doesn't have to be required; that doesn't really have anything to do with this, since that's what binary packages are for. The point is that you should hopefully be able to pinpoint the cause of a slowdown to differences in the libraries; if those are the same, then you know the performance loss lies elsewhere.

    You're probably right, though: most users probably don't care that much. But if it were easier to install newer libraries and compilation were required less often, more users might. IMO, the focus should be on the actual programs that cause the differences in performance. If you don't trace the problems to where they actually are, they'll never get solved.
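    The isolate-the-variable idea above can be sketched in a few lines. This is a hypothetical illustration, not anything Phoronix actually runs: the benchmark names, library names, and version strings are all made up. The logic is simply: if a test regresses between two runs and its underlying library changed version, the library is a suspect; if the library is identical, the slowdown must lie elsewhere.

    ```python
    # Hypothetical sketch: attribute benchmark slowdowns to library version
    # changes, or rule them out. All names and numbers below are invented.

    def attribute_slowdown(run_a, run_b, threshold=0.05):
        """Compare two benchmark runs (times in seconds, lower is better).

        For each test that got slower in run_b by more than `threshold`,
        report whether the library the test depends on changed version.
        """
        findings = {}
        for test, time_a in run_a["results"].items():
            time_b = run_b["results"][test]
            if time_b > time_a * (1 + threshold):  # regression in run_b
                lib = run_a["test_deps"][test]
                old, new = run_a["libs"][lib], run_b["libs"][lib]
                if old != new:
                    findings[test] = f"suspect {lib} ({old} -> {new})"
                else:
                    findings[test] = "libs identical; slowdown lies elsewhere"
        return findings

    # Two made-up distro runs of the same two tests.
    ubuntu = {
        "results": {"gzip": 12.0, "lame": 30.0},
        "test_deps": {"gzip": "zlib", "lame": "libmp3lame"},
        "libs": {"zlib": "1.2.3", "libmp3lame": "3.98"},
    }
    fedora = {
        "results": {"gzip": 14.0, "lame": 30.2},
        "test_deps": {"gzip": "zlib", "lame": "libmp3lame"},
        "libs": {"zlib": "1.2.3.3", "libmp3lame": "3.98"},
    }

    print(attribute_slowdown(ubuntu, fedora))
    # gzip regressed and zlib changed, so zlib is flagged; lame's 0.7%
    # difference is within the threshold and is not reported at all.
    ```

    With identical library versions on both sides, any remaining gap points at the compiler, kernel, or distro configuration instead, which is exactly the kind of pinpointing the post is asking for.
    
    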

  5. #15


    Consistency in testing?

    I was looking through the recent set of tests (vs. Mac, vs. Fedora, vs. OpenSolaris, etc.) and I'm surprised that there doesn't seem to be a consistent set of test results published. While you seem to use the same test suite, the results that are shown look cherry-picked.
    Perhaps these are just highlights showing the interesting comparisons, but it would be good to publish links to the entire set of results; otherwise there's the chance that an unfair comparison is being made by showing only the favorable results.
