
Thread: Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks

  1. #1
    Join Date
    Jan 2007
    Posts
    13,407

    Default Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks

    Phoronix: Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks

    Following the release of the first beta for Red Hat Enterprise Linux 6.0 back in April, we delivered our first RHEL 6.0 benchmarks, putting it up against CentOS 5.4 and Fedora 12. Now that the second beta of Red Hat Enterprise Linux 6.0 has been released, we took the workstation build and benchmarked it against the latest releases of Ubuntu, CentOS, and openSUSE.

    http://www.phoronix.com/vr.php?view=15104

  2. #2
    Join Date
    Sep 2006
    Location
    PL
    Posts
    906

    Default

    benchmarking enterprise distributions.

    well, i don't know about this, but it feels fundamentally wrong somehow.

  3. #3
    Join Date
    Jan 2009
    Location
    Sweden
    Posts
    3

    Thumbs up

    Quote Originally Posted by yoshi314 View Post
    benchmarking enterprise distributions.

    well, i don't know about this, but it feels fundamentally wrong somehow.
    Why? I like it. Although a few more HPC calculation benchmarks are needed.

  4. #4
    Join Date
    Sep 2008
    Posts
    270

    Default

    Quote Originally Posted by yoshi314 View Post
    benchmarking enterprise distributions.

    well, i don't know about this, but it feels fundamentally wrong somehow.
    Nothing wrong with making money out of open-source software.

  5. #5
    Join Date
    Apr 2010
    Posts
    60

    Default

    Once again, a useless comparison from Phoronix. All systems have different kernels, different xorg-server, different DEs etc. You are comparing apples and oranges.

    But enough with that. What's more disappointing about this article is that it lacks commentary/conclusion. If you are not willing to investigate why X performs better than Y, you shouldn't bother writing the article. Maybe Y has more background apps than X? Maybe there is a known regression in Y's kernel?

    Once again, you (Michael) wrote an article without including any analysis whatsoever, decreasing its value to "crap".

    Phoronix used to be better than this...

  6. #6
    Join Date
    Sep 2008
    Posts
    989

    Default

    Quote Originally Posted by dcc24 View Post
    Once again, a useless comparison from Phoronix. All systems have different kernels, different xorg-server, different DEs etc. You are comparing apples and oranges.

    But enough with that. What's more disappointing about this article is that it lacks commentary/conclusion. If you are not willing to investigate why X performs better than Y, you shouldn't bother writing the article. Maybe Y has more background apps than X? Maybe there is a known regression in Y's kernel?

    Once again, you (Michael) wrote an article without including any analysis whatsoever, decreasing its value to "crap".

    Phoronix used to be better than this...
    Every benchmark he does seems to elicit someone making this sort of comment.

    Truth: it would be better (for the long-term benefit of the products he is reviewing) if he would spend many hours/days figuring out exactly what components were directly responsible for quantitative deltas between executing the same code, and what could be done -- either to the application or to the platform -- to make it run faster.

    Falsehood: It is the responsibility of a journalist to do this.

    When someone gets murdered, journalists don't pull out the fingerprint dust, black lights, and security camera video tapes. They tell the public what was observed to happen. They let the police figure out exactly what the cause of the happenings was. It's not their job to figure out "if the person had ducked exactly 333 msec before they did, the bullet would have missed them."


    Truth: People don't run benchmarks on their computer all day, so telling someone that they can expect higher performance in a TTSIOD benchmark is not directly applicable to real-world experience with actually-useful applications. Furthermore, your mileage will vary dramatically depending on what library calls are made in your particular app and how carefully your app is optimized (does it pay attention to locality and cache coherency, take advantage of SSE, etc.). Your experience may also differ on a different architecture or hardware.

    Falsehood: There is no value in reporting benchmark results.

    At least some of Michael's benchmarks do, in fact, reflect real-world scenarios to some degree. If you have an extremely busy website serving static pages, and you use Apache, that benchmark at least will give you some meaningful data on which system is fastest on Michael's hardware. The Apache benchmark is not very synthetic; it's more of a stress-test of a very widely-used app. By contrast, the more artificial benchmarks have admittedly less significance. If you're using similar hardware, OS version and architecture as Michael, the benchmark really ought to be reproducible, at least within a 10 or 20% margin of error. For some of the tests where the results were dramatically different between distros, this margin of error isn't enough to invalidate, at least, the ordering of the distros. Dramatic differences usually are symptoms of major system design changes (such as from the 2.6.18 kernel all the way up to the modern 2.6.32) or of major performance regressions (TTSIOD slow on CentOS and RHEL? WTF?)


    I agree that it would be more useful for software engineers and project contributors to know the how and why of these, but if the performance is really that glaringly bad on a particular distro, the least the article can do is entice a contributor to run the test, reproduce the relatively poor results, then dig in and figure out why.

    Don't equate "non-ideal" with "crap". Michael has finite time and is providing something with positive value, no matter how limited it may be (especially when the tests are "close" -- differences of <10% could be due to ANYTHING). That is more than can be said of many people.

  7. #7
    Join Date
    Apr 2010
    Posts
    60

    Default

    Quote Originally Posted by allquixotic View Post
    Don't equate "non-ideal" with "crap". Michael has finite time and is providing something with positive value, no matter how limited it may be (especially when the tests are "close" -- differences of <10% could be due to ANYTHING). That is more than can be said of many people.
    Fair enough, "crap" may be too harsh here. Nevertheless, the results that are published are merely "observations", not "benchmarks". Here's what Wikipedia says:

    Benchmarking is not easy and often involves several iterative rounds in order to arrive at predictable, useful conclusions.
    Now, he only ran these tests once (he didn't say otherwise); there are no conclusions mentioned, no comparisons made, etc. What the article does is merely display tabular data. This is not benchmarking.

  8. #8
    Join Date
    Oct 2009
    Posts
    1,987

    Default

    We must not forget that while it's a beta, debug code is still slowing it all down; therefore, though it may be fun to benchmark, the results really have no meaning... at least not with respect to RHEL 6.

    I think a more interesting focus when looking at betas with their heaping piles of debug code, is to look at FUNCTIONALITY and to count bugs.

  9. #9
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,729

    Default

    @dcc24: PTS runs each benchmark a minimum of 3 times (IIRC), more if the results deviate enough.
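
    The adaptive repetition described above (a minimum number of runs, with extra runs while the results still deviate too much) can be sketched as follows. This is a hypothetical illustration using a coefficient-of-variation threshold, not the Phoronix Test Suite's actual logic; the function name, threshold, and run limits are assumptions.

```python
import statistics

def run_benchmark_adaptive(run_once, min_runs=3, max_runs=10, cv_threshold=0.05):
    """Run a benchmark at least min_runs times; keep re-running while the
    coefficient of variation (stddev / mean) of the results exceeds
    cv_threshold, up to max_runs total runs. Returns (mean, all_results)."""
    results = [run_once() for _ in range(min_runs)]
    while len(results) < max_runs:
        cv = statistics.stdev(results) / statistics.mean(results)
        if cv <= cv_threshold:
            break  # results are stable enough; stop re-running
        results.append(run_once())
    return statistics.mean(results), results
```

    With stable results (e.g. 10.0, 10.1, 9.9 seconds) the loop stops at the minimum of three runs; a noisy benchmark keeps accumulating runs until it either settles under the threshold or hits the run cap.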

  10. #10
    Join Date
    Jul 2009
    Posts
    416

    Default

    Quote Originally Posted by dcc24 View Post
    Once again, a useless comparison from Phoronix. All systems have different kernels, different xorg-server, different DEs etc. You are comparing apples and oranges.

    But enough with that. What's more disappointing about this article is that it lacks commentary/conclusion. If you are not willing to investigate why X performs better than Y, you shouldn't bother writing the article. Maybe Y has more background apps than X? Maybe there is a known regression in Y's kernel?

    Once again, you (Michael) wrote an article without including any analysis whatsoever, decreasing its value to "crap".

    Phoronix used to be better than this...
    The comparison is actually useful for people who are deciding which enterprise OS to use.

    Comparing apples to apples, as you call it, doesn't make much sense. Not many people are going to install CentOS and update the kernel and X.org in the real world. But it's useful to see if there are any performance benefits for CentOS 5.5 users in upgrading to what will be CentOS 6 (or RHEL 6).
