
Thread: Red Hat Enterprise Linux 6.0 Benchmarks

  1. #1
    Join Date
    Jan 2007
    Posts
    15,396

    Default Red Hat Enterprise Linux 6.0 Benchmarks

    Phoronix: Red Hat Enterprise Linux 6.0 Benchmarks

    A number of individuals and organizations have been asking us for benchmarks of Red Hat Enterprise Linux 6.0, which was released earlier this month; we had already benchmarked beta versions of RHEL6 in past months. For those interested in benchmarks of Red Hat's flagship Linux operating system, here are some of our initial results comparing the official release of Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 5.5, openSUSE, Ubuntu, and Debian.

    http://www.phoronix.com/vr.php?view=15510

  2. #2
    Join Date
    Nov 2010
    Posts
    2

    Default

    It could be interesting to add Oracle's kernel from Oracle "Unbreakable" Linux to these benchmarks: http://www.oracle.com/us/corporate/press/173453

  3. #3
    Join Date
    Oct 2006
    Location
    Israel
    Posts
    626

    Default

    I assume that PTS still uses self-compiled binaries, right?
    DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB, GTX780, F20/x86_64, Dell U2711.
    SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F20/x86_64, Dell U2412.
    BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F20/x86_64.
    LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F20/x86_64.

  4. #4
    Join Date
    Oct 2008
    Posts
    64

    Default

    Quote Originally Posted by gilboa View Post
    I assume that PTS still uses self-compiled binaries, right?
    Yes. And because of this, the whole comparison is completely senseless. Phoronix isn't comparing how good a distribution's binaries are; it is comparing the efficiency of the distribution's compiler.

    The problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.
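    The compiler-versus-binaries point above can be illustrated with a minimal sketch (file names and loop sizes are made up for the example): building the same trivial source with the system's gcc at different optimization levels can swing the runtime by a large factor, so a self-compiled benchmark is largely exercising the distribution's toolchain rather than what the vendor actually ships.

    ```shell
    # Write a tiny compute kernel; 'volatile' keeps the optimizer
    # from folding the loop away entirely at -O2.
    cat > loop.c <<'EOF'
    #include <stdio.h>
    int main(void) {
        volatile double acc = 0.0;
        for (long i = 0; i < 50000000; i++)
            acc += (double)i * (double)i;
        printf("%f\n", acc);
        return 0;
    }
    EOF

    # Same source, same machine, same compiler -- only the flags differ.
    gcc -O0 -o loop_O0 loop.c
    gcc -O2 -o loop_O2 loop.c

    time ./loop_O0   # unoptimized build
    time ./loop_O2   # optimized build, typically several times faster
    ```

    Both binaries compute the identical result; only the generated code differs, which is exactly the variable a cross-distribution comparison of self-compiled tests ends up measuring.
    
    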

  5. #5
    Join Date
    Jan 2008
    Posts
    206

    Default

    I can't help it, but I don't see how benchmarks like "lame", "mafft", or the whole set of compression/decompression benchmarks are relevant for an enterprise distribution.

    They almost exclusively rely on the quality of the generated code (i.e. they are compiler benchmarks).

  6. #6
    Join Date
    Jan 2008
    Posts
    772

    Default

    Many companies have their own internal-use apps that will be compiled and run on whatever their preferred platform happens to be, or have certain apps that they customize and/or follow upstream development for. Some of these even do a lot of FFT and/or DCT operations (think communications engineering R&D). So while these benchmarks might not represent stereotypical "enterprise" use, they are at least indirectly relevant to a subset of corporate users. I'll agree that the benchmarking could be better-targeted (I doubt many engineers or scientists are sensitive to LAME performance on their work computers), but I think it's going overboard to say that it's completely senseless or irrelevant.

  7. #7
    Join Date
    Oct 2006
    Location
    Israel
    Posts
    626

    Default

    Quote Originally Posted by glasen View Post
    Yes. And because of this, the whole comparison is completely senseless. Phoronix isn't comparing how good a distribution's binaries are; it is comparing the efficiency of the distribution's compiler.

    The problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.
    I'm a big RHEL user - we use it to deploy our own software stack - which makes part of the comparison, namely kernel performance (a large chunk of our software is kernel-based) and compiler performance, paramount.
    However, the other half of our software stack is standard - e.g. DB, web servers, etc. - and we wouldn't -dream- about using unsupported binaries for these roles - especially given the huge number of patches included by RH.
    It strikes me that PTS should, whenever possible, use distribution-supplied binaries instead of simply defaulting to self-compiled binaries. (I believe the same point was raised by the Fedora devs the last time a thread was started about deploying PTS as a general regression-detection tool.)
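    The suggestion above can be sketched in a couple of commands (gzip is just an illustrative package; any benchmarked tool works the same way): before benchmarking "the distribution", verify the binary under test is the one the vendor packaged and patched, not a local build that happens to shadow it in $PATH.

    ```shell
    # Which binary will actually run?
    command -v gzip

    # On an RPM-based system such as RHEL, ask which package owns it.
    # A distribution-supplied binary resolves to a package name; a
    # self-compiled one in /usr/local typically does not.
    rpm -qf "$(command -v gzip)" 2>/dev/null \
        || echo "not owned by any package (likely self-compiled)"
    ```
    
    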

    - Gilboa
    DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB, GTX780, F20/x86_64, Dell U2711.
    SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F20/x86_64, Dell U2412.
    BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F20/x86_64.
    LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F20/x86_64.

  8. #8
    Join Date
    Jun 2010
    Location
    Poland
    Posts
    26

    Default

    Is this test a joke?! Why is the author pitting RHEL 6 against desktop distributions that are free of charge, while RHEL 6 costs money?

    I don't know why systems like SUSE 11, Ubuntu Server 10.10, or CentOS 5.5 aren't included.

  9. #9
    Join Date
    Jan 2008
    Posts
    772

    Default

    Just pretend that it's CentOS 6 or Scientific Linux 6 instead of RHEL 6. Problem solved.

  10. #10
    Join Date
    Aug 2009
    Location
    south east
    Posts
    342

    Cool Good test

    The way I see it, you've got a write-fast, read-slow file system.

    After installation, right out of the gate you've got issues. SELinux requires extra hooks into various libraries, so even if you're not running it, you're running it. The file-system driver had to be modified to include support for it.

    If you actually are running SELinux, you'll see a huge performance hit.
    Mandatory Access Control on a server sucks.

    I was surprised to see it perform as well as it did.
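    Given the claim above about SELinux overhead, a quick sanity check before comparing numbers is to confirm what mode SELinux is actually in, since "installed but permissive" and "enforcing" behave quite differently under metadata-heavy workloads. This sketch only uses the standard SELinux userland tools and falls back gracefully where they are absent:

    ```shell
    # Report the effective SELinux mode: Enforcing, Permissive, or Disabled.
    # getenforce ships with the SELinux utilities (libselinux-utils on RHEL);
    # on systems without them, just say so instead of failing.
    if command -v getenforce >/dev/null 2>&1; then
        getenforce
    else
        echo "SELinux tools not installed"
    fi
    ```

    Reporting the mode alongside benchmark results would make comparisons like the ones in this thread considerably easier to interpret.
    
    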
