
Thread: Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks

  1. #11
    Join Date
    Apr 2010
    Posts
    65

    Default

Again, the data presented in this article would be misleading for anyone trying to decide which enterprise OS to use based on it. The debugging code alone is reason enough not to compare them.

The only real benchmark is running the application itself (the one you will be using in a production environment) on both OSes, properly updated and configured. If there are still performance differences that are not caused by upstream, then and only then can you say "distro X performs better than distro Y".
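A minimal sketch of that "benchmark the real workload" idea: time the actual production command several times on each distro and compare wall-clock numbers. The `sleep 0.2` below is only a stand-in for whatever application you actually run.

```shell
#!/bin/sh
# Time the workload you actually care about, not a synthetic proxy.
# 'sleep 0.2' stands in for the production command; replace it.
workload() { sleep 0.2; }

for run in 1 2 3; do
  start=$(date +%s%N)                    # nanoseconds (GNU date)
  workload
  end=$(date +%s%N)
  echo "run $run: $(( (end - start) / 1000000 )) ms"
done
```

Repeating the run a few times, on otherwise identical configurations, is what makes the comparison meaningful.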

  2. #12
    Join Date
    Jul 2008
    Posts
    1,730

    Default

ffmpeg: Ubuntu 1% faster than openSUSE and "the winner".
7-zip: SUSE 2% faster than Ubuntu and "virtually the same".

    The bias is very hard to miss.

  3. #13
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,587

    Default

    I really wish Michael would either post the PTS results on the tracker or give details on the package selection. openSUSE by default installs the desktop kernel instead of the more server oriented -default kernel.
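As a quick sanity check on that point, the booted kernel flavour is visible in the release string; openSUSE appends `-desktop` or `-default` to `uname -r` output. A small sketch (flavour descriptions are a rough summary, not an exhaustive comparison):

```shell
#!/bin/sh
# Report which kernel flavour is booted. openSUSE encodes the
# flavour ("-desktop" vs "-default") in the kernel release string.
kernel="$(uname -r)"
echo "Running kernel: $kernel"
case "$kernel" in
  *-desktop) echo "desktop flavour (full preemption, lower latency)" ;;
  *-default) echo "default flavour (more server oriented)" ;;
  *)         echo "some other flavour or distro kernel" ;;
esac
```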

  4. #14
    Join Date
    Feb 2010
    Location
    Sweden
    Posts
    58

    Default

ffmpeg: Ubuntu 1% faster than openSUSE and "the winner".
7-zip: SUSE 2% faster than Ubuntu and "virtually the same".

    The bias is very hard to miss.
Either that, or Michael wrote it that way because the difference between the best and worst cases in the 7-zip benchmark was 4%, while it was 13% in the ffmpeg benchmark.

  5. #15
    Join Date
    Jul 2008
    Posts
    1,730

    Default

Well, it happens in almost every article. If something is faster than Ubuntu, you have a good chance of finding a "virtually the same". But with Ubuntu leading...

  6. #16
    Join Date
    Feb 2010
    Location
    Sweden
    Posts
    58

    Default

Or it could be that people not using Ubuntu see such quotes more often than the rest of us do. I think we need some real statistics to figure this out.

  7. #17
    Join Date
    Aug 2009
    Posts
    13

    Default

With the EXT4 issues possibly skewing the results, I'd really love to see a comparison where all the reading/writing is NOT done to the local disk, i.e. using NFS mounts for all your data and work.

The reason being that, in general (for non-home users), the local disk of a Linux/Unix server/workstation is usually just there for the OS, tmp, and the bundled software. All the real data and custom software/apps are stored on some type of NAS (even if that NAS is just another Linux/Unix machine).

I'd love to see a benchmark of how the different OSes deal with reading/writing NFS-mounted data.

  8. #18
    Join Date
    Jul 2008
    Posts
    1,730

    Default

    Quote Originally Posted by matobinder View Post
With the EXT4 issues possibly skewing the results, I'd really love to see a comparison where all the reading/writing is NOT done to the local disk, i.e. using NFS mounts for all your data and work.

The reason being that, in general (for non-home users), the local disk of a Linux/Unix server/workstation is usually just there for the OS, tmp, and the bundled software. All the real data and custom software/apps are stored on some type of NAS (even if that NAS is just another Linux/Unix machine).

I'd love to see a benchmark of how the different OSes deal with reading/writing NFS-mounted data.
And what filesystem should be used as the basis for the NFS export on the server?

How about using tmpfs for the tests?

    I mean, if we are marching forward into the realm of 'no home user is doing that', it should be done right.
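A sketch of that tmpfs suggestion: put the benchmark's working directory on a RAM-backed filesystem so local-disk behaviour drops out of the measurement entirely. Mounting requires root, and the size and paths below are arbitrary examples.

```shell
#!/bin/sh
# Put the benchmark working directory on tmpfs (RAM-backed) so disk
# I/O never enters the picture. Needs root to actually mount.
if [ "$(id -u)" -eq 0 ]; then
  mkdir -p /mnt/bench
  mount -t tmpfs -o size=4g tmpfs /mnt/bench
  bench_dir=/mnt/bench
else
  bench_dir=$(mktemp -d)   # unprivileged fallback: plain disk dir
  echo "not root: falling back to $bench_dir on disk"
fi
echo "benchmark working directory: $bench_dir"
# ...run the I/O-heavy tests with their data under $bench_dir...
```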

  9. #19
    Join Date
    Aug 2009
    Posts
    13

    Default

    Quote Originally Posted by energyman View Post
And what filesystem should be used as the basis for the NFS export on the server?

How about using tmpfs for the tests?

    I mean, if we are marching forward into the realm of 'no home user is doing that', it should be done right.
I guess I wouldn't care so much about which filesystem is used on the NFS server side. I've just come across so many cases in the past where one Linux release performs very differently on NFS-mounted systems.
I deal with a variety of machines, mostly RHEL 5.x boxes, and we've found kernel patches that really change NFS performance: not just tunables, but bug fixes and so on. For example, keeping the NFS mount options the same, we saw a big NFS performance increase going from 5.2 to 5.4. Our initial migration from RHEL 3.8 to 5.2 brought a HUGE slowdown in NFS; whatever changed between 5.2 and 5.4 fixed some of that. I'm not an IS guy these days anymore, so I didn't pay attention to all the fixes that were made, but some of the kernel patches were directly related to NFS performance.

I guess what I would really be looking for is how the different distros perform out of the box, without tweaking, for NFS-related things. Maybe the only "tuning" would be to make sure you use the recommended mount options that the NAS vendor specifies.
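For a comparison like that to be reproducible, the client-side mount options need to be spelled out rather than left to distro defaults. A sketch, where the host, export path, and option values are illustrative only (not actual vendor recommendations), and which requires root plus a reachable server:

```shell
# Mount the NFS export with every client-side option written out, so
# identical settings can be reproduced on each distro under test.
# Host, export, and option values below are illustrative examples.
mount -t nfs -o vers=3,rsize=32768,wsize=32768,hard,timeo=600 \
    nas.example.com:/export/data /mnt/data

# 'nfsstat -m' prints the options actually negotiated per NFS mount;
# record that output alongside any benchmark numbers.
nfsstat -m
```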
