Results 41 to 50 of 90

Thread: Ubuntu 9.04 vs. Mac OS X 10.5.6 Benchmarks

  1. #41

    Default

    Quote Originally Posted by deanjo View Post
    Actually, when it comes to SQLite on OS X, SQLite sends an F_FULLFSYNC request to the kernel to ensure that the bytes are actually written through to the drive platter. This causes the kernel to flush all buffers to the drives and causes the drives to flush their track caches. So if anything, OS X should be slower.
    Some guy tried to prove OS X isn't eight times slower than Linux in PostgreSQL or MySQL using the same argument :>

    http://archives.free.net.ph/message/...1f71b8.pl.html

    http://www.anandtech.com/mac/showdoc.aspx?i=2520&p=7

  2. #42

    Default

    Quote Originally Posted by Hephasteus View Post
    SQL is threaded to high heaven. OS X is a microkernel OS. It loves threads. It's built from threads. A microkernel OS will always destroy a giant-kernel OS in SQL and graphics.
    You'd need GNU Debian HURD to even play in the same ballpark, and that barely exists. Linux and the Unixes will likely evolve into macrokernel-like OSes just because IBM, NVIDIA, ATI, AMD, etc. all need them to be like that so OpenCL doesn't turn into a turd for Linux.
    You're totally wrong. OS X doesn't have a chance when it comes to real-world benchmarks in SQL databases (but it's still better than Windows there). Hurd is a slow piece of crap and will probably never reach the performance of Linux or any Unix-like system.

  3. #43

    Default

    Quote Originally Posted by yotambien View Post
    Tell me which distributions, out of the main ones, update the whole stack every six months. And add to that the known preference of Ubuntu for new and flashy bits that break the system in many colourful ways.
    When it comes to bleeding-edge software, Fedora is the one I have in mind. Btw, I have never suffered from breakage in Ubuntu.

    I'm not sure I understand you here...but...yes?
    You missed my point here, but maybe I was wrong. Read Deanjo's response to Kano's post.

    I don't complain. The analogy doesn't hold. Two operating systems targeted at desktop users, one computer, one benchmark. What's wrong?
    The point is that those two operating systems take different approaches to the desktop. Ubuntu is still closer to a server than to a desktop in my opinion, but maybe that's intended.

    You are playing here. You know perfectly well that they didn't benchmark Ubuntu server edition. And you know that the differences between the server and desktop versions lie more in the packages than in the kernel included. So, give me the numbers showing that changing the kernel configuration from the default one has a critical and positive impact on performance.
    The disk I/O scheduler is different in the server edition, and the default Hz value is also different. You would probably see better results, but still "worse" than OS X, because Linux isn't buffering disk writes the way OS X is.

    Because of course, you know what Mac users run or not. The important point here is the difference between the benchmark applications used and what is actually being benchmarked. According to you, because Ubuntu user X doesn't play Tremulous, this benchmark doesn't mean anything to him/her.
    I said "except 2D and 3D". Those tests are really fair, and they show the Intel drivers suck a lot right now (maybe not in 2D when compared to other cards).

    Well, that's just too bad. Again, that it is a known fact doesn't change a thing.

    Insert coin, please (in six months time).
    My coin is a real-world benchmark, but I just want to stay on this topic.

  4. #44
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by deanjo View Post
    Actually, when it comes to SQLite on OS X, SQLite sends an F_FULLFSYNC request to the kernel to ensure that the bytes are actually written through to the drive platter. This causes the kernel to flush all buffers to the drives and causes the drives to flush their track caches. So if anything, OS X should be slower.

    Well... The kernel can't do that. The firmware on the hard drive decides what gets written to the platter. That isn't something the kernel has control over.

    The best the OS can do is flush its own buffers and then ask the drive to flush its cache; sometimes the firmware lies about having done it and other times it doesn't. It usually lies.

    But that sort of behavior would affect both Linux and OS X equally, so it wouldn't give either OS a performance advantage over the other.

    Otherwise that's a nice insight and I am still reading other replies in this thread.

    I'm just nitpicking.

  5. #45
    Join Date
    Sep 2006
    Posts
    714

    Default

    SQL is threaded to high heaven. OS X is a microkernel OS. It loves threads. It's built from threads. A microkernel OS will always destroy a giant-kernel OS in SQL and graphics.
    Holy crap, NO.

    First off.. The kernel OS X uses is NOT A MICROKERNEL.

    The OS X kernel is called XNU. It's a so-called 'hybrid' kernel that combines code from a development kernel that died in 1995 with BSD code. The Mach kernel at different times in its history was a microkernel and then not a microkernel. OS X does not use a microkernel.

    The Windows NT kernel was another one that was based on a microkernel design but is not a microkernel. Early versions of NT were microkernels, but unfortunately for that design Microsoft could not figure out how to make it scale, and the excess overhead caused by the message-passing design doomed it. So later versions of the NT kernel were monolithic.

    If you want to you can call them 'hybrid kernels', but I think that is just a made-up term to make the OS kernel sound all microkernel-ish and cool while it is, in fact, a modular monolithic design.


    You'd need GNU Debian HURD to even play in the same ballpark, and that barely exists. Linux and the Unixes will likely evolve into macrokernel-like OSes just because IBM, NVIDIA, ATI, AMD, etc. all need them to be like that so OpenCL doesn't turn into a turd for Linux. IBM probably already has it that way for custom CELL implementations that it doesn't share with the general public.
    Nope. Not going to happen. Microkernels were essentially a pipe dream, and only one microkernel-based OS actually made it into widespread use. That OS was QNX, which was popular for embedded systems due to its realtime-like nature.

    But it wouldn't scale to anything big and nobody wanted to use it as a desktop or server platform.

  6. #46
    Join Date
    Sep 2006
    Posts
    714

    Default

    The reality of the matter is that with this particular computer OS X is able to offer superior performance over Linux in many of the benchmarks.

    It's obvious that OS X developers put a lot of time and effort into optimizing and tweaking OS X to work as fast as possible on this machine.

    That is one of the nice things about having a sharp focus: OS X developers are able to concentrate on specific hardware configurations and specific uses, while Linux developers are producing a much more general-purpose OS that is far more modular, portable, and diverse. There are lots of layers and complexity designed into the system to allow this.

    There is a definite penalty for doing this. In the past Linux was almost always faster than OS X... but nowadays not so much; OS X has apparently caught up. It would be interesting to see benchmarks of higher complexity that create heavier loads and exhaust the file system buffers, to see whether Linux's scheduling and scalability come into play.

    Also it would be interesting to see the performance on higher end hardware.

    -------------------------------


    As far as proprietary drivers go... Nvidia uses the same general OpenGL and driver code for all the OSes that they produce the drivers for.

    For the longest time Apple's graphics performance sucked because it was Apple developers writing the drivers... but apparently they've decided to switch to the approach Linux uses, which is to shove a bunch of code originally designed and developed for Windows into the system.

    ----------------------

    Keep in mind that there are still other benefits from using the Linux approach to modular operating systems.

    The package system is superior to what Apple uses... Apple is able to make things pretty usable and very easy, but technically the package-repository approach is the superior one.

    Also Linux has much much better hardware support.

    A few examples of my machines...
    A)
    A Dell Mini-9 with 16GB of disk and 1GB of RAM. OS X can install on it, but the wireless network is flaky, the wired port is unusable, the sound is flaky, and it takes a lot of effort to get the hardware working as well as it can. And on this class of hardware OS X runs poorly.

    Meanwhile, with Linux on that system I have a snappy machine with somewhat lousy graphics performance... but Compiz still works and is quite usable with little effort.

    It uses about 256MB of RAM idling with a browser window or two open.

    B)
    This laptop. Running Fedora 11 beta. Core 2 duo with 4GB of RAM, 320 gig 7200rpm harddrive, intel graphics, etc.

    I got it at a fraction of the price that similar hardware would cost from Apple.

    Fedora runs great. Suspend works. Wifi works. My wacom tablet is now completely plug-n-play with extended input support and everything.

    On top of that I have KVM, which I use with virt-manager. I have XP, Vista, Windows 7, Ubuntu, Debian, FreeBSD, NetBSD, and even OS X installed in VMs. All of them work pretty well except for OS X...

    I use rdesktop to access the XP VM for times when I need to use MS Office.

    The thing is fabulous, but it is a hog. With XP and full-blown GNOME running, it consumes almost a gig and a half of RAM idling. This is 'high productivity' mode for when I need to get work done.

    C)
    My Dell Inspiron 4100.

    A fabulous machine; it's slowly being turned into a lovely hack box.

    It uses Debian Sid, one of the best operating systems you could ever use for anything. Usability sucks somewhat, but I don't care... this thing is for fun. I rescued it from work; they were going to throw it away. Too slow, ran like crap, crashed, hung, refused to boot... they thought it was hardware issues... It was just Windows sucking so hard it removed all the oxygen from the room.

    Pentium III 1.13GHz, 512MB of RAM, 40GB hard drive, some ancient Radeon Mobility with 16MB of VRAM. I discarded the DVD drive and jammed both modular bays with the batteries that had the most life left that I could find. It gets about 4-5 hours of battery life.

    This thing is _FANTASTIC_. Using Debian Sid it runs better than it ever did in its entire life. Seriously. Blows XP right out of the water.

    Midori runs fast and is very responsive. Uses about half the RAM that Firefox does. I am using the LXDE desktop environment. I added on some Gnome features I 'require' like network-manager and gnome-power-manager, but it's still mostly LXDE.

    LXDE is to Xfce what Xfce is to Vista.

    With the couple of GNOME add-ons, the system uses a total of 50MB of memory before opening any applications. Suspend works perfectly, both suspend-to-disk and suspend-to-RAM.

    I am going to get a couple of Alfa 500mW 802.11g USB adapters, make some homemade high-gain directional antennas, get that working with my GPS device and some mapping software, and go have some fun wardriving. With that configuration I should be able to pick up a wifi access point from up to a mile away.

    I can watch DVD-rip movies on it. It's fast, it's responsive, it sips only a small amount of power (for this era of laptop), and there is no way in hell you could ever possibly get OS X (or Vista, or Windows 7) to run nearly as well.
    Last edited by drag; 05-13-2009 at 04:57 AM.

  7. #47
    Join Date
    May 2009
    Posts
    2

    Default Ubuntu 9.04 vs OS X 10.5.6

    I think the best way to test these two operating systems is to install Ubuntu 9.04 as the sole OS on one Mac and run OS X 10.5.6 natively on another.

    Or we could use a PC with an Intel x86 hardware configuration similar to the Mac's. That would be very exciting to see...

  8. #48

    Default

    Quote Originally Posted by jybumaat View Post
    I think the best way to test these two operating systems is to install Ubuntu 9.04 as the sole OS on one Mac and run OS X 10.5.6 natively on another.

    Or we could use a PC with an Intel x86 hardware configuration similar to the Mac's. That would be very exciting to see...
    Hardware probably has nothing to do with those results. Using noatime and using EXT4 with delayed allocation will probably speed things up a lot. Sadly, Phoronix benchmarks only defaults when it comes to operating systems, and then some people believe OS X is faster in Postgres or MySQL than Linux, FreeBSD, Solaris...
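    For reference, the noatime tweak mentioned here is a one-line mount-option change; the entry below is an illustrative sketch (the UUID placeholder and the surrounding options are not from the benchmark setup):

```
# /etc/fstab: noatime skips the access-time inode write that every
# file read otherwise triggers
UUID=<root-fs-uuid>  /  ext4  noatime,errors=remount-ro  0  1
```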

    P.S. Why are there so many OS X advertisements now? On the left, on the right, on top... :>
    Last edited by kraftman; 05-13-2009 at 09:24 AM.

  9. #49
    Join Date
    May 2009
    Posts
    6

    Default

    Personally I find the benchmark flawed because it compares two systems operating in different modes. In my opinion (and please correct me if I'm wrong), Mac OS X runs 64-bit applications even though the kernel itself runs in 32-bit mode:

    Because device drivers in operating systems with monolithic kernels, and in many operating systems with hybrid kernels, execute within the operating system kernel, it is possible to run the kernel as a 32-bit process while still supporting 64-bit user processes. This provides the memory and performance benefits of 64-bit for users without breaking binary compatibility with existing 32-bit device drivers, at the cost of some additional overhead within the kernel. This is the mechanism by which Mac OS X enables 64-bit processes while still supporting 32-bit device drivers.
    (from http://en.wikipedia.org/wiki/64-bit)

    Mac OS X uses an extension of the Universal binary format to package 32- and 64-bit versions of application and library code into a single file; the most appropriate version is automatically selected at load time.
    ...
    Mac OS X v10.4.7 and higher versions of Mac OS X v10.4 run 64-bit command-line tools using the POSIX and math libraries on 64-bit Intel-based machines, just as all versions of Mac OS X v10.4 and higher run them on 64-bit PowerPC machines. No other libraries or frameworks work with 64-bit applications in Mac OS X v10.4
    (from http://en.wikipedia.org/wiki/X86-64)

    So what happens, in my opinion, is that the tests are executed in 64-bit mode, and, not surprisingly, Ubuntu's results are worse in this case. What brought me to this idea is the extremely bad performance in Crafty, where there should be no difference in performance at all. What Crafty mainly does is evaluate moves, access the transposition table, and use threads to utilize more processors. So apart from the thread usage, which involves system calls (but Linux is generally fast at creating threads), the rest of the program avoids system calls and should be OS-independent. On the other hand, the difference between 64-bit and 32-bit application performance is very noticeable because of the more efficient chessboard representation available in 64-bit mode.

    In my opinion, when comparing Mac OS X and 64-bit Ubuntu, the difference between these two systems will be marginal (apart, of course, from the graphics benchmarks and possibly that SQLite regression). I would even expect Ubuntu to be slightly better. I also think that ext4 should be used for the comparison: first, it will probably become the default file system in 9.10; second, it would be more interesting to see the performance of a state-of-the-art FS (well, almost, until it is replaced by something better like btrfs) instead of an old FS with known performance issues.

  10. #50
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,532

    Default

    Quote Originally Posted by drag View Post
    Well... The kernel can't do that. The firmware on the hard drive decides what gets written to the platter. That isn't something the kernel has control over.

    The best the OS can do is flush its own buffers and then ask the drive to flush its cache; sometimes the firmware lies about having done it and other times it doesn't. It usually lies.

    But that sort of behavior would affect both Linux and OS X equally, so it wouldn't give either OS a performance advantage over the other.

    Otherwise that's a nice insight and I am still reading other replies in this thread.

    I'm just nitpicking.
    Ultimately, yes, firmware does decide the data's fate. If the firmware is giving false responses back, there is nothing an OS can really do to affect that. If that is the case, though, it should affect all OSes. As a side note, there is something drastically wrong with SQLite performance on Ext3. Switch to XFS and you will get much faster results, and Ext3 has been getting slower since around ~2.6.18. It's something I have noticed for a while now.
