I find it even more interesting that he did use the Intel graphics for the tests. It shows just how bad they are in Linux.
Originally Posted by mendieta
The majority of Linux users today run 64-bit operating systems, since there are serious performance gains and no compatibility problems anymore. So I can hardly understand why these benchmarks were done with a 32-bit version of Ubuntu in the first place. 64-bit would make a difference in the ffmpeg, ogg and lame encoding tests, and I'm not talking about tweaking and hacking, just a very ordinary desktop choice. Currently the benchmarks show 17 vs 12 in Mac OS X's favour, while with 64-bit the result would pretty easily be 14 vs 15 in Ubuntu's favour.
A third benchmark with the 64-bit version of Ubuntu is essential, imho.
Yeah, I agree with you on most of your points. But I'd argue that microkernels are all over the place. In fact, I'd call every single memory and thread manager of every single SQL database system today a microkernel implementation hybridized onto its monolithic OS.
Originally Posted by drag
Does room for error exist? Absolutely! I think more tests need to be done using Mac systems for sure! Why not put a Mac server against a Debian server, and compare how several Linux desktop distros perform on a Mac system? Put both of them on x86_64 and build Linux with the same optimizations you get out of Intel Macs, i.e. SSE instructions. The Linux kernel doesn't even do much to benefit from those, to the best of my knowledge.
But as I see it now, we got our asses handed to us. Is it sad? You'd better bet it is. But we know we can improve. Excuses are excuses... The whole "Fedora is amazing" thing is a pile, because we all know the difference is nominal at best.
We lost... let's not act like 9-year-olds and debate that we really didn't. Certainly we can be grown men (and women) and discuss how we can actually make the situation better.
Originally Posted by Hephasteus
I don't know anything about that... but I do know that having a threading model and memory management isn't something unique to SQL databases. Pretty much every large multi-threaded application is going to have to manage its memory and threads and such.
Remember, what makes a microkernel a microkernel is that the actual kernel doesn't do anything more than message passing.
Then various separate processes 'orbit' that kernel and provide services that the OS can use. The 'Hurd', for example, is a collection of programs that provide low-level facilities on top of an L4 kernel. So you'd have a program that provides access to the hard drive, then another program that provides file system access, then another that provides POSIX APIs, etc. etc.
So all the kernel does is pass messages from one service daemon to another. It has zero functionality beyond that.
And, perversely, microkernels tend to be hugely complicated. They are usually quite a bit larger than a monolithic kernel, even though they have no functionality built in besides message handling.
It's pretty obvious why they have not really been that successful if you step back and look at what they really are.
Now, a modern monolithic kernel like the Linux kernel is a big object-oriented, multithreaded monster. Each major feature has its own thread, and there are a lot of different small 'kernel-level program'-type things that provide services and features used by the rest of the kernel. The difference is that there is no message passing going on, and they all occupy the same address space, so one can twiddle the other's bits and read the other's memory in a very efficient manner.
This is why proprietary software like Nvidia's drivers, which stuff huge amounts of code into the kernel despite being high-performance and feature-rich, tends to suck. The Nvidia driver can, at any time, arbitrarily access and overwrite any other part of the kernel. If the Nvidia driver has a memory overflow or other hiccup, it can easily blow away the memory containing... say... your ext3 support.
With normal applications, each one occupies its own virtual memory space. That is, each application sees its own unique address space. All the application sees is its own virtual 4GB of RAM that it can do with as it will. This is the 'virtual' part of virtual memory. Each application has its own VM sandbox, and it's very difficult for that application to break out of its memory sandbox... it can't even see what is going on in the memory of other applications. With kernel modules in Linux, however, there are no memory protection features like that, and a kernel module can very easily access any other part of the running kernel, viewing and editing it at will.
There really isn't anything that would stop it, and that is the major design deficiency of a monolithic kernel.
This is why the open-source video driver model tries to shove as much of the video driver into userspace as possible... the kernel portion is kept as small as possible, and the majority of the video processing happens via the DRI2 protocol in userspace.
Last edited by drag; 05-13-2009 at 01:53 PM.
Thank you for sharing your OS understanding. This has been a very enlightening thread. It's good to see more and more daemons running on Linux, and I see more clearly that a herd of daemons might not be that great. It'll be interesting watching them hybridize the kernel as it moves to more advanced video handling. I hope they do a good job, but it's looking like a bumpy ride so far. It can't last forever, though.
Making a 32-bit OS act as if it were 64-bit is not the way to make the situation better. You admit defeat only after a fair battle.
Originally Posted by L33F3R
Well, you could be a child and fight over details, or you could move on and improve your product. This principle has been demonstrated in Japanese business and has proven quite successful. You can't have a fair battle on two different platforms, one of which consists of in-house hardware; Apple is going to take the advantage in any OS fight because of this.
Originally Posted by Apopas
That brings up a good plus for Linux: unlike the Mac, it can use a large variety of hardware. Historically, problems have erupted with hardware drivers, but I have noticed that recently the driver situation has been getting a lot better. Additionally, I can build a $300 computer and play ETQW on high quality with Linux; the Mac mini is $600 and has a moderate HDD/RAM at best.
Don't look at the situation from a linear perspective. I agree more tests need to be done, but let's not forget there's room for improvement.
Exactly, that's my point. I believe that even on Apple's in-house hardware, Linux will have the best performance. Just use the newest version of it, and the newest is the 64-bit one. When we have it and it's easy to install, why stick with the old one? It is not a minor detail; it's the logical choice.
Originally Posted by L33F3R