I'm also shocked to see how poorly VirtualBox performed and how well Xen performed. I'd have used Xen a long time ago if it were easier to use. I'm surprised nobody has made a graphical tool for it yet; I'd gladly do it if I knew it better.
I'm kind of puzzled too, but in general very satisfied; I've been using KVM since the very beginning. At least this testing seems to knock down VirtualBox's major performance argument about their JIT (is that the right term?) compiler. I'm pretty happy to see the two major open-source virtualization components running almost hand-in-hand. That is very good news indeed.
I too hope to see such a comparison between Intel and AMD soon, Michael. Thanks for this bench.
I'm very surprised by the poor VirtualBox performance in computationally intensive tasks. Those are the ones that shouldn't care much whether they run in a virtual machine or on bare metal. The virtualization overhead should only become apparent when a process calls into the kernel; it's the kernel that is being lied to about the hardware, while user processes always run in a sort of virtual machine provided by the kernel anyway. A computationally intensive task should mostly run in user space. It will call into the kernel mainly for I/O (which should be rare, otherwise it's I/O-intensive, not computationally intensive) and memory management. Perhaps it's the latter that causes the performance hit. The tester should make sure that VirtualBox is configured to use Nested Paging, which should help with heavy memory management.
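For reference, this can be checked and toggled with VBoxManage from the command line. A quick sketch, assuming a VM named "myvm" (a placeholder):

```shell
# Check whether nested paging is enabled for the VM
# ("myvm" is a placeholder for your VM's name):
VBoxManage showvminfo "myvm" | grep -i "nested paging"

# Enable it (the VM must be powered off first):
VBoxManage modifyvm "myvm" --nestedpaging on

# Large pages can further cut guest memory-management overhead
# on hardware that supports them:
VBoxManage modifyvm "myvm" --largepages on
```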
As far as I/O is concerned, VirtualBox supports virtio-net devices but not virtio-block, so some penalty in I/O-intensive tasks is to be expected. And at least in my experience it has shitty SATA virtualization: I get kernel error messages in an Ubuntu virtual machine running under VirtualBox on Win7 with AHCI enabled (on the Windows side, that is). I had to revert to a virtual IDE controller to get rid of the error messages and the unresponsiveness that came with them.
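Both workarounds can be applied with VBoxManage; a sketch, assuming a powered-off VM named "myvm" and a placeholder disk path:

```shell
# Switch the first NIC to the paravirtual virtio adapter:
VBoxManage modifyvm "myvm" --nictype1 virtio

# Work around flaky SATA/AHCI emulation by attaching the disk
# to an IDE controller instead (disk path is a placeholder):
VBoxManage storagectl "myvm" --name "IDE" --add ide
VBoxManage storageattach "myvm" --storagectl "IDE" \
    --port 0 --device 0 --type hdd --medium /path/to/disk.vdi
```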
"Virtual Machine Manager" (virt-manager.org) on Ubuntu requires libvirt-bin and qemu-kvm to work (only on processors with hardware virtualization support, which is most of them nowadays).
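A minimal setup sketch, assuming an Ubuntu host of roughly that era (package names may differ on newer releases):

```shell
# Check that the CPU has hardware virtualization extensions;
# a non-zero count means Intel VT-x (vmx) or AMD-V (svm) is present:
egrep -c '(vmx|svm)' /proc/cpuinfo

# Install KVM, libvirt and the graphical manager:
sudo apt-get install qemu-kvm libvirt-bin virt-manager
```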
This KVM newbie, without reading any docs, fired up an Ubuntu VM just as fast as I could with VirtualBox. It's fast and doesn't need guest driver installs to avoid VirtualBox's cursor-grab behaviour, a plus! The internet connection (NAT by default) worked just fine for my needs.
The menu options have no 3D video, but under "Filesystem" you can share a physical path with the guest, with "mapped", "passthrough" and "squash" access modes.
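Under the hood this is libvirt's 9p filesystem passthrough. A sketch, with placeholder paths and a placeholder domain name "myvm" (accessmode can also be 'mapped' or 'squash'):

```shell
# Host side: add a shared folder to the domain XML (virsh edit myvm), e.g.:
#   <filesystem type='mount' accessmode='passthrough'>
#     <source dir='/srv/share'/>
#     <target dir='hostshare'/>
#   </filesystem>

# Guest side: mount the share over virtio-9p using the target tag:
sudo mount -t 9p -o trans=virtio hostshare /mnt
```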
Leaving twisties open messes up the view (a bug). You get all kinds of fine-grained processor control and even the BIOS options in the setup. You can start the VM on host boot, or even boot a kernel directly to skip the BIOS and GRUB/LILO stages.
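Both of those can also be done from the command line; a sketch, assuming libvirt is installed and a guest named "myvm" exists (name and kernel paths are placeholders):

```shell
# Start the guest automatically when the host boots:
virsh autostart myvm

# Direct kernel boot: edit the domain XML (virsh edit myvm) and put
# kernel/initrd entries in the <os> section, something like:
#   <os>
#     <type arch='x86_64'>hvm</type>
#     <kernel>/boot/vmlinuz-3.2.0-20-generic</kernel>
#     <initrd>/boot/initrd.img-3.2.0-20-generic</initrd>
#     <cmdline>root=/dev/vda1 ro</cmdline>
#   </os>
# which skips the guest BIOS and bootloader stages entirely.
```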
Last edited by snadrus; 03-29-2012 at 02:19 PM.
Great article Michael, and very useful to my current research. Shame Xen had problems with one of the mobos; I noticed this also occurred with Xen on a different AMD-based machine in another of your recent virtualisation benchmarks, so that's another plus for KVM and more justification for Red Hat's backing of it over the more mature and widely deployed Xen.
I'm currently researching my options for virtualising a number of Windows and *nix servers, but more Windows than nix. The problem is that all the benchmarks I've found so far comparing the different hypervisors have only benchmarked open-source apps like Apache, Dovecot and Postgres running under Linux guests. What I need to find out now is how a virtualised Server 2008 guest (64-bit) runs AD, Exchange, SQL Server, IIS, SharePoint and SMB under the main competing virtualisation technologies, compared to bare metal. I'd expect such a benchmark, if it exists, to use the PV/virtio (or whatever) drivers each virtualisation solution provides for Windows Server so as to give optimum results, but other than that the test guests should of course be identical. I know Phoronix is a Linux site, but maybe Michael or another reader has spotted such info on their web travels?
As it stands it looks like KVM has a slight performance edge over both Xen and VMware, and oVirt seems to be the management console of choice for KVM, but Xen and VMware are generally regarded as safer bets due to their relative maturity. Hyper-V seems to be the also-ran, only for the most devout MS shops.
Also, if anyone reading this is familiar with both KVM and VMware: are there any features VMware has that KVM/oVirt doesn't that would make you consider shelling out all the extra cash required to go the VMware route over the free and allegedly better-performing KVM? Yes, I know about ESX, but KVM and Xen seem to do a lot more for zilch quid.
Next time you do a virtualisation comparison, Michael, forget about VirtualBox as it obviously isn't competitive. Instead I'd like to know how Linux Containers (LXC) stacks up against KVM, Xen PV and VMware.