
Thread: Intel Core i7 Virtualization On Linux

  1. #1
    Join Date
    Jan 2007
    Posts
    14,799

    Default Intel Core i7 Virtualization On Linux

    Phoronix: Intel Core i7 Virtualization On Linux

    Earlier this month we published Intel Core i7 Linux benchmarks that looked at the overall desktop performance when running Ubuntu Linux. One area we had not looked at in the original article was the virtualization performance, but we are back today with Intel Core i7 920 Linux benchmarks when testing out the KVM hypervisor and Sun xVM VirtualBox. In this article we are providing a quick look at Intel's Nehalem virtualization performance on Linux.

    http://www.phoronix.com/vr.php?view=13734

  2. #2
    Join Date
    Sep 2006
    Posts
    714

    Default

    KVM kicks ass. It really really does.

So does the virt-manager and libvirt stuff. They support not only KVM but also the QEMU + KQEMU alternative for hardware without virtualization extensions.

Seriously. I have no doubt that KVM is going to be the dominant virtualization technique used on Linux, along with libvirt and other associated things.

On virtualized machines, I/O performance is the current limitation: disk performance and network performance especially. With KVM the Linux kernel has built-in paravirt drivers. These are drivers made specifically for running in a virtualized environment, and they avoid much of the overhead of emulated hardware and real drivers.

Network performance is especially affected by this. The best-performing fully virtualized card would be the emulated Intel e1000 1 Gb/s NIC. Doing benchmarks on an earlier version of KVM, I was able to get a 300% improvement in performance by switching to the paravirt network driver, with lower CPU usage in BOTH the guest and host systems. (On a dual-core system, the guest was restricted to a single CPU and the host primarily used the other.)

For Windows there is a paravirt driver for the network, but not for block devices... yet.
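To make the emulated-vs-paravirt distinction concrete, here is a rough sketch of the two QEMU/KVM invocations being compared. The binary name (`qemu-system-x86_64` vs. a distro's `kvm` wrapper), the `guest.img` path, and the memory size are made-up examples; the relevant difference is the `model=`/`if=` choice:

```shell
# Emulated Intel e1000 NIC and IDE disk: works with unmodified guest
# drivers, but every packet and disk request goes through full
# hardware emulation.
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=guest.img,if=ide \
    -net nic,model=e1000 -net tap

# Paravirt (virtio) NIC and disk: the guest's virtio drivers talk to
# the hypervisor directly, skipping most of the emulation overhead.
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=guest.img,if=virtio \
    -net nic,model=virtio -net tap
```

The guest needs virtio drivers for the second form (built into modern Linux kernels), which is why the emulated e1000 remains the fallback for guests without them.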

    -------------

But the terrific thing about KVM is its ability to deliver enterprise-level features while retaining the user-friendliness of things like VirtualBox or Parallels.

Now the userland and configuration stuff is not up to the same level as VirtualBox's yet, but it won't take long.

    ---------------


So far I've done an install of Debian, OpenBSD, FreeBSD, and Windows XP Pro in my virt-manager-managed KVM environment, and they all work flawlessly so far, not that I have had much of a chance to exercise them.

I am working on an install of Windows Server 2008, and pretty soon I'll get an install of Vista going. Then I am going to try to tackle OS X and see how that goes...

  3. #3
    Join Date
    Jan 2009
    Posts
    464

    Default

It appears that you can improve the disk I/O performance of KVM by using the LVM backend. There still seems to be some room for improvement, though. Hopefully it will come soon.

    http://kerneltrap.org/mailarchive/li...09/1/4/4590044

    F

  4. #4
    Join Date
    Apr 2009
    Location
    London, UK
    Posts
    1

    Thumbs up A note about VirtualBox

    Fantastic article as always!

I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you assign to it. I *really* hope Sun get a move on with improving multi-core/multi-CPU support in the near future.

For what it's worth, I made the switch from VMware to VirtualBox about 8 months back, and I have to say it's very promising how much headway Sun have been making, particularly OpenGL and DirectX support for Windows and *nix.

I guess I should try KVM again. I've never had any joy getting it to work properly in the past, and that's why I stuck with a third-party package that just 'works'.

It's probably in an article somewhere, but this is the sort of review that would be great with equivalent benchmarks for a Core 2 Quad to see how it stacks up to the i7. My main reason for multi-core is virtualisation, and it'd be great to see whether it's worth an upgrade yet.

Keep up the fab work, Michael!

  5. #5
    Join Date
    Sep 2006
    Posts
    714

    Default

Yeah, to clarify: there is no 'LVM backend' for KVM. It's just using logical volumes as raw block devices that are assigned as hard drives to guests. Treat them like raw devices. Just to avoid confusion.
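For anyone who hasn't done this, here is roughly what it looks like. The volume group name `vg0`, the LV name, and the sizes are made-up examples, and the QEMU binary name varies by distro:

```shell
# Carve a logical volume out of an existing volume group "vg0".
lvcreate -L 10G -n guest1-disk vg0

# Hand the LV to the guest as its hard drive: no image file or
# filesystem on the host side, just a raw block device.
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=/dev/vg0/guest1-disk,if=virtio
```

This skips the host filesystem and image-format layers entirely, which is where the disk I/O improvement comes from.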

    I used to do that when I ran Xen on my desktop, but I added iSCSI to the mix. I had a file server using LVM to divide up storage into logical volumes...

    So:

File server hosting logical volumes ---> iSCSI LUNs ---> 1 Gb/s network ---> virtual machines.

    So I had a few different VMs I'd fire up for one purpose or another.

  6. #6
    Join Date
    May 2007
    Posts
    16

    Default

Thanks for the nice article; it's just what I was looking for, as we need some virtualisation at work, and the main war is between VBox and KVM.

However, I'm missing some things, which you can hopefully add/clarify:
1. Was VT enabled in VirtualBox? IIRC 2.1.4 doesn't enable it by default. If it wasn't, would you rerun the tests where VirtualBox was *really* bad compared to the others?
2. Which exact versions of KVM (kernel and userspace) were you running? The one from the 2.6.28 kernel plus 0.84 from Jaunty?
3. Would you share the exact options for KVM and VirtualBox (IDE vs. SATA vs. SCSI drive emulation, etc.)?
4. Is there any reason you chose the non-free version over the OSE one?

    Regards and again thanks for the article
    Zhenech

  7. #7
    Join Date
    May 2008
    Posts
    598

    Default

    Quote Originally Posted by bbz231 View Post
    I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you assign to it. I *really* hope Sun get a move on with improving multi-core/multi-CPU support in the near future.
    Isn't VirtualBox a fork of Xen???

  8. #8
    Join Date
    May 2008
    Posts
    598

    Default

    Quote Originally Posted by drag View Post
    File server hosting logical volumes ---> iSCSI LUNs ---> 1 Gb/s network ---> virtual machines.
    So you had a LUN for each LV ???

    If so, how could that be?

Btw, I have never tried to use iSCSI, LUNs, or LVM... yet.

  9. #9
    Join Date
    May 2008
    Posts
    598

    Default

    Right now I use Xen on CentOS with images for each guest.

    It would be very interesting to see a test of image vs LVM vs partition.

  10. #10
    Join Date
    Nov 2007
    Location
    Die trolls, die!
    Posts
    525

    Default

How would Xen perform? A comparison would be nice.
