
Thread: Intel Core i7 Virtualization On Linux


  1. #1
    Join Date
    Jan 2007
    Posts
    15,103

    Default Intel Core i7 Virtualization On Linux

    Phoronix: Intel Core i7 Virtualization On Linux

    Earlier this month we published Intel Core i7 Linux benchmarks that looked at overall desktop performance when running Ubuntu Linux. One area we had not looked at in the original article was virtualization performance, but we are back today with Intel Core i7 920 Linux benchmarks testing out the KVM hypervisor and Sun xVM VirtualBox. In this article we are providing a quick look at Intel's Nehalem virtualization performance on Linux.

    http://www.phoronix.com/vr.php?view=13734

  2. #2
    Join Date
    Sep 2006
    Posts
    714

    Default

    KVM kicks ass. It really really does.

    So does the virt-manager and libvirt stuff. It supports not only KVM, but also the QEMU + KQEMU alternative for hardware without virtualization extensions.

    Seriously. I have no doubt that KVM is going to be the dominant virtualization technique used on Linux, along with libvirt and other associated things.

    On virtualized machines, I/O performance is the current limitation, disk and network performance especially. With KVM the Linux kernel has built-in paravirt drivers. These are drivers made specifically for running in a virtualized environment, and they avoid much of the overhead of driving emulated hardware with real drivers.

    Network performance is especially affected by this. The best-performing fully emulated card would be the Intel e1000 1Gb/s NIC. Benchmarking an earlier version of KVM, I was able to get a 300% performance improvement by switching to the paravirt network driver, with lower CPU usage in BOTH the guest and host systems (in a dual-core system, the guest being restricted to a single CPU and the host primarily using the other).

    For Windows there is a paravirt driver for the network, but not for block devices... yet.
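
    For anyone wanting to try them, a qemu-kvm invocation using the virtio paravirt drivers looks roughly like this (the disk path, memory size and tap interface are just placeholder examples, not from my setup):

    Code:
    # guest with a virtio (paravirt) disk and a virtio NIC; paths/names are examples
    qemu-kvm -m 1024 -smp 2 \
        -drive file=/dev/vg0/guest-disk,if=virtio,cache=none \
        -net nic,model=virtio \
        -net tap,ifname=tap0,script=no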

    -------------

    But the terrific thing about KVM is its ability to deliver enterprise-level features while retaining the user-friendliness of things like VirtualBox or Parallels.

    Now, the userland and configuration stuff is not up to the same level as VirtualBox yet, but it won't take long.

    ---------------


    So far I've done installs of Debian, OpenBSD, FreeBSD, and Windows XP Pro in my virt-manager-managed KVM environment and they all work flawlessly so far, not that I have had much of a chance to exercise them.

    I am working on an install of Windows 2008 and pretty soon I'll get an install of Vista going. Then I am going to try to tackle OS X and see how that goes...

  3. #3
    Join Date
    Jan 2009
    Posts
    466

    Default

    It appears that you can improve the disk I/O performance of KVM by using the LVM backend. There still seems to be some room for improvement though. Hopefully it will come soon.

    http://kerneltrap.org/mailarchive/li...09/1/4/4590044
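
    For example, carving out a logical volume and handing it to a guest as a raw block device would go roughly like this (volume group, volume and device names are made up):

    Code:
    # create a logical volume to back the guest's disk
    lvcreate -L 20G -n guest-disk vg0

    # then reference it in the libvirt domain XML as a raw block device:
    #   <disk type='block' device='disk'>
    #     <source dev='/dev/vg0/guest-disk'/>
    #     <target dev='vda' bus='virtio'/>
    #   </disk>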

    F

  4. #4
    Join Date
    Apr 2009
    Location
    London, UK
    Posts
    1

    Thumbs up A note about VirtualBox

    Fantastic article as always!

    I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you give it. I *really* hope Sun get a move on with improving multi-core/CPU support in the near future.

    For what it's worth, I made the switch from VMware to VirtualBox about 8 months back and have to say it's very promising how much headway Sun have been making, particularly OpenGL and DirectX support for Windows and *nix.

    I guess I should try KVM again; I've never had any joy getting it to work properly in the past, and that's why I stuck with a third-party package that just 'works'.

    It's probably in an article somewhere, but this is the sort of review that would be great with equivalent benchmarks for a Core 2 Quad to see how they stack up against the i7. My main reason for multi-core is virtualisation, and it'd be great to see whether it's worth an upgrade yet or not.

    Keep up the fab work, Michael!

  5. #5
    Join Date
    May 2008
    Posts
    598

    Default

    Quote Originally Posted by bbz231 View Post
    I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you give it. I *really* hope Sun get a move on with improving multi-core/CPU support in the near future.
    Isn't VirtualBox a fork of Xen???

  6. #6
    Join Date
    Sep 2006
    Posts
    714

    Default

    Ya, to clarify, there is no 'LVM backend' for KVM. You're just using logical volumes as raw block devices that get assigned as hard drives to guests. Treat them like raw devices. Just to avoid confusion.

    I used to do that when I ran Xen on my desktop, but I added iSCSI to the mix. I had a file server using LVM to divide up storage into logical volumes...

    So:

    File server hosting logical volumes ---> iSCSI LUNs ---> 1Gb network ---> virtual machines.

    So I had a few different VMs I'd fire up for one purpose or another.

  7. #7
    Join Date
    May 2008
    Posts
    598

    Default

    Quote Originally Posted by drag View Post
    File server hosting logical volumes ---> iSCSI LUNs ---> 1Gb network ---> virtual machines.
    So you had a LUN for each LV ???

    If so, how could that be?

    Btw. I have never tried to use iSCSI, LUN or LVM... Yet

  8. #8
    Join Date
    May 2008
    Posts
    598

    Default

    Right now I use Xen on CentOS with images for each guest.

    It would be very interesting to see a test of image vs LVM vs partition.

  9. #9
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by Louise View Post
    So you had a LUN for each LV ???

    If so, how could that be?

    Btw. I have never tried to use iSCSI, LUN or LVM... Yet
    It's pretty simple.

    With iSCSI you configure a block device to be exported over the network. LUN is just the SCSI term for identifying a drive.

    So what I did was just have a simple software RAID 5 array that I divided up using logical volume management. I'd create a logical volume to be used for a VM, then configure the iSCSI Enterprise Target to export the logical volume as a drive over the network.
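
    With the iSCSI Enterprise Target that boils down to an entry per logical volume in /etc/ietd.conf, something like this (the IQN and volume names are just examples):

    Code:
    # /etc/ietd.conf on the file server: export one LV as one target
    Target iqn.2009-04.local.fileserver:vm-debian
        Lun 0 Path=/dev/vg0/vm-debian,Type=fileio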

    With Xen I'd then use the Linux kernel's iSCSI support on my desktop to access it, and then use one of those as a raw device for each guest VM.
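
    On the desktop the kernel-side initiator (open-iscsi) picks the target up with something like this (portal IP and IQN are placeholders); the LUN then shows up as a local /dev/sdX that you can hand to the guest:

    Code:
    # discover targets on the file server and log in
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2009-04.local.fileserver:vm-debian -p 192.168.1.10 --login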

    With the KVM and virt-manager stuff they have it set up so that the VM can be configured to use iSCSI directly. I have not tried it with KVM yet.

    --------------------------

    This sort of thing is important if you're going to use VMs for business or whatever and want to take advantage of the "live migration" features. One of the requirements is that you have a common storage backend so that the VM has consistent access to its storage after the move.
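
    With libvirt/KVM the move itself is basically one command, as long as both hosts see the same storage (guest and host names made up):

    Code:
    # live-migrate a running guest to another KVM host over ssh
    virsh migrate --live vm-debian qemu+ssh://otherhost/system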

    Then, for reliability, you'd have to take advantage of other features like Ethernet bonding, Linux multipath, and maybe DRBD or other storage replication features, so that you'd be able to replicate storage and build highly reliable storage networks. All the details are beyond me though; I've done research but no actual implementation of stuff like that.

    That's one of the kick-ass things about KVM: you can then more easily take advantage of all the little features, drivers, and hardware support that have been developed for Linux server use in the enterprise.

    ---------------------

    Oh, and for these block-level protocols like iSCSI, the stuff from Red Hat's clustering things, or Fibre Channel... the security for these things sucks huge donkey balls. Their 'security' features are more for avoiding accidents and not so much for stopping attackers. So for security purposes you'd generally want to use a private network just for the storage, which is also good for performance. Of course, for more casual uses like home usage it's not that important.
    Last edited by drag; 04-22-2009 at 10:05 AM.

  10. #10
    Join Date
    May 2008
    Posts
    598

    Default

    Quote Originally Posted by drag View Post
    With iSCSI you configure a block device to be exported over the network. LUN is just the SCSI term for identifying a drive.
    OK, so LUN could be "/vol/iscsivol/tesztlun0" but is never exposed to the client?

    Quote Originally Posted by drag View Post
    So what I did was just have a simple software RAID 5 array that I divided up using logical volume management. I'd create a logical volume to be used for a VM, then configure the iSCSI Enterprise Target to export the logical volume as a drive over the network.

    With Xen I'd then use the Linux kernel's iSCSI support on my desktop to access it, and then use one of those as a raw device for each guest VM.
    Very interesting. Could you post the config file for the Xen guest here?

    RHEL and Novell use different ways of booting a Xen guest, and they also use different ways to access the image. One uses "file:" and the other uses "tap:".

    I have also read that when using LVM I should use "phy:".
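
    From what I've read, the disk line in the guest config would then look roughly like one of these (paths and device names made up):

    Code:
    # disk = [ 'file:/var/lib/xen/images/vm.img,xvda,w' ]    # file-backed image
    # disk = [ 'tap:aio:/var/lib/xen/images/vm.img,xvda,w' ] # blktap driver
    # disk = [ 'phy:/dev/vg0/vm-disk,xvda,w' ]               # LVM / raw block device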

    But what does it look like with iSCSI?

    What OS are you using as your Xen host?
