
Thread: Ubuntu 12.10: Linux KVM vs. Xen Virtualization Preview

  1. #11
    Join Date
    Jul 2012
    Posts
    12

    Default

    Quote Originally Posted by evanjfraser
    I know this was probably supposed to be a like-for-like comparison... but why wasn't the Xen guest a PV guest rather than HVM? I'm not sure why someone would choose HVM with Xen unless there was no other option (e.g. Windows guests).

    My testing with Xen 4.x showed PV guests to have approx 4x the memory benchmark performance of HVM guests, which makes a huge difference in SMP applications.

    Michael, could you please replicate the tests with a PV guest?

    (I'm not really a Xen fan, but this seems like a huge oversight)
    I was wondering exactly the same thing. Why use Xen with HVM when you can use PV?

    Also, as someone already pointed out, scenarios with 5, 10, or 15 VMs would be great for comparing Xen, KVM and VirtualBox.

    Though there is room for improvement in the future, it's nice to see articles like this.
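
    For anyone wondering what the difference looks like in practice, a PV guest is defined with just a few lines; a minimal sketch in xl/xm config style (names and paths are illustrative, not taken from the article):

    Code:
        # PV guest: no BIOS or device emulation; the kernel is
        # loaded from the guest disk via pygrub.
        name       = 'ubuntu-pv'
        bootloader = 'pygrub'
        memory     = 2048
        vcpus      = 2
        disk       = ['phy:/dev/vg0/ubuntu-pv,xvda,w']
        vif        = ['bridge=xenbr0']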

  2. #12
    Join Date
    Oct 2007
    Posts
    102

    Default Xen VGA Passthrough: 95% of bare metal on MS Windows

    Thanks a lot. If I were a vendor of PCs for hardcore gaming, I would sell computers with this config preinstalled.

    I would only change the GPUs for better DirectX game performance, and I would use Sabayon as the Linux OS.

    These kinds of tests are what I would like to read on Phoronix.

    Quote Originally Posted by Thanat0s
    I tested Xen VGA passthrough (ASUS Crosshair IV + Phenom X6 + PCI-E 1: HD 5870 (Windows HVM) + PCI-E 2: HD 6450 (Linux)), and it's very easy to configure in the secondary-passthrough configuration. That means you have to keep the Cirrus virtual GPU from Xen as your Windows primary adapter, but it is not actually used (in the end, no trouble at all).

    I dedicated the HD6450 to Ubuntu/Arch and the HD5870 to Windows.
    I ran a Unigine benchmark and I got 95% of the native Windows performance (4 vCPU HVM vs 6 cores native Windows, 4GB HVM vs 8GB native, same Catalyst version).

    From these results and what I have read in forums, the performance in many cases is (almost) the same as Windows on bare metal, so if you have the time and money it is a very good solution (and space on your desk to double everything ^^).

    Edit: it also works pretty well with KVM now: http://tavi-tech.blogspot.fr/2012/05...ra-17-and.html
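
    In xl/xm config terms, the secondary-passthrough setup described above boils down to roughly this (the PCI address and names are examples only; find your GPU's address with lspci, and the device has to be bound to pciback first):

    Code:
        # HVM guest with a secondary passthrough GPU. The emulated Cirrus
        # adapter stays as the Windows primary display; the real GPU is
        # handed through as an extra PCI device.
        builder = 'hvm'
        name    = 'win7-gaming'
        memory  = 4096
        vcpus   = 4
        stdvga  = 0                      # 0 = keep the Cirrus virtual GPU
        pci     = ['01:00.0']            # example BDF of the discrete GPU
        disk    = ['phy:/dev/vg0/win7,hda,w']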

  3. #13
    Join Date
    Nov 2011
    Posts
    12

    Default

    Quote Originally Posted by tangram
    I was wondering exactly the same. Why use Xen with HVM when you can use PV?
    Xen HVM is actually faster than Xen PV for some workloads; it really depends on the workload.
    (And vice versa: for some workloads Xen PV is faster than HVM.)

    If the workload consists of creating a lot of new processes (say, a kernel compile), then Xen HVM will be faster than Xen PV
    (assuming, of course, you're running the Xen PVHVM drivers in the HVM guest VM), because in Xen PV every new process creation
    needs to be trapped by the hypervisor so it can validate the pagetables of the new process. In Xen HVM guests you don't need to trap to the hypervisor to create new processes.

    If the workload consists of long-running processes, say, a MySQL server, then Xen PV might be faster.
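
    A quick way to see this effect yourself is to time a fork/exec-heavy loop in both guest types; a rough sketch, not a rigorous benchmark:

    Code:
        # Each iteration forks and execs /bin/true, i.e. creates a new
        # process. Run the same loop in a PV guest and a (PV)HVM guest
        # on the same host and compare the wall-clock times.
        time sh -c 'for i in $(seq 1 10000); do /bin/true; done'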

  4. #14
    Join Date
    Nov 2011
    Posts
    12

    Default

    Where can I get the VM cfg files and hypervisor/system settings for this benchmark? I'd like to run some benchmarks myself with the same settings.

  5. #15
    Join Date
    Jul 2012
    Posts
    12

    Default

    @pasik

    The article doesn't state PVonHVM. It states HVM, which ends up being a different beast. Also, with HVM you have emulation overhead.

    But that sure would be an interesting article: covering Xen PV, PVonHVM and HVM.
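
    For what it's worth, you can usually tell from inside the guest which mode you actually got; a rough sketch of what to look for (exact wording varies by kernel version):

    Code:
        # In a plain HVM guest the disks show up as emulated hd*/sd* devices;
        # with PVHVM drivers active you should see Xen PV frontends instead.
        dmesg | grep -i xen                      # e.g. a "Xen HVM" boot line
        lsmod | grep -E 'xen_(blk|net)front'     # PV frontend drivers loaded
        ls /dev/xvd*                             # PV block devices present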

  6. #16
    Join Date
    Nov 2011
    Posts
    12

    Default

    Quote Originally Posted by tangram
    @pasik
    The article doesn't state PVonHVM. It states HVM, which ends up being a different beast. Also, with HVM you have emulation overhead.
    Xen PVHVM drivers are in the upstream Linux kernel, so I believe they used PVHVM. If they didn't, then the benchmark is pointless, because they used VirtIO for the KVM guests! (VirtIO is the PV driver framework for KVM.)
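
    To illustrate the asymmetry: a VirtIO-enabled KVM guest is typically started with something like this (the image path is an example; libvirt generates the equivalent command line):

    Code:
        # KVM guest with paravirtual (VirtIO) disk and network -- the
        # KVM-side analogue of running a Xen guest with PVHVM drivers.
        qemu-kvm -m 4096 -smp 4 \
            -drive file=/var/lib/libvirt/images/guest.img,if=virtio \
            -net nic,model=virtio -net user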

  7. #17
    Join Date
    Sep 2011
    Posts
    139

    Default

    Quote Originally Posted by M1kkko
    The conclusion from this article: Run your operating system(s) on bare metal.
    That would be the conclusion if this so-called benchmark had been run on the same hardware.

  8. #18
    Join Date
    Apr 2012
    Posts
    62

    Default

    It has already been pointed out that Xen has various options for virtualization. Benchmarks can be very helpful, particularly if they are set up to provide optimum performance. The exact setup must be documented.

    I was surprised to see Xen tested with an Ubuntu HVM guest, and from the discussion here I gather that this guest wasn't even using PVHVM drivers.

    +1 for tangram and pasik and others who pointed that out.

  9. #19
    Join Date
    Apr 2012
    Posts
    62

    Default

    I had a second look at the benchmark and noticed that Xen was using ext4 partitions. IIRC, the Xen wiki clearly states that Xen should be used with LVM for the best disk I/O performance; the sketch below shows the config difference. Of course one would have to use the PVHVM drivers to reach the full potential under HVM, and, as already mentioned, why not use a PV guest in the first place?
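
    The difference is a one-line change in the guest config; an illustrative sketch (volume and image names are made up, not taken from the article):

    Code:
        # File-backed disk image sitting on an ext4 partition:
        disk = ['file:/var/lib/xen/images/guest.img,xvda,w']
        # vs. an LVM logical volume handed to the guest as a phy: device:
        disk = ['phy:/dev/vg_xen/guest,xvda,w']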

    Running a benchmark where one contender is set up in a more or less optimal way (KVM with the VirtIO drivers) and the other (Xen) in what seems to be a low-performance setup is highly questionable. It hasn't been explained why HVM was chosen over PV (or why both options weren't tested), nor is it clear whether or not the PVHVM drivers were used. It looks like most of the tests where Xen underperforms are disk-I/O-related.

    Since I'm running a Xen system and don't experience such performance issues, perhaps there is something wrong with the benchmark? For reference, my Windows 7 HVM guest achieves a Windows Experience Index (WEI) of 7.8 (out of 7.9) for disk I/O using an SSD and the GPLPV drivers. The drivers alone improved the WEI by a full index point.

    The way it stands now, the conclusion of the benchmark is simply misleading.
