
Thread: Multi-Core Scaling In A KVM Virtualized Environment

  1. #11
    Join Date
    Jan 2008
    Posts
    206

    Default 12 cores?!

    I can't help but find this benchmark misleading.
    12 threads are not 12 cores, and furthermore turbo kicks in when only a few threads are executing.

  2. #12
    Join Date
    Jan 2010
    Location
    Portugal
    Posts
    945

    Default

    Quote Originally Posted by Xanbreon View Post
    I do wonder how much the different generations of hyperthreading hurt or help performance: P4 HT, Atom 330 HT, Atom D510 HT, early i7 HT (e.g. the 920), and later i7 HT (e.g. the 960 you have, and maybe an 860 as well).
    I can tell you that the Atom benefits greatly from HT. It's probably the CPU that benefits the most from it, actually. I did some tests a while back with Cinebench on an Atom, and using two threads gave about 60% more performance than a single thread, IIRC. The same goes for compression.

  3. #13
    Join Date
    Nov 2009
    Posts
    22

    Default

    Most of the results are quite expected. From the article:
    "Lastly, with the x264 media encoding benchmark, with one and two cores enabled the performance was close between the host and guest, but the VT-x virtualized guest began to stray as the core count increased."
    What did you expect, that with KVM x264 would somehow scale better than without? When the difference is 2 FPS with one core, it's only to be expected that it grows to about 8 FPS with four cores; see the toy arithmetic below.
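
    A minimal sketch with made-up FPS numbers (20 FPS on one host core, scaling linearly; these are placeholders, not the article's results), only to illustrate that a constant relative slowdown makes the absolute gap grow with core count:

    # Toy numbers, just to illustrate the proportional-gap argument.
    host_fps = {1: 20.0, 2: 40.0, 4: 80.0}   # hypothetical host x264 results
    guest_ratio = (20.0 - 2.0) / 20.0        # guest 2 FPS behind on 1 core -> 0.9
    for cores, fps in sorted(host_fps.items()):
        gap = fps - fps * guest_ratio        # same 10% deficit at every core count
        print(f"{cores} core(s): expected gap ~{gap:.0f} FPS")
    # prints: 1 core(s) ~2 FPS, 2 core(s) ~4 FPS, 4 core(s) ~8 FPS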

    I also looked at the "TTSIOD 3D Renderer" and the GraphicsMagick resizing benchmarks. Those don't scale very well on the host.
    Using basic statistics, I made a graph of how I expected KVM to scale:
    http://img340.imageshack.us/img340/9...eenshottvk.png
    The situation seems more complex than what I assumed, but the graph can be expected to come down again when the application doesn't scale well on the host.
    (In the table, the KVM values are calculated, except for the first one; I estimated all the other values based on the graphs.)

  4. #14
    Join Date
    Dec 2010
    Posts
    15

    Default

    Quote Originally Posted by mtippett View Post
    I liked the results; the historic statement that virtualization doesn't work with multiple CPUs has now been reduced to "for some workloads it collapses at some point".
    Hmm, I'm not sure you can conclude that based on these test results. When I read them, I came to a different conclusion: when you increase the number of cores dedicated to virtual guests, you also increase the overhead of scheduling and managing the guest(s) on the host side. With *zero* cores available/dedicated to the host system, everything slows down as more cores are given to the guests while the host is ignored.

    It would be interesting to see whether these badly performing tests show the same curve when run with 5 cores for the guest + 1 for the host, and with 11 cores for the guest + 1 for the host (a sketch of such a setup follows below). If they do not share the same curve, the conclusion should be "don't forget to dedicate some resources to the host" and not "for some workloads virtualization collapses at some point".
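
    For anyone who wants to try that: a minimal sketch using the libvirt Python bindings, assuming a domain named "guest" (a placeholder) on a 12-core host. It pins the guest's vCPUs to physical cores 1..N and leaves core 0 to the host; switching GUEST_VCPUS between 5 and 11 gives the two setups above.

    import libvirt

    HOST_CPUS = 12    # physical cores on the test box
    GUEST_VCPUS = 11  # try 5 and 11; core 0 always stays with the host

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest")  # hypothetical domain name

    for vcpu in range(GUEST_VCPUS):
        # Pin vCPU n to physical core n + 1 so the guest never touches core 0.
        cpumap = tuple(cpu == vcpu + 1 for cpu in range(HOST_CPUS))
        dom.pinVcpu(vcpu, cpumap)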

  5. #15
    Join Date
    Oct 2010
    Posts
    90

    Default

    Another interesting thing is that in most cases, a benchmark that scaled nicely to six cores also benefited from enabling HT. (In other words, it looks like many of the benchmarks where HT decreased performance would also have done badly with twelve physical cores.)

  6. #16

    Default

    It was a very good test, Michael.

    Thank you for proving me wrong!

  7. #17

    Default

    Typically in virtualization software, the guest operates on 'virtual CPU cores'. These virtual CPUs are (depending on the hypervisor) treated as threads that the hypervisor can schedule. Depending on the workload, this scheduling can give bad results (the hypervisor's scheduler can fight with the guest OS's scheduler), so expect issues under high load. A solution is what is usually called 'CPU pinning', which lets you lock each virtual CPU to a specific physical core; see the sketch below. This might be something to look into.
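
    At the OS level, pinning is just thread affinity. A minimal Linux sketch in Python, with a made-up thread ID, since finding the real vCPU thread IDs depends on the hypervisor:

    import os

    vcpu_tid = 4242  # hypothetical TID of a vCPU thread (e.g. a QEMU thread)
    os.sched_setaffinity(vcpu_tid, {2})    # lock that thread to physical core 2
    print(os.sched_getaffinity(vcpu_tid))  # -> {2}

    With KVM/libvirt you don't have to do this by hand; "virsh vcpupin" exposes the same thing per vCPU.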

  8. #18
    Join Date
    Feb 2011
    Posts
    1

    Cool How about Xen?

    Dear Phoronix,

    A few years ago, there was a paper comparing the performance of Xen, KVM, VirtualBox, Linux-VServer, and OpenVZ. In that paper, KVM did not scale well with multiple cores either. I am not sure if it is the same Linux issue.

    Does Phoronix plan to repeat the test with Xen 4.0.1 to see if it has the same problem? Xen has always claimed scalability and stability up to 128 cores. If this is still a difference between KVM and Xen, KVM will have a hard time replacing Xen in major cloud servers.

  9. #19
    Join Date
    Dec 2010
    Posts
    15

    Default

    Quote Originally Posted by soldcake View Post
    A few years ago, there was a paper comparing the performance of Xen, KVM, VirtualBox, Linux-VServer, and OpenVZ. In that paper, KVM did not scale well with multiple cores either. I am not sure if it is the same Linux issue.

    Does Phoronix plan to repeat the test with Xen 4.0.1 to see if it has the same problem? Xen has always claimed scalability and stability up to 128 cores. If this is still a difference between KVM and Xen, KVM will have a hard time replacing Xen in major cloud servers.
    Phoronix didn't leave any resources for the host operating system in their test, so the results are invalid; you can't use them for anything or draw any conclusions from them.
