The software subsystems under test are different; the VM-based testing actually exercises a superset of the native stack.
For the native testing, the IO path is much simpler: in the case of ext4, write barriers are used to ensure filesystem consistency. This penalizes benchmarks that use fsync() and the like, since each fsync flushes data all the way to the disk.
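To make the cost concrete, here is a minimal sketch (in Python, with a hypothetical `durable_write` helper) of the write-then-fsync pattern that databases like SQLite use on commit. It is the fsync() call that triggers the barrier/flush behaviour being discussed:

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and fsync so it survives a crash or power loss.

    On ext4 mounted with barriers enabled, the fsync() below also
    issues a write barrier / cache flush, so the call does not return
    until the data is on stable storage. This is the operation that
    gets dramatically slower (or faster, if barriers are dropped)
    depending on how each layer of the stack handles flushes.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # blocks until the data reaches the disk
    finally:
        os.close(fd)

# Example usage: one "commit" to a temp file.
path = os.path.join(tempfile.gettempdir(), "fsync_demo.txt")
durable_write(path, b"commit record\n")
```

In a VM, that fsync() turns into a guest flush request that each lower layer may honour, batch, or drop.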
When you move to a VM-based environment, a write from the guest goes through the guest filesystem, the guest block device drivers, the host QEMU block device emulation, then into the host filesystem, the host disk drivers, and finally the disk. Barriers are a filesystem construct that forces some level of ordering and consistency in the data written to disk.
Adding that complexity results in sometimes vastly different behaviours, particularly when it comes to barriers. With that many layers of software, there are many choices for how to handle them: barriers can be ignored outright, honoured simplistically by passing each one straight down, or batched so that a collection of barriers from the upper layers is flushed together. For example, six or so months ago I had to dig through KVM/QEMU/SQLite/Ubuntu for a similar issue, where barriers were simply being ignored.
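As an illustration of those choices, QEMU's `-drive` cache modes select how the host side treats guest flush (barrier) requests. These are real QEMU options, but the exact set and semantics vary by QEMU version, so treat this as a sketch rather than a recipe:

```shell
# Host-side handling of guest flushes, per QEMU -drive cache mode:
qemu-system-x86_64 -drive file=guest.img,cache=writethrough  # writes go through to disk; safest, slowest
qemu-system-x86_64 -drive file=guest.img,cache=writeback     # host page cache used; guest flushes honoured
qemu-system-x86_64 -drive file=guest.img,cache=none          # O_DIRECT, bypasses the host page cache
qemu-system-x86_64 -drive file=guest.img,cache=unsafe        # guest flushes ignored entirely
```

A benchmark run with `cache=unsafe` (or any layer that drops barriers) can look spectacularly faster than native while providing none of the durability the guest thinks it has.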
Michael's statement about the virtualization-based tests is that they showed a performance change similar to Apache's. He didn't compare the KVM guest behaviour against the native behaviour differences.
I noticed one difference: the later VM-based testing used the 10.04.1 release build (August 2010), while the previous testing used 10.04 (April 2010).
Is it worth having a comparison test between these two in the near future?