Phoronix benchmarks between native MS Windows and Xen VGA-passthrough MS Windows, please.
I would like to read a test with:
one Intel processor, one AMD processor, one Nvidia card, one ATI card, one Intel card.
That makes 6 configurations (2 CPUs x 3 GPUs).
Run each with an SSD and with an HDD, and test MS Windows:
under Xen / Xen + VGA passthrough / KVM, each as a fresh install with antivirus and the full Ninite bundle and again without antivirus; natively with antivirus and the full Ninite bundle; and natively without antivirus or the bundle.
If you virtualize for gaming, you can keep a freshly installed MS Windows with no programs other than games and no antivirus; whereas if you keep two native installations, viruses can still affect you.
I think hardcore gamers would get better performance playing under Xen without antivirus.
Most people have an OEM license, so they're S.O.L. on transferring it to a VM unless they are l337 enough to do a physical-to-virtual conversion and then fake all of the necessary UUIDs and BIOS strings so that the license doesn't invalidate itself (i.e., not very many people).
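For the curious, with KVM/QEMU the faking described above can be done with QEMU's `-smbios` option, which overrides the UUID and BIOS strings the guest sees. A rough sketch, assuming a KVM host and an already-converted disk image; every identifier below is a hypothetical placeholder you would replace with values read from the original machine:

```shell
# Read the original machine's identifiers (run on the physical box):
sudo dmidecode -s system-uuid
sudo dmidecode -s system-manufacturer
sudo dmidecode -s system-product-name
sudo dmidecode -s system-serial-number

# Then boot the P2V guest with matching SMBIOS strings
# (all values and the image name here are placeholders):
qemu-system-x86_64 \
  -enable-kvm -m 4096 \
  -smbios "type=1,manufacturer=Dell Inc.,product=OptiPlex 990,serial=ABC1234,uuid=4C4C4544-0042-4310-8031-B2C04F313233" \
  -drive file=windows-p2v.img,format=raw
```

Whether Windows activation actually stays valid depends on which strings the OEM license is keyed to, so treat this as a starting point, not a guarantee.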
Originally Posted by neuron
I have made a post at Intel's forums about virtualization and where it will be in the future. It can be found here:
Note that I made this post before I discovered the "Ubisoft" article at the Phoronix website.
Anyone try it with Windows 8? Might as well: it has better multicore performance and happens to be free for the time being. Here are the ISOs:
I read your post as "playing game X which they do every 2 minutes" and couldn't stop laughing...
Originally Posted by neuron
I don't see VGA passthrough being useful anytime soon.
Part of the problem is that it's only available for HVM guests. Not a huge deal, but PV guests have better performance, and HVM requires a CPU with virtualization extensions.
The BIG problem is that one card can only be assigned to one DomU. This means that if you want multiple DomUs with VGA passthrough, you need that many GPUs. We're not going to have "cloud" clusters of dedicated game clients on Xen anytime soon.
Another problem is that you're not guaranteed the GPU you bought will work with VGA passthrough. Some embedded BIOS settings on the card may trip up Xen and other things.
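To make the one-card-per-DomU point concrete, here is roughly what assigning a GPU to a Xen HVM guest looks like in an xl/xm guest config. This is a sketch under assumptions, not a tested recipe: the PCI address, disk path, and guest name are hypothetical, and the card must first be hidden from Dom0 and bound to pciback on the host:

```
# /etc/xen/win7-gaming.cfg -- hypothetical example
builder = "hvm"          # VGA passthrough is only available for HVM guests
name    = "win7-gaming"
memory  = 4096
vcpus   = 4
disk    = [ "phy:/dev/vg0/win7,hda,w" ]

# Assign the whole GPU at PCI address 01:00.0 to this DomU.
# Each DomU with VGA passthrough needs its own physical card.
pci          = [ "01:00.0" ]
gfx_passthru = 1
```

A second DomU wanting passthrough would need its own `pci = [...]` entry pointing at a second, physically separate GPU, which is exactly why this doesn't scale to "cloud" game clients.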
VGA and PCI passthrough, GPU sharing and other thoughts on VM
@Lemrouch: Interesting setup! Thanks for sharing.
I myself am fed up with dual-booting and am looking for a solution that allows me to run Linux as my standard OS, and Windows 7 (or 8) in a VM for photo editing. I would happily migrate to Linux only, but there is no professional photo-editing solution available on Linux, so I'm kind of stuck with Windows (I could move to a Mac, but that would be really expensive).
While I appreciate the challenges of getting VGA passthrough working, I again and again read comments that question this approach. For example, Ian Molton proposed a KVM/Qemu patch that would implement a virtio GPU transport solution back in October 2010 (see http://lists.gnu.org/archive/html/qe.../msg01263.html). I understand that his proposal/patch was turned down; instead, the KVM team is focusing on SPICE. SPICE itself looks like a great idea, but will it really cut the mustard for single-station applications where a user wants the best graphics performance from within a VM? I just don't know. Perhaps SPICE isn't even meant to provide that solution; I'm simply not familiar with it.
In my particular case, I must have direct access to the graphics adapter from within the VM, in order to calibrate my screen with a colorimeter and to upload the profile into the screen via the DVI port of the graphics adapter. I doubt SPICE will be able to accommodate that requirement.
I have seen quite a few posts in various forums where people are looking for a way to play Windows-based games on Linux PCs and require video hardware acceleration, just like the OP demonstrated.
The way I understand it, VGA passthrough today requires a dedicated graphics adapter for each VM and one for the host (unless it's a headless host). Why can't we have ONE graphics adapter that is shared by both the host and the VM, each with direct access to it when needed? I believe many users would be perfectly happy to work in full screen mode in either the VM or the host OS, where one and only one has exclusive access to the graphics card.
Another, perhaps better and certainly more universal approach would be GPU sharing or pooling, that is, some kind of GPU virtualization. One or several GPUs could be shared by multiple guest VMs or users. It's been done by companies such as OnLive to deliver online gaming services where all the GPU rendering is done in the data centers. They had to build their own hardware to get this running, but it seems to work extremely well. Aside from the hardware, it's the software that makes it happen. Is anyone familiar with a Linux/open-source solution for GPU pooling?
I'm still undecided about which solution to pick. At first I thought about VirtualBox, which I'm using now for some simple tasks, but it doesn't support VGA passthrough yet, and Phoronix recently published benchmarks that didn't favor it. So that leaves KVM and Xen, or the commercial VMware and Parallels Workstation solutions. I'm willing to pay some money for a good solution, since I'm going to spend a small fortune on the hardware anyway. Suggestions and help are welcome.
@Lemrouch: Any reason you chose AMD/ATI graphics cards? I've had good experience with Nvidia cards and like their driver support (I know some hate Nvidia for not providing open-source drivers or supporting the community, but at least they do a good job of updating their drivers and provide long-term support for their cards).
Because VGA passthrough means giving the guest OS direct access to the hardware. You need no custom drivers or hacks to get it going, and you can use the standard graphics features and drivers for the hardware in question.
Originally Posted by powerhouse
In order to do what you're proposing, the "host" OS would have to go through a hardware shutdown of the primary graphics adapter (which I don't think is possible) in order to switch it to the other OS. And we're not just talking about full-screen here: you wouldn't be able to switch between instances without going through said hardware shutdown. I don't think current-day BIOSes are too happy about shutting down the only graphics adapter, though.
Thanks neuron. I read about FLR (function-level reset) and other ways to reset the graphics adapter. Surely there must be a way - perhaps not with every card or BIOS - to reset the GA?
The reason I mentioned full-screen mode is that windowed VMs would make this much more difficult, if not impossible. However, when switching from one OS (e.g. the VM) to the other (e.g. the host) in full-screen mode, one could reset the GA to work with the other OS.
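As a sketch of what such a reset looks like on Linux today: the kernel exposes a per-device reset hook in sysfs that uses FLR when the card advertises it (with fallbacks otherwise). The PCI address below is a hypothetical placeholder, and many consumer GPUs simply don't support FLR, which is exactly the per-card lottery described above:

```shell
# Check whether the GPU advertises Function Level Reset
# (01:00.0 is a placeholder address; look for "FLReset+" in DevCap):
sudo lspci -vv -s 01:00.0 | grep -i flreset

# If the kernel knows how to reset the device, this sysfs node exists;
# writing 1 asks the kernel to reset it:
echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/reset
```

Whether the card comes back in a usable state afterwards is another matter entirely, which is why this is a per-model gamble rather than a general solution.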
There is an interesting study, "ELI: Bare-Metal Performance for I/O Virtualization" (PDF here: http://www.mulix.org/pubs/eli/eli.pdf), that may not be directly related to this subject. However, this study by researchers at IBM Haifa and the Technion (Haifa) shows that one can even forward interrupts to the guest OS without breaking functionality or security. More interesting and relevant here, the study stipulates that the user is primarily interested in VM performance, and that the host OS is nothing but an underlying layer that, today, slows down our VM experience. The researchers successfully demonstrated that letting the VM handle I/O tasks and interrupts - using their proposed ELI mechanism - brings I/O performance on par with bare-metal performance.
So my question is: why can't we have the same kind of direct control of the GA from within the VM OS?
Let's even assume that only a few GA vendors and GA models support the necessary GA reset functions. If someone tells me that graphics adapter X will do the job, I and probably some others would choose to get that brand & model, making it worthwhile for vendors to pay attention to this detail. The same goes for the motherboard and other components, if their support is required.
In the current situation, I would have to have two GAs to get VGA passthrough. That neither fits my needs (I don't see why I'd want to drive two screens at a time - I don't usually work in two environments simultaneously) nor is it economical, not to mention the waste of hardware resources and energy.
My dream is that GPU resources can be pooled and made available at nearly no performance cost to whichever VM or host needs them. One application I would very much want to implement is a server plus several thin clients. That way my kids could mess around with their virtual PCs and I wouldn't have to worry much about things breaking, as I could always return to a saved snapshot. Not to mention the money-saving potential. But for this to be feasible, VM performance - in particular graphics performance - must improve. My kids wouldn't want to play a game at 10 fps - neither would I (not that I do that very often). The SPICE project might be heading that way, though I have no idea how well it supports hardware acceleration. Any insights on that?
Thanks again for your response, neuron!
I know a lot of people with reasons to switch to Linux: they are sick of malware threats, of Automatic Updates automatically restarting their computer for them, and of random things breaking over time.