
Thread: A Virtual Gallium3D Driver Coming For VMware


  1. #1
    Join Date
    Jan 2007
    Posts
    15,677

    Default A Virtual Gallium3D Driver Coming For VMware

    Phoronix: A Virtual Gallium3D Driver Coming For VMware

    For months Sun's VirtualBox virtualization software has offered OpenGL and Direct3D acceleration support for virtualized guest operating systems, and now 2D/3D hardware-acceleration support for operating systems running under VMware's virtualization products is imminent. It was almost exactly one year ago that VMware acquired Tungsten Graphics, and the motives behind that acquisition are now becoming clearer. A Gallium3D Workshop is being hosted today at VMware's headquarters in Palo Alto, California, with various open-source Mesa developers attending in person and others connecting remotely. At this workshop it has just been announced that a "virtual" GPU driver for Tungsten's Gallium3D driver architecture will soon be publicly released...

    http://www.phoronix.com/vr.php?view=NzcwNQ

  2. #2
    Join Date
    Sep 2008
    Posts
    16

    Default

    That was unexpected. Or it was, but it wasn't. As far as graphics acceleration in a virtual machine goes, wouldn't this be capable of getting about as close to bare-metal GPU acceleration as possible? If both the host and guest are using Gallium3D drivers, could they use virtualization technologies to connect their drivers, share the hardware, and provide direct acceleration to the guest OS?

  3. #3
    Join Date
    Sep 2007
    Location
    Connecticut,USA
    Posts
    985

    Default

    At some point VM 3D acceleration could be mature enough for people to play many graphics-intensive games on Windows running in a VM. Imagine loading up Windows XP/W7 in a VM and playing a game such as CoD or Crysis at a darn good playable framerate, as if you had booted Windows on bare metal.

  4. #4
    Join Date
    Sep 2008
    Location
    Netherlands
    Posts
    510

    Default

    Quote Originally Posted by DeepDayze View Post
    At some point VM 3D acceleration could be mature enough for people to play many graphics-intensive games on Windows running in a VM. Imagine loading up Windows XP/W7 in a VM and playing a game such as CoD or Crysis at a darn good playable framerate, as if you had booted Windows on bare metal.
    If everything becomes virtualized, that will be the end of anti-cheat software. How can you detect a wallhack that runs outside the VM?

  5. #5
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,801

    Default

    Quote Originally Posted by Remco View Post
    If everything becomes virtualized, that will be the end of anti-cheat software. How can you detect a wallhack that runs outside the VM?
    How can the wallhack inject its code from outside the VM into a process running inside the VM?

  6. #6
    Join Date
    Sep 2008
    Location
    Netherlands
    Posts
    510

    Default

    Quote Originally Posted by RealNC View Post
    How can the wallhack inject its code from outside the VM into a process running inside the VM?
    Well, you can just glean the necessary information from the VM and inject it into the scene on the host using that nifty Gallium3D interface.

  7. #7
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by DeepDayze View Post
    At some point VM 3D acceleration could be mature enough for people to play many graphics-intensive games on Windows running in a VM. Imagine loading up Windows XP/W7 in a VM and playing a game such as CoD or Crysis at a darn good playable framerate, as if you had booted Windows on bare metal.

    Yeah, something like that.

    The best you'd be able to do is poke a hole in the VM through a 'paravirtualization' technique and say "this memory is directly accessible by both the drivers in the host and in the guest". Then the state trackers in the guest issue commands that get read by the rest of the Gallium drivers on the host.

    So you're adding in a few context changes and that sort of thing. You'd probably get 60-70% of native performance for most operations.

    The ultimate future is just being able to hand off complete control of the video card to the guest OS. That is, you let the native drivers access the hardware in the native way, but controlled from within a guest.

    We already have that for most PCI devices on very new hardware. KVM and friends have the ability to hand off control of hardware to guests, but video cards (of course) are too complicated to be handled in such a ham-fisted manner.
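
    Below is a minimal C sketch of what such a shared command ring could look like: the guest-side state tracker writes packed commands into memory the hypervisor has mapped into both sides, and the host-side driver drains them. Every name and the layout here are invented for illustration; the actual VMware/Gallium3D interface has not been published and will certainly differ.

    Code:
    /* Hypothetical guest-to-host command ring living in shared memory.
     * The guest produces packed commands; the host consumes them and hands
     * them to the real Gallium driver.  Names and layout are invented. */
    #include <stdint.h>

    #define RING_SIZE 4096              /* bytes of command space in the shared page */

    struct cmd_ring {
        volatile uint32_t head;         /* written by the guest (producer) */
        volatile uint32_t tail;         /* written by the host (consumer)  */
        uint8_t data[RING_SIZE];        /* packed command stream */
    };

    /* Guest side: append one command; returns 0 on success, -1 if the ring is full. */
    static int guest_submit(struct cmd_ring *ring, const void *cmd, uint32_t len)
    {
        uint32_t head = ring->head;
        uint32_t tail = ring->tail;
        uint32_t used = head - tail;            /* indices grow monotonically */

        if (used + len > RING_SIZE)
            return -1;                          /* back-pressure: host hasn't caught up */

        for (uint32_t i = 0; i < len; i++)
            ring->data[(head + i) % RING_SIZE] = ((const uint8_t *)cmd)[i];

        ring->head = head + len;                /* publish; a real driver would add a
                                                   memory barrier and "kick" the host here */
        return 0;
    }

    /* Host side: copy out everything the guest has published and hand it to
     * the host-side driver, represented here by a callback. */
    static void host_drain(struct cmd_ring *ring,
                           void (*execute)(const uint8_t *buf, uint32_t len))
    {
        uint32_t tail = ring->tail;
        uint32_t head = ring->head;
        uint8_t buf[RING_SIZE];
        uint32_t len = head - tail;

        for (uint32_t i = 0; i < len; i++)
            buf[i] = ring->data[(tail + i) % RING_SIZE];

        if (len)
            execute(buf, len);
        ring->tail = head;                      /* free the space for the guest */
    }

    The extra copy and the notification out of the guest are the kind of context changes mentioned above, which is why this path won't quite reach native performance.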

  8. #8
    Join Date
    Sep 2007
    Location
    Connecticut,USA
    Posts
    985

    Default

    Quote Originally Posted by drag View Post
    Yeah, something like that.

    The best you'd be able to do is poke a hole in the VM through a 'paravirtualization' technique and say "this memory is directly accessible by both the drivers in the host and in the guest". Then the state trackers in the guest issue commands that get read by the rest of the Gallium drivers on the host.

    So you're adding in a few context changes and that sort of thing. You'd probably get 60-70% of native performance for most operations.

    The ultimate future is just being able to hand off complete control of the video card to the guest OS. That is, you let the native drivers access the hardware in the native way, but controlled from within a guest.

    We already have that for most PCI devices on very new hardware. KVM and friends have the ability to hand off control of hardware to guests, but video cards (of course) are too complicated to be handled in such a ham-fisted manner.
    That could be what Sun is doing with VirtualBox: adding paravirtualization support so that hardware can be controlled by drivers in the guest without negatively impacting either the host or the guest.

    It should be quite interesting to see how all of this evolves. Someday I'd be able to dump the physical Windows install and just keep "Bill in a box" for those times I want to run a Windows program without all the hassles and bugs of Wine. This would definitely be good for those games and graphics apps whose makers can't or won't port them to Linux.

  9. #9
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by DeepDayze View Post
    That could be what Sun is doing with VirtualBox: adding paravirtualization support so that hardware can be controlled by drivers in the guest without negatively impacting either the host or the guest.

    It should be quite interesting to see how all of this evolves. Someday I'd be able to dump the physical Windows install and just keep "Bill in a box" for those times I want to run a Windows program without all the hassles and bugs of Wine. This would definitely be good for those games and graphics apps whose makers can't or won't port them to Linux.
    KVM already supports this, as does Xen, I believe. Probably VMware also. I suppose VirtualBox and any other virtualization software could support it too.

    It's very important for server applications, since emulating network cards and using virtual Ethernet is a big bottleneck. If you have a 10Gb/s Ethernet connection there is no chance in hell that even the best virtualization support can keep up with it using emulation, or even paravirtualized drivers.


    http://www.mjmwired.net/kernel/Docum...ntel-IOMMU.txt

    And the kernel has AMD IOMMU support as well, of course.

    The trouble with all of this stuff is finding the right motherboard and all that.
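
    One low-tech way to check whether a given board and kernel actually expose an IOMMU is to look for the ACPI tables that advertise one (DMAR for Intel VT-d, IVRS for the AMD IOMMU). The sysfs path in the sketch below is an assumption about a reasonably recent kernel; the kernel log and the Intel-IOMMU.txt document linked above remain the authoritative references.

    Code:
    /* Quick check for IOMMU support: look for the ACPI tables that advertise it.
     * DMAR = Intel VT-d, IVRS = AMD IOMMU.  Assumes a kernel that exposes ACPI
     * tables under /sys/firmware/acpi/tables. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *tables[][2] = {
            { "/sys/firmware/acpi/tables/DMAR", "Intel VT-d (DMAR table present)" },
            { "/sys/firmware/acpi/tables/IVRS", "AMD IOMMU (IVRS table present)" },
        };
        int found = 0;

        for (unsigned i = 0; i < sizeof(tables) / sizeof(tables[0]); i++) {
            if (access(tables[i][0], F_OK) == 0) {
                printf("%s\n", tables[i][1]);
                found = 1;
            }
        }

        if (!found)
            printf("No DMAR/IVRS table found: the chipset/firmware probably lacks an IOMMU,\n"
                   "or it is disabled in the BIOS.\n");
        return 0;
    }

    Even with the table present, the Intel code typically still has to be enabled at boot with the intel_iommu=on kernel parameter, as described in that document.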

  10. #10
    Join Date
    Dec 2008
    Location
    Halifax, NS, Canada
    Posts
    63

    Default

    Quote Originally Posted by drag View Post
    The ultimate future is just being able to hand off complete control of the video card to the guest OS. That is, you let the native drivers access the hardware in the native way, but controlled from within a guest.

    We already have that for most PCI devices on very new hardware. KVM and friends have the ability to hand off control of hardware to guests, but video cards (of course) are too complicated to be handled in such a ham-fisted manner.
    What makes them "too complicated" is DMA. Video cards directly access host memory, e.g. to fetch textures, using physical memory addresses programmed by the driver. The problem is that a graphics driver running in a guest will put VM-guest "physical" addresses on the card, not real hardware physical addresses.

    Either you code a VM-aware version of the driver to work around this and run an insecure VM that trusts the guest not to use the video card to read memory outside the VM, or you need a hardware IOMMU. Some current hardware has IOMMUs, but I can't remember which. Outside of server gear, probably only AMD and maybe Nehalem, since it's a lot easier to implement when the PCIe <-> RAM path goes through the CPU anyway.
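
    To make the addressing problem concrete, here is a toy C sketch of the difference between device DMA with and without an IOMMU. It is purely illustrative: a tiny page table standing in for what a real hypervisor would program into the IOMMU hardware, with invented names throughout.

    Code:
    /* Toy illustration of the DMA problem: the guest driver hands the card a
     * guest-physical address (GPA).  Without an IOMMU the device would use that
     * number as a real host-physical address (HPA) and hit the wrong memory.
     * With an IOMMU, the hypervisor installs GPA -> HPA mappings per device, so
     * the device's DMA is remapped (and confined) to the guest's own pages. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define NUM_PAGES  16                      /* tiny toy "guest" of 16 pages */

    /* One translation entry per guest page: GPA page -> HPA page. */
    static uint64_t iommu_map[NUM_PAGES];

    /* Hypervisor side: record where each guest page really lives in host RAM. */
    static void iommu_map_page(uint64_t gpa_page, uint64_t hpa_page)
    {
        iommu_map[gpa_page] = hpa_page;
    }

    /* What happens when the card issues a DMA to address 'gpa'. */
    static uint64_t dma_translate(uint64_t gpa, int iommu_present)
    {
        if (!iommu_present)
            return gpa;                        /* raw GPA hits host RAM directly:
                                                  wrong data, and a security hole */
        uint64_t page   = gpa >> PAGE_SHIFT;
        uint64_t offset = gpa & ((1u << PAGE_SHIFT) - 1);
        return (iommu_map[page] << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        /* Say guest page 3 actually lives at host page 0x45678. */
        iommu_map_page(3, 0x45678);

        uint64_t texture_gpa = (3ull << PAGE_SHIFT) + 0x100;  /* address the guest driver programs */

        printf("without IOMMU, device reads host phys 0x%llx (bogus)\n",
               (unsigned long long)dma_translate(texture_gpa, 0));
        printf("with IOMMU,    device reads host phys 0x%llx (the guest's real page)\n",
               (unsigned long long)dma_translate(texture_gpa, 1));
        return 0;
    }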

    I've been wanting to do this for years, to run a multi-seat box (multiple users on one machine, each with their own keyboard+mouse+VGA). I was disappointed a couple of years ago, when I got a Core 2, to find out that even with hardware virtualization support, giving the guest direct VGA access wasn't possible.

    What current and future x86-64 hardware has an IOMMU suitable for guest video drivers? Anyone more up to date on this than I am?
