VMware Releases Its New Gallium3D Driver


  • #11
    Originally posted by RealNC View Post
    I just love freetards.
    Agreed. With all of the work that VMware/Tungsten have put into Gallium3D to this point, I think it's fine if they want to donate a Mesa Gallium3D driver that they can use in VMware. There's nothing (besides possible/probable unsuitability for the purpose) preventing someone else from using the same code in another driver.

    VMware definitely benefits, but so does the Mesa project as a whole.



  • #12
    More importantly, VMware is committed to doing this process correctly. There have been more than a few bad-faith efforts in the community; VIA is my favorite example, although some of the older devs have horror stories about Trident, S3, and others. Seeing code that isn't just an undocumented blob without a proper DRM component, and that is written according to kernel and Mesa rules, is refreshingly awesome.



  • #13
    Originally posted by MostAwesomeDude View Post
    More importantly, VMware is committed to doing this process correctly. There have been more than a few bad-faith efforts in the community; VIA is my favorite example, although some of the older devs have horror stories about Trident, S3, and others. Seeing code that isn't just an undocumented blob without a proper DRM component, and that is written according to kernel and Mesa rules, is refreshingly awesome.
    Do the VMware developers communicate with the rest of the developers, or are they doing everything in-house and just pushing their patches regardless of what others are working on?

    What I mean is: is there any collaboration between VMware and the community?



  • #14
    Since this is just a virtual Gallium3D driver, I assume it requires a real Gallium3D driver for your hardware to be running outside the virtual environment, no? (i.e., to pass the TGSI out to a real driver that converts it to commands for your own hardware)

    So what benefits will we actually see when this is released? (Are there any Gallium3D drivers in or close to production?)



  • #15
    Louise, I think SVGA just refers to the emulated "Super VGA" (i.e., an enhanced VGA chip) that VMware has been providing for some years to provide graphics on guest OSes. The emulated SVGA chip received hardware acceleration extensions recently, but driver support was relatively weak until now.

    There's info available at: http://sourceforge.net/projects/vmware-svga/

    Craig73 - AFAIK the emulated SVGA device uses OpenGL calls on the host OS to execute commands sent from the guest driver stack. It probably does make sense to replace this with Gallium3D calls on the host OS in the future, but that would be an optimization and isn't necessary to make things run today. That's what the docs say, anyway.



  • #16
    Originally posted by bridgman View Post
    Louise, I think SVGA just refers to the emulated "Super VGA" (i.e., an enhanced VGA chip) that VMware has been providing for some years to provide graphics on guest OSes. The emulated SVGA chip received hardware acceleration extensions recently, but driver support was relatively weak until now.

    There's info available at: http://sourceforge.net/projects/vmware-svga/
    Thanks for clearing that up.



  • #17
    Originally posted by bridgman View Post
    Craig73 - AFAIK the emulated SVGA device uses OpenGL calls on the host OS to execute commands sent from the guest driver stack. It probably does make sense to replace this with Gallium3D calls on the host OS in the future, but that would be an optimization and isn't necessary to make things run today. That's what the docs say, anyway.
    Hmmm... that would make sense for what is in place today... perhaps I need to follow the links and read a bit more (lots of layers here :-) )

    I imagine (without further reading) that someone running an OpenGL app would go through the OpenGL state tracker, which would translate calls into Gallium3D TGSI, perhaps optimize it, and then translate it into commands for the 'virtual SVGA' hardware... which is a generic OpenGL command set to pass to the host OS? (That would then be optimized in Mesa and translated into hardware instructions by the Xorg/Mesa driver.)

    And all other state trackers would translate into OpenGL calls.

    I guess it's progress regardless... because some level of acceleration is available, which usually makes up for the extra layers :-)



  • #18
    Given that QEMU already has some (older) support for the VMware SVGA device, there's nothing (in theory) preventing it from supporting this driver... right?



  • #19
    Hi, I'm one of the VMware 3D developers. I didn't work directly on the Gallium driver, but I do work on the 3D backends and made a substantial contribution to the WDDM driver that we shipped with Workstation 7.0.

    Yeah, SVGA just means "Super VGA"; it was an extension of the VGA device that we initially shipped, and the people responsible for naming things at that point were less creative with the names.

    Originally posted by Craig73 View Post
    I imagine (without further reading) that someone running an OpenGL app would go through the OpenGL state tracker, which would translate calls into Gallium3D TGSI, perhaps optimize it, and then translate it into commands for the 'virtual SVGA' hardware... which is a generic OpenGL command set to pass to the host OS? (That would then be optimized in Mesa and translated into hardware instructions by the Xorg/Mesa driver.)
    Close. TGSI is used in the driver only as a temporary shader language; it's intended to be an intermediate representation that gets translated by the Gallium hardware-specific code into whatever the hardware needs. The state tracker manages a lot of other state as well, but most of the traditionally fixed-function aspects of OpenGL (fogging, lighting, most of the transform and projection state) are converted to shaders and shader arguments. There are a lot of things that aren't directly shader state, like texture bindings and blend modes, which are still handled separately from the shader code.
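
    To make that concrete, here's a rough sketch of how a state tracker hands a TGSI shader to a Gallium driver. This is illustrative only, not our actual code; the exact structures and the TGSI text syntax have shifted between Mesa versions:

      /* Illustrative sketch: a state tracker handing a fragment shader,
       * expressed as TGSI tokens, to a Gallium pipe driver. The TGSI
       * text below just copies the input color to the output. */
      #include <string.h>
      #include "pipe/p_context.h"
      #include "pipe/p_state.h"
      #include "tgsi/tgsi_text.h"

      static void *make_passthrough_fs(struct pipe_context *pipe)
      {
         static const char text[] =
            "FRAG\n"
            "DCL IN[0], COLOR, LINEAR\n"
            "DCL OUT[0], COLOR\n"
            "MOV OUT[0], IN[0]\n"
            "END\n";
         struct tgsi_token tokens[128];
         struct pipe_shader_state state;

         /* Parse the TGSI assembly into binary tokens. */
         if (!tgsi_text_translate(text, tokens, 128))
            return NULL;

         memset(&state, 0, sizeof state);
         state.tokens = tokens;
         /* The driver (the SVGA driver, in our case) compiles the
          * tokens into whatever the underlying device understands. */
         return pipe->create_fs_state(pipe, &state);
      }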

    The VMware Gallium driver translates the TGSI code into D3D9 shader byte code. That's mostly an artifact of our device, and I wouldn't be surprised if we changed that in the future to be closer to TGSI. The command set as a whole in the SVGA hardware is much closer to D3D than to OpenGL; the hardware attempts to implement Direct3D semantics for most things. That's not to say that we view OpenGL as a second-class citizen, but this is all much easier if you pick one semantic to implement, and our first 3D drivers were Direct3D, so that won. Everything is communicated over a publicly documented binary protocol to the host.
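
    As a toy illustration of that translation step (the opcode values below are made-up stand-ins, not our actual token definitions; the real ones are in the public SVGA3D headers):

      /* Toy sketch of mapping TGSI opcodes onto D3D9-style shader
       * byte-code opcodes, roughly the kind of table a Gallium driver
       * uses when compiling shaders for the device. */
      #include "pipe/p_shader_tokens.h"

      enum {
         SVGA3DOP_MOV = 1,   /* hypothetical token values */
         SVGA3DOP_ADD = 2,
         SVGA3DOP_MUL = 3,
         SVGA3DOP_MAD = 4,
      };

      static int translate_opcode(unsigned tgsi_opcode)
      {
         switch (tgsi_opcode) {
         case TGSI_OPCODE_MOV: return SVGA3DOP_MOV;
         case TGSI_OPCODE_ADD: return SVGA3DOP_ADD;
         case TGSI_OPCODE_MUL: return SVGA3DOP_MUL;
         case TGSI_OPCODE_MAD: return SVGA3DOP_MAD;
         default:              return -1;  /* unsupported: emulate or fail */
         }
      }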

    The process is:
    - The guest generates a series of commands (most of them are on the order of "draw primitives", "set a bunch of texture states", or "create a texture")
    - The guest stuffs these commands into a FIFO ring buffer
    - The host interprets the commands and uses a host graphics API to accelerate them. On Workstation for Linux and Fusion for Mac we use OpenGL in the backend; on Windows we use Direct3D, or OpenGL if D3D isn't available for some reason
    - The host then reports completion of the commands to the guest

    This process allows everything to happen asynchronously. The guest can be generating commands while the host is processing commands it already has. We use fences to report completion of operations to the guest so that it knows when it may proceed. This protects access to memory that is shared between the guest applications and the virtual GPU. We've actually found a few applications that run faster in the guest than directly on the host machine, since we get to take advantage of an extra core or two in the process.
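
    A stripped-down sketch of the guest side of that loop, with names, the ring layout, and the fence opcode value simplified for illustration (the real protocol is in the publicly documented headers):

      /* Simplified guest-side command submission with a fence. */
      #include <stdint.h>

      #define CMD_FENCE 30u   /* fence opcode; value illustrative */

      struct svga_fifo {
         volatile uint32_t *ring;  /* command ring shared with the host */
         uint32_t min, max;        /* byte bounds of the command region */
         uint32_t next_cmd;        /* guest write offset, in bytes */
      };

      /* Copy one command (len 32-bit words) into the ring, wrapping. */
      static void fifo_write(struct svga_fifo *f,
                             const uint32_t *cmd, uint32_t len)
      {
         for (uint32_t i = 0; i < len; i++) {
            f->ring[f->next_cmd / 4] = cmd[i];
            f->next_cmd += 4;
            if (f->next_cmd == f->max)
               f->next_cmd = f->min;   /* wrap around */
         }
      }

      /* Insert a fence and wait until the host has retired it. A real
       * driver sleeps or keeps generating commands instead of spinning. */
      static void sync_to_fence(struct svga_fifo *f, uint32_t fence_id,
                                volatile const uint32_t *completed)
      {
         uint32_t cmd[2] = { CMD_FENCE, fence_id };
         fifo_write(f, cmd, 2);
         while (*completed < fence_id)
            ;   /* the host advances *completed as it finishes work */
      }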

    As for using the driver in other virtual environments... I don't know. It's certainly possible, but there's a fairly substantial body of code that makes up the 3D backends that would need to be implemented by the other virtualization vendors. Since the VMware SVGA device is relatively simple (fewer than 100 registers; most things are FIFO commands), it would be much easier than virtualizing an R300, for example.
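
    For a sense of scale on the register side: access goes through an index/value port pair, roughly like this (port offsets are stand-ins for the values in the public headers):

      /* Illustrative sketch of the SVGA index/value register protocol:
       * write a register index to one I/O port, then read or write the
       * value on the other. Constants are stand-ins for the real ones. */
      #include <stdint.h>
      #include <sys/io.h>   /* outl()/inl(); Linux, needs iopl() */

      #define SVGA_INDEX_PORT 0x0   /* offset from the PCI I/O base */
      #define SVGA_VALUE_PORT 0x1

      static uint16_t io_base;      /* found by probing the PCI BARs */

      static void svga_write_reg(uint32_t index, uint32_t value)
      {
         outl(index, io_base + SVGA_INDEX_PORT);
         outl(value, io_base + SVGA_VALUE_PORT);
      }

      static uint32_t svga_read_reg(uint32_t index)
      {
         outl(index, io_base + SVGA_INDEX_PORT);
         return inl(io_base + SVGA_VALUE_PORT);
      }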



  • #20
    FWIW... SVGA, meaning "Super VGA", is not a VMware term but an industry term. After VGA (640x480) hardware came Super VGA (800x600), which supposedly was to be replaced by XGA, but really Super VGA stuck... even in Windows 7, my laptop uses the Super VGA driver (a generic high-resolution driver)... this stuff used to be exciting; clearly I'm too old.

