With the OSS drivers it won't make them run hot right now (with my 3870 I get roughly a sixth of the frame rate). When that changes it may cause some problems, but proper GPU cooling should suffice.
AFAIK this problem was caused by third-party board designs whose cooling was very modest, but I'm not sure. Better not to try this with RV770 GPUs.
On the host OS, commands passed down from the virtual SVGA hardware are translated into OpenGL and executed using whatever OpenGL driver is present on the host. They don't actually need a Gallium3D driver for the host GPU - even a proprietary binary driver will work.
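To make that concrete, here is a minimal, purely illustrative C sketch of the host-side translation step. The command struct and function names are invented for this example; the real SVGA3D command stream is more involved, and error handling is omitted.

/* Hypothetical sketch of replaying a guest "draw" command on the
 * host. The struct and names are made up for illustration; the real
 * SVGA3D protocol differs. */
#include <GL/gl.h>

struct guest_draw_cmd {            /* hypothetical command from the guest */
    const float *vertices;         /* xyz triples, already copied to host memory */
    int first;
    int count;
};

/* Replay a guest command against the host's OpenGL driver. The host
 * driver can be anything - Gallium3D, classic mesa, or a proprietary
 * binary blob - because all it ever sees is ordinary OpenGL. */
static void host_execute_draw(const struct guest_draw_cmd *cmd)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, cmd->vertices);
    glDrawArrays(GL_TRIANGLES, cmd->first, cmd->count);
    glDisableClientState(GL_VERTEX_ARRAY);
}

The point is just that the host side ends up as plain OpenGL, so any working host driver can execute it.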
It seems like a good plan - gives VMWare a more maintainable driver suite for the SVGA adapter than they could get through other means, provides lots of goodness to the open source community, and still lets VMWare get "first dibs" on the new code by having the actual hardware driver tied to their emulated SVGA chip.
This seems like some of the best roadmap planning I have seen in a long time -- I was both surprised and impressed by the end of the VMWare session.
It may turn out to be easier to just look at how r600 does something and write "similar but all new" code for r600g - it depends a bit on how each developer likes to work. Either way, having a working r600 driver and a working r300g driver should make work on r600g a lot more satisfying, in the sense that the "visible results per unit of work" should be a lot higher than normal.
One important thing to remember is that maybe 90% of the 3D code (nearly all of the mesa code *above* the HW driver layer) will be the same in both cases. Rather than calling a "classic" HW driver, the upper level code calls a set of routines which translate the "classic mesa" calls into Gallium3D calls then act like a state tracker to the Gallium3D driver - including translating "classic mesa" IL into TGSI. It's really "mesa using the old HW drivers" vs "mesa using the new HW drivers".
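To illustrate the layering, here is a rough C sketch. Every name below is invented for the example - these are not mesa's real internals - but it shows the idea of one function table with two possible things behind it:

/* Invented names for illustration only. The upper mesa layers see a
 * single function table; behind it sits either a classic HW driver
 * or a shim that re-expresses each call in Gallium3D terms. */

struct hw_funcs {
    void (*draw)(int prim, int start, int count);
    void (*upload_shader)(const void *il, int size);
};

/* Classic path: the table points straight at the HW driver. */
static void classic_draw(int prim, int start, int count)
{
    /* ... emit a GPU command stream directly ... */
}
static void classic_upload_shader(const void *il, int size)
{
    /* ... compile "classic mesa" IL straight to HW instructions ... */
}

/* Gallium path: the same slots point at a shim that behaves like a
 * state tracker for the Gallium3D driver underneath. */
static void shim_draw(int prim, int start, int count)
{
    /* translate classic-mesa state into Gallium state objects, then
     * call the pipe driver's draw hook */
}
static void shim_upload_shader(const void *il, int size)
{
    /* translate "classic mesa" IL into TGSI, then hand it to the
     * pipe driver's shader-creation hook */
}

static const struct hw_funcs classic_table = { classic_draw, classic_upload_shader };
static const struct hw_funcs gallium_table = { shim_draw, shim_upload_shader };
/* Everything above this table - roughly 90% of the 3D code - is
 * identical either way. */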
So according to the Gallium status matrix, VMware is very close to reaching their goal, assuming they use the closed-source driver on the host.
It doesn't show OpenCL and D3D, but in the workshop videos they said that OpenCL was almost good to go.
It is almost too exciting that they have a working OpenCL state tracker just sitting there.
Of course no AAA title is going to ship as a bunch of Python scripts, but let's say that a game studio developed their own scripting language for this purpose.
Could OpenCL be used in this respect?
And how much performance do you think you would burn off by going with the scripting approach to utilize the sea of processors we will soon get with 12-core and 16-core CPUs?
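For what it's worth, here is a hedged sketch of what offloading a batch of per-entity "script" updates to OpenCL might look like. The kernel and host code are purely illustrative, not from any real engine, and error checking is omitted for brevity.

/* Illustrative only: update N entity positions in parallel on
 * whatever OpenCL device is available. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void tick(__global float *pos,\n"
    "                   __global const float *vel,\n"
    "                   const float dt)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"
    "    pos[i] += vel[i] * dt;   /* one 'scripted' entity per work-item */\n"
    "}\n";

int main(void)
{
    enum { N = 4096 };
    float pos[N], vel[N], dt = 1.0f / 60.0f;
    for (int i = 0; i < N; i++) { pos[i] = 0.0f; vel[i] = (float)i; }

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "tick", NULL);

    cl_mem dpos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof pos, pos, NULL);
    cl_mem dvel = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 sizeof vel, vel, NULL);

    clSetKernelArg(k, 0, sizeof dpos, &dpos);
    clSetKernelArg(k, 1, sizeof dvel, &dvel);
    clSetKernelArg(k, 2, sizeof dt, &dt);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, sizeof pos, pos, 0, NULL, NULL);

    printf("entity 100 moved to %f\n", pos[100]);
    return 0;
}

The catch for scripting-style workloads is the host-device round trip: something like this only pays off if you can batch thousands of entities per tick.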
Let me remind you that r600-r700 OpenGL 2.0 and GLSL support in the radeon driver has been pushed by an AMD employee, Richard Li. With the release of Ubuntu 10.04 or F13 we will all see r600-r700 users playing ET: Quake Wars on their computers with good performance while r500 performance will remain GARBAGE.
This makes it clear that AMD urges us users to upgrade to newer GPUs by supporting whatever features they see fit. Well, I will definitely upgrade to a new GPU as you guys suggest, but to an NVIDIA ONE.
Keep your post-count fascism to yourself, forum thug. I just noticed your post count is equal to your I.Q.
Do at least a minimum of research before posting flame messages to see if your postulates are true, and I won't accuse you of being a minor.
Btw, didn't I see you on the nVidia forum bitching about how nVidia doesn't give a damn about open-source 3D drivers?