TTM-based OpenChrome In A Working State
Phoronix: TTM-based OpenChrome In A Working State
With VIA Technologies delivering on their promises by finally releasing 2D/3D documentation and driver code, and Tungsten Graphics creating a new VIA 3D stack for a client, there has been a lot to report on in the VIA Linux scene. Tungsten Graphics and VIA are both interested in creating a Gallium3D driver for the Chrome 9 series, Tungsten has already created a feature-rich DRM and Mesa driver, and there is a lot of other work going on too...
What does it mean for a driver to only use TTM? I thought that either GEM or TTM runs partly in the kernel, or is that only needed with KMS?
Lol I'm one of those that are getting kinda confused with all of this
Either way, this is great news. Hope to see some testing and benchmarks soon.
Originally Posted by [Knuckles]
I'm a bit confused on the real differences myself. TTM and GEM are both memory managers for graphics memory, which comprise both an API and an actual implementation (I think). So the driver is coded against the TTM or GEM API... I think. Kind of like the difference between Qt and GTK+ -- they do the same thing, but they're totally different codebases.
What confuses me is why drivers can't just have a few bits rewritten to work against GEM, especially given that it's in the kernel now and TTM is not. What does TTM do that makes it so hard to switch to GEM?
GEM is only in the kernel for Intel, not for any other GPU family. The implementation for Intel IGPs is felt to be fairly IGP-specific and not suitable for use with GPUs with dedicated video memory (which is most of them). This is, of course, hotly debated. Also, the GEM API defines many of the API calls as driver-specific, so even "having the GEM API implemented" doesn't mean you have something directly useful or portable to another GPU family.
Finally, the changes to make use of a full memory manager tend to be fairly significant. The big issue is that buffers can move around dynamically and you don't know where they are until you actually go to use them; most drivers were written assuming that the buffers stay put once they are allocated.
Last edited by bridgman; 01-21-2009 at 11:56 AM.
ok, so what's the decision? A back-and-forth. Is TTM going away, with GEM getting TTM-like features? Or does TTM "survive" and get into the kernel for ATI, VIA, S3, perhaps NVIDIA, etc., alongside GEM? Are they going to try to get TTM into the kernel anyway?
GEM makes a lot of assumptions that you only have one set of ram between the CPU and GPU, which is true on IGPs but not discrete cards.
nvidia now sucks even more
Wow! Those guys suck big time now.
How long do they plan to be lame like that?