The way I understand the situation is that GEM is a little bit harder to code for, but is a lot more capable if you know what you're doing. On the other hand, TTM will produce working code faster.
Originally Posted by Regenwald
I don't know exactly what that means or how it works, or even what the differences are that make it so. I'm a total noob, but I guess the idea is to use TTM to get working code fast, or use GEM to get the most capable code.
The two APIs seem to be fairly different. TTM has been around for a couple of years. Some of it (the memory management part) seems to be quite well understood, but TTM also includes some mechanisms to synchronize buffer release with the GPU having finished using the buffer (the "fence" mechanism) and that seems to be causing grief. Not sure if there is an inherent API problem or whether the synchronization is just so different from one GPU vendor to the next that any API would cause pain. As I understand it you can't just use memory management without the fences, but I'm only guessing that from the general discussion.
GEM takes a different approach from TTM in a number of ways. The main differences are (a) it adds a new way of accessing vram buffers, using a filesystem-type pread/pwrite mechanism, and (b) all vram buffers are backed by an equal-sized buffer in system memory, simplifying suspend/resume (at the cost of consuming more system memory space). That second point seems bad at first glance, but Linux does lazy allocation of physical memory (i.e. you can allocate virtual memory, but if you never use it no physical memory gets harmed), so in principle it's not as bad as it sounds. Discussion has now moved on to the nuances of suspend/resume, partly to see if this will be sufficient and partly because a couple of potential problems with the current driver stack showed up during the discussion.
GEM also adds a mechanism to formalize the change of "ownership" of a buffer between CPU and GPU, called domains, which takes care of things like cache flushing. The initial thought was that GEM provided a significant performance boost over TTM, but I believe current thinking is that the initial tests were not using the same version of the test program and that other factors were causing most of the performance delta. That said, I expect the initial GEM implementation has been tweaked up these days, so it probably is faster again.
I think GEM was actually proposed in part because it was *easier* to code for; TTM is more complex (primarily the fence mechanism) but has been around longer so initial implementations exist for most GPUs -- the problem is that those implementations are often a year or two old and so neither match the current driver stack nor the current state of Gallium.
Last edited by bridgman; 06-04-2008 at 02:17 PM.
Ah thanks man
I tell you, you've gone above and beyond again. I can't begin to tell you how much I appreciate you stepping up to help explain these things to us noobs. I think this is about the twelfth or more time that I have had a question that you have personally answered in detail. Thank you so much for trying to help us understand.
Just be aware that my posts in these areas are comparable to a typical poll: accurate +/- 10%, 19 times out of 20.
Last edited by bridgman; 06-04-2008 at 03:44 PM.
An interesting LWN article about TTM and GEM: GEM v. TTM