Mesa Vs ICD
There have been some comments lately in the forums about Mesa's performance compared to the so-called "binary blobs", especially in the OS X vs. Linux comparison and its respective thread. It seems fairly clear that the performance delta in the 3D graphics department is attributable to the drivers more than anything else, and that difference may in turn come down to the OpenGL implementation each driver uses. One thing you readily notice when using Mesa, compared to any of the binary drivers (for hardware that has both a binary and an open-source Mesa driver, that is), is the number of extensions supported by the "blob" (or rather its libGL?), which brings me to the point I wanted to discuss:
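To make the extension-count point concrete, here's a rough sketch of the kind of comparison I mean. The two extension lists below are made up and heavily truncated (real `glxinfo` output lists dozens of GL_* tokens per driver); in practice you'd diff the actual output of `glxinfo` under each driver:

```python
# Hypothetical, truncated extension lists; real ones come from `glxinfo`.
mesa_ext = "GL_ARB_multitexture GL_ARB_texture_compression GL_EXT_texture3D"
blob_ext = ("GL_ARB_multitexture GL_ARB_texture_compression GL_EXT_texture3D "
            "GL_ARB_vertex_buffer_object GL_ARB_occlusion_query GL_NV_fence")

def extensions(listing):
    # glxinfo prints extensions as whitespace- or comma-separated tokens
    return set(tok.strip(",") for tok in listing.split())

only_in_blob = sorted(extensions(blob_ext) - extensions(mesa_ext))
print(len(extensions(mesa_ext)), "vs", len(extensions(blob_ext)))
print(only_in_blob)
```

The interesting part is the set difference: whatever the blob exposes that Mesa doesn't is a candidate explanation for the gap.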
What is it that gives these blobs their speed boost? The use of their own GL extensions (optimizations) in their supplied GL library, or the kernel/X11 drivers as such? I tend to lean toward the number and characteristics of the GL extensions supported; then again, I don't have much of an idea how the drivers actually work, beyond what I can figure out from the little information I have read on this subject (and I am trying to read more).
It is often said that Mesa drivers could never achieve the same degree of performance that a dedicated ICD would on a given piece of hardware... but I'm still not sure why that is. My limited understanding of how things work is something like: App -> libGL <--> Driver (<--> OS) <--> Hardware, with the driver serving as a sort of translator of the GL API into architecture-specific commands for the different GPUs... Maybe I've oversimplified the way things actually work.
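The way I picture that chain, libGL mostly forwards each GL call through a dispatch table that the loaded driver fills in. This is just a toy mock of that idea (not Mesa's actual code, and the strings are made-up stand-ins), but it shows why the same API can perform very differently depending on what's behind it:

```python
# Toy mock of the App -> libGL -> Driver indirection; not real Mesa code.
class GLDispatch:
    """Stand-in for the dispatch table a driver installs into libGL."""
    def __init__(self, name, clear_impl):
        self.name = name
        self.Clear = clear_impl

def mesa_clear():
    # Hypothetical: what the open-source driver might do behind glClear
    return "Mesa driver: translate glClear into DRI/software commands"

def blob_clear():
    # Hypothetical: the vendor ICD's own path behind the same call
    return "Vendor ICD: translate glClear into its own GPU command stream"

# The app always calls the same API; swapping the driver swaps what
# happens underneath, which is where the performance difference lives.
for driver in (GLDispatch("mesa", mesa_clear), GLDispatch("blob", blob_clear)):
    print(driver.name, "->", driver.Clear())
```

If that picture is roughly right, the speed difference isn't in the API layer itself but in how well each driver's back end translates those calls for the hardware.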
Pushed the send button too soon... I hadn't finished...
Last edited by Thetargos; 11-26-2008 at 01:02 PM.