
Thread: Mesa Vs ICD


  1. #1
    Join Date
    Apr 2007
    Location
    Mexico City, Mexico
    Posts
    899

    Default Mesa Vs ICD

    There have been some comments lately regarding Mesa's performance compared to the so-called "binary blobs", especially in the OS X vs. Linux thread in the forums. It is fairly clear that the performance delta in 3D graphics is attributable to drivers more than anything else, really... That difference may in turn come down to the OpenGL implementation each driver uses. One thing you readily notice when using Mesa, compared to any of the binary drivers (for hardware that has both a binary and an open-source Mesa driver, that is), is the number of extensions supported by the "blob" (or rather by its libGL?), which brings me to the point I wanted to discuss:

    What actually gives these blobs their speed boost: the optimizations and GL extensions in their supplied GL library, or the kernel/X11 drivers themselves? I lean toward the number and characteristics of the GL extensions supported; then again, I don't have much idea how the drivers actually work, beyond what I can piece together from the little information I have read on this subject (and I am trying to read more).

    It is often said that Mesa drivers could never achieve the same performance on a given piece of hardware that a dedicated ICD would... but I'm still not sure why that is. My limited understanding of how things work is something like: App -> libGL <--> Driver (<-> OS) <--> Hardware, with the driver serving as a sort of translator of the GL API into architecture-specific commands for the different GPUs... Maybe I've oversimplified the way things actually work.
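    The App -> libGL -> Driver path above can be sketched as a dispatch table, which is roughly how a libGL forwards public GL entry points to whatever driver got loaded at runtime. All names below are made up for illustration (this is a toy sketch, not Mesa's actual generated dispatch code):

    ```c
    #include <stdio.h>

    /* Call counters so we can see the "driver" side being reached. */
    static int clear_calls;
    static int draw_verts;

    /* Hypothetical driver entry points: in a real stack these would
     * translate GL calls into GPU-specific command buffers. */
    static void hw_clear(unsigned mask) {
        clear_calls++;
        printf("driver: emit clear, mask=0x%x\n", mask);
    }

    static void hw_draw(int count) {
        draw_verts += count;
        printf("driver: emit draw of %d vertices\n", count);
    }

    /* libGL keeps a dispatch table that the loaded driver fills in. */
    struct gl_dispatch {
        void (*Clear)(unsigned mask);
        void (*DrawArrays)(int count);
    };

    static struct gl_dispatch dispatch = { hw_clear, hw_draw };

    /* The public entry points just indirect through the table. The hop
     * itself is cheap -- the interesting performance differences live in
     * how well each driver translates calls into hardware commands. */
    void myglClear(unsigned mask)  { dispatch.Clear(mask); }
    void myglDrawArrays(int count) { dispatch.DrawArrays(count); }

    int main(void) {
        myglClear(0x4000);   /* roughly GL_COLOR_BUFFER_BIT */
        myglDrawArrays(3);
        return 0;
    }
    ```

    The point of the sketch: the API boundary is just an indirection, so the speed difference between Mesa and a blob isn't the dispatch itself but what happens inside the driver-side functions.
    
    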

    Edit

    Pushed the send button too soon... I hadn't finished...
    Last edited by Thetargos; 11-26-2008 at 01:02 PM.

  2. #2
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,283

    Default

    Some of the performance difference is related to the driver model implemented in Mesa, but the work being done on Gallium (as a replacement for the current Mesa HW driver model) will go a long way toward fixing those issues.

    The other big issue is that the underlying support code (drm, memory management, etc.) is quite a bit "older" in design and functionality than what we use in the binary drivers. Again, the good news is that this is being addressed today. Between the addition of memory management to the kernel (drm) and the work being done to refactor command submission and fencing in the drm, there should be a big lurch forward in both performance and stability over the next 6-12 months.

    The lack of good memory management has also been a factor in the degree of OpenGL support (extensions, etc.). Again, once memory management makes it into the kernel I think you will see a lot more progress being made in terms of the level of GL support.

  3. #3
    Join Date
    Jun 2007
    Location
    Albuquerque NM USA
    Posts
    342

    Default

    Quote Originally Posted by bridgman View Post
    ...there should be a big lurch forward in both performance and stability over the next 6-12 months.
    Brilliant description.
