
Thread: Intel's OpenCL Beignet Project Is Gaining Ground

  1. #21
    Join Date
    Jul 2010
    Posts
    449


    Quote Originally Posted by uid313 View Post
    I see.
    Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
    They don't agree that it's the best way to write their drivers. If you look for posts by Kayden on here, he's gone into detail about it.

  2. #22
    Join Date
    Nov 2008
    Location
    Madison, WI, USA
    Posts
    877


    Quote Originally Posted by uid313 View Post
    I see.
    Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
    In the past, I believe the reasons have been along the lines of:
    1. They've already put a lot of work into the DRI driver, and don't want to stall development for a year or so while they port everything to Gallium, and then also have to deal with all the bug reports for anything that gets mis-ported.
    2. They don't believe that Gallium would give them superior performance to the back-end that they already have been working on.
    3. Most of the Intel developers weren't familiar with the Gallium APIs and TGSI, and it would take training time to get up to speed; time they could otherwise spend improving the DRI driver.


    So... that being said, if Chia-I Wu can get the 'ilo' driver to the point where it is competitive with the Intel DRI driver, maybe we'll see a re-evaluation of policy. I believe it re-uses much of the existing Intel back-end code, which would make it easier for Intel to transition to the Gallium model: the code for their back-end would still be familiar, and most of the TGSI abstraction/conversion code would already be done.

  3. #23
    Join Date
    Jun 2009
    Posts
    2,932


    Quote Originally Posted by uid313 View Post
    I see.
    Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
    Because they had a working driver long before Gallium3D was ready.

    AMD made the switch after somebody else did the original port of r300 and r600 to the Gallium architecture. Once these ports started outperforming the original drivers, AMD made them the default drivers and started working on them.

    Intel had a range of drivers working on classic Mesa when Gallium3D was still in its infancy. There was a proof-of-concept driver for one chipset which was kind-of-OK, but Intel never saw the need to switch to Gallium3D. They have a large Linux team and a codebase they are familiar with. Switching to Gallium would mean lots of short-term headaches, and they don't see any significant pay-off in the long term. At least that's my understanding.

    Nouveau was Gallium3D from the beginning.

    EDIT: Veerappan was faster.

  4. #24
    Join Date
    Nov 2007
    Posts
    1,353


    Quote Originally Posted by pingufunkybeat View Post
    Because they had a working driver long before Gallium3D was ready.

    AMD made the switch after somebody else did the original port of r300 and r600 to the Gallium architecture. Once these ports started outperforming the original drivers, AMD made them the default drivers and started working on them.

    Intel had a range of drivers working on classic Mesa when Gallium3D was still in its infancy. There was a proof-of-concept driver for one chipset which was kind-of-OK, but Intel never saw the need to switch to Gallium3D. They have a large Linux team and a codebase they are familiar with. Switching to Gallium would mean lots of short-term headaches, and they don't see any significant pay-off in the long term. At least that's my understanding.

    Nouveau was Gallium3D from the beginning.

    EDIT: Veerappan was faster.
    There are benefits to going to Gallium that make those headaches worthwhile. I could pop off at least half a dozen right now just off the top of my head. But Intel isn't going to change their mind, so there really isn't any point in trying.

  5. #25
    Join Date
    Jan 2011
    Posts
    1,287


    Quote Originally Posted by Veerappan View Post
    They don't believe that Gallium would give them superior performance to the back-end that they already have been working on.
    That was never the point of Gallium, just a possible side-effect. The point of using Gallium is to speed up development. Better performance *might* happen as a result (you spend less time reinventing the wheel and more time optimizing your code), while the theoretical maximum probably comes from driver-specific code; it would just take forever to write.

  6. #26
    Join Date
    Jan 2011
    Posts
    1,287


    Quote Originally Posted by duby229 View Post
    There are benefits to going to Gallium that make those headaches worthwhile. I could pop off at least half a dozen right now just off the top of my head. But Intel isn't going to change their mind, so there really isn't any point in trying.
    I thought we were discussing this to learn more, not to convince anyone to change their mind. I'm interested in knowing about such benefits.

  7. #27
    Join Date
    Dec 2011
    Posts
    2,108


    Quote Originally Posted by archibald View Post
    They don't agree that it's the best way to write their drivers. If you look for posts by Kayden on here, he's gone into detail about it.
    Then maybe he should have proposed how to fix Gallium3D, or proposed something better than Gallium3D.

    I think a unified graphics driver architecture is a good idea.
    Windows has the Windows Display Driver Model (WDDM).

  8. #28
    Join Date
    Nov 2007
    Posts
    1,353


    Quote Originally Posted by mrugiero View Post
    That was never the point of Gallium, just a possible side-effect. The point of using Gallium is to speed up development. Better performance *might* happen as a result (you spend less time reinventing the wheel and more time optimizing your code), while the theoretical maximum probably comes from driver-specific code; it would just take forever to write.
    The reason that Gallium speeds up development is that it allows a lot more code sharing. Intel could use the existing VDPAU state tracker instead of writing a VA-API state tracker and save time. But that is also exactly the reason they don't want to use Gallium.

    EDIT: or, more pertinent to this thread, they could use Clover instead of writing Beignet.
    Last edited by duby229; 08-19-2013 at 05:22 PM.

  9. #29
    Join Date
    Jan 2011
    Posts
    1,287


    Quote Originally Posted by duby229 View Post
    The reason that Gallium speeds up development is that it allows a lot more code sharing. Intel could use the existing VDPAU state tracker instead of writing a VA-API state tracker and save time. But that is also exactly the reason they don't want to use Gallium.

    EDIT: or, more pertinent to this thread, they could use Clover instead of writing Beignet.
    I'm aware of how it speeds up development. My point is that it doesn't inherently lead to better performance, and that's what I was correcting in the quote. It usually leads to better performance because of the faster development, which comes, as you said, from the shared code. I already stated in a previous post these facts about the use of Gallium and why I think they avoid it (since I'm not an Intel developer/executive, I can't do much more than speculate, but my guess is that they don't want to benefit their competitors through shared code, even if that means more work for themselves).

    EDIT: Anyway, I want to know about the other reasons to use Gallium that you thought of.

  10. #30
    Join Date
    Nov 2007
    Posts
    1,353


    Quote Originally Posted by mrugiero View Post
    I'm aware of how it speeds up development. My point is that it doesn't inherently lead to better performance, and that's what I was correcting in the quote. It usually leads to better performance because of the faster development, which comes, as you said, from the shared code. I already stated in a previous post these facts about the use of Gallium and why I think they avoid it (since I'm not an Intel developer/executive, I can't do much more than speculate, but my guess is that they don't want to benefit their competitors through shared code, even if that means more work for themselves).

    EDIT: Anyway, I want to know about the other reasons to use Gallium that you thought of.
    You can take that as two examples of code sharing that Intel has chosen not to participate in. Don't misunderstand me, Intel has every right to want their OSS drivers to work with their OSS solutions. I'm fine with that. Plus they do contribute a lot of code to a lot of projects. Nobody can really fault Intel for their OSS commitment.

    I do feel there is an argument to be made for Intel to port their OSS driver to Gallium, given the potential it would have for improving the whole stack. But that is really selfish of me to want.
