
Thread: Gallium3D OpenCL GSoC Near-Final Status Update


  1. #1
    Join Date
    Jan 2007
    Posts
    15,098

    Default Gallium3D OpenCL GSoC Near-Final Status Update

    Phoronix: Gallium3D OpenCL GSoC Near-Final Status Update

    Google's 2011 Summer of Code is coming to an end, with today being one of the soft deadlines for the student developers to finish up work on their summer projects. Of this year's Mesa GSoC summer projects, I mentioned that the MLAA support for Gallium3D was a success, with the post-processing infrastructure and morphological anti-aliasing support seeking mainline inclusion into Mesa. Here's a status update on how the Gallium3D OpenCL support has come along over the summer...

    http://www.phoronix.com/vr.php?view=OTgwOQ

  2. #2
    Join Date
    Oct 2008
    Posts
    3,173

    Default

    Unfortunately, there's no word right now on future plans for the coming months -- if he will even be contributing to the Mesa project following the formal end of GSoC 2011 -- or eventual plans for graphics driver support and mainline integration.
    I believe he has already stated his intent to continue working on the project after GSoC ends, albeit at a slower pace. That's why he tried to get the complicated pieces done this summer; later he can add in all the simple built-in functions that are still missing, like clamp(), dot products, etc., which have to be implemented for all the different parameter types.
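
    To make concrete why those "simple" builtins are still a lot of typing: the OpenCL C spec defines clamp(x, minval, maxval) as min(max(x, minval), maxval), but an implementation has to provide it for every scalar type and every vector width of each. A purely illustrative sketch (not Clover's actual code; cl_clamp and the vec type are made-up names):

    Code:
    // Hypothetical illustration, not Clover's actual code.
    #include <algorithm>

    // Scalar form, per the OpenCL C spec definition of clamp():
    template <typename T>
    T cl_clamp(T x, T minval, T maxval) {
        return std::min(std::max(x, minval), maxval);
    }

    // Stand-in for float2, float4, int4, ... which OpenCL C provides
    // as first-class types.
    template <typename T, int N>
    struct vec { T s[N]; };

    // Componentwise vector form; this overload (and many siblings) is
    // what has to exist for every element type and width combination.
    template <typename T, int N>
    vec<T, N> cl_clamp(vec<T, N> x, vec<T, N> minval, vec<T, N> maxval) {
        vec<T, N> r;
        for (int i = 0; i < N; ++i)
            r.s[i] = cl_clamp(x.s[i], minval.s[i], maxval.s[i]);
        return r;
    }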

    Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.
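
    Just to gesture at what that first step involves, here's a hypothetical sketch of mine (written against the LLVM C++ API as I know it; the TGSI mnemonics are only indicative, and this is not code from Mesa or any real backend). The backend has to walk the LLVM IR and pick a TGSI opcode for each instruction, and everything else (register allocation, structurizing control flow, lowering intrinsics) comes on top of that:

    Code:
    // Hypothetical sketch only -- not code from any real project.
    #include <llvm/IR/BasicBlock.h>
    #include <llvm/IR/Function.h>
    #include <llvm/IR/Instruction.h>
    #include <llvm/Support/raw_ostream.h>

    // Walk every instruction in a function and print a TGSI-style
    // opcode for it.  A real backend would also allocate registers,
    // structurize control flow, and lower intrinsics.
    void emitTgsiLike(llvm::Function &F) {
        for (llvm::BasicBlock &BB : F) {
            for (llvm::Instruction &I : BB) {
                switch (I.getOpcode()) {
                case llvm::Instruction::FAdd: llvm::outs() << "ADD\n"; break;
                case llvm::Instruction::FMul: llvm::outs() << "MUL\n"; break;
                case llvm::Instruction::FSub: llvm::outs() << "SUB\n"; break;
                default:
                    llvm::outs() << "; unhandled: " << I.getOpcodeName() << "\n";
                }
            }
        }
    }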

  3. #3
    Join Date
    Feb 2009
    Location
    France
    Posts
    309

    Default

    Quote Originally Posted by smitty3268 View Post
    Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.
    Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to do so! But it won't be easy at all! Only one company has done this, and we should soon hear from them.

    As for sending the code to the GPU, it is relatively easy.

  4. #4
    Join Date
    Jan 2009
    Posts
    1,706

    Default

    Quote Originally Posted by MPF View Post
    Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to do so! But it won't be easy at all! Only one company has done this, and we should soon hear from them.

    As for sending the code to the GPU, it is relatively easy.
    You talk in riddles, old man.

  5. #5
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    I haven't looked at the implementation at all, so I could be wrong, but I'm going to assume that the LLVM backend is just for the CPU implementation, and that the intent is to compile directly to TGSI or the like for GPU-based backends.

  6. #6
    Join Date
    Oct 2008
    Posts
    3,173

    Default

    Quote Originally Posted by MPF View Post
    Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to do so! But it won't be easy at all! Only one company has done this, and we should soon hear from them.

    As for sending the code to the GPU, it is relatively easy.
    Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to this code, I don't think you can expect this developer to finish a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needed to be done - modifying Gallium drivers to accept LLVM directly seems like a good idea, but that's probably multiple GSoCs right there just for that part.

    And wasn't the developer consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether GLSL IR supports the sort of operations compute would need.

  7. #7
    Join Date
    Feb 2009
    Location
    France
    Posts
    309

    Default

    Quote Originally Posted by smitty3268 View Post
    Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to this code, I don't think you can expect this developer to finish a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needed to be done - modifying Gallium drivers to accept LLVM directly seems like a good idea, but that's probably multiple GSoCs right there just for that part.

    And wasn't the developer consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether GLSL IR supports the sort of operations compute would need.
    I'm not the right guy to ask. I'll suggest we talk about that at XDC 2011. We know how to execute kernels on NVIDIA boards, but we need to discuss the architecture that will generate this code.

  8. #8
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,514

    Default

    I'm trying to get at least one of our developers up/down (depending on the developer) to XDC to talk about this as well. I think it's fair to say that we are leaning towards using LLVM IR for compute but staying with TGSI for graphics (or potentially GLSL IR if the community moves that way).

    Obviously having two different IRs implies either some stacking or some duplicated work, so it is a good topic for discussion.

    LLVM IR didn't seem like a great fit for graphics on GPUs with vector or VLIW shader hardware since so much of the workload was naturally 3- or 4-component vector operations, but for compute that isn't necessarily such an issue.
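
    As a toy illustration of that concern (hypothetical code, not from any driver): the multiply-add below is one conceptual instruction on vec4/VLIW shader hardware, but once a compiler mid-end scalarizes it into four independent operations, the backend has to re-discover the grouping to keep the hardware lanes full.

    Code:
    // Illustration only.  Graphics shaders are full of operations
    // like this:
    struct vec4 { float x, y, z, w; };

    // Conceptually one instruction slot on vec4/VLIW shader hardware:
    // all four lanes do a multiply-add in the same cycle.
    vec4 mad(vec4 a, vec4 b, vec4 c) {
        return { a.x * b.x + c.x,
                 a.y * b.y + c.y,
                 a.z * b.z + c.z,
                 a.w * b.w + c.w };
    }
    // Scalarized, the same work looks like four unrelated scalar ops
    // that the backend must pack back together.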

  9. #9
    Join Date
    Sep 2010
    Posts
    147

    Default

    Quote Originally Posted by smitty3268 View Post
    Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to this code, I don't think you can expect this developer to finish a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needed to be done - modifying Gallium drivers to accept LLVM directly seems like a good idea, but that's probably multiple GSoCs right there just for that part.

    And wasn't the developer consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether GLSL IR supports the sort of operations compute would need.
    Well, compute != graphics. The developer consensus on LLVM is that it's not well-suited for shaders, i.e. graphics. Compute is a different matter.

    If we're lucky, we can get Gallium driver developers to add LLVM IR support to their drivers, if someone comes up with a good way to generate GPU code from LLVM. But I would guess that the LunarGLASS developers have already figured that part out. Or at least, I think they've figured out a way to generate code for targets that require structured control flow. I think they said a few months ago that they had it working when targeting Mesa IR, which requires structured control flow just like GPUs do. So maybe some of their work can be adapted to Clover.
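
    To make "structured control flow" concrete, here is a minimal hypothetical example (not LunarGLASS or Mesa code): LLVM IR allows arbitrary branches, but GPU-style IRs only provide nesting constructs like IF/ELSE/ENDIF and LOOP/ENDLOOP, so a code generator has to re-nest the branches.

    Code:
    // Hypothetical illustration of the structurization problem.
    // LLVM IR allows arbitrary forward branches, like this goto:
    int unstructured(int x) {
        if (x < 0) goto done;   // branch straight past the work
        x *= 2;
    done:
        return x;
    }

    // GPU-style IRs (TGSI, Mesa IR) only have IF/ELSE/ENDIF and
    // LOOP/ENDLOOP, so the same logic must be re-nested:
    int structured(int x) {
        if (x >= 0) {           // condition inverted, body nested
            x *= 2;
        }
        return x;
    }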

  10. #10
    Join Date
    Mar 2011
    Posts
    13

    Default

    Hello,

    Here are more details about the status of Clover:

    • The API is complete: any application can now use Clover and will not fail due to missing or unimplemented symbols.
    • The implementation itself is complete: there are no stubs, and every API entry point actually does something.
    • The interesting part: Clover can launch both native kernels (Phoronix covered that two months ago) and compiled kernels, so it is really feature-complete.
    • The only things missing are built-in functions. That means that although we can create memory objects, images, events, command queues and all the other OpenCL objects, and although we can compile and launch OpenCL C kernels, these kernels cannot yet use functions like clamp(), smoothstep(), etc.
    • The most complex built-ins are implemented, though, like image reading and writing, and barrier() (a built-in that will be described in detail in the documentation, as it uses things like POSIX contexts (man setcontext); a sketch of the idea follows this list).
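
    Purely as an illustration of the setcontext() idea (my sketch, not Clover's actual code; all names are made up): each work-item runs in its own POSIX ucontext, and barrier() simply swaps back to a scheduler, which resumes the other work-items until every one of them has reached the barrier.

    Code:
    // Hedged sketch of the idea only -- not Clover's actual code.
    #include <ucontext.h>
    #include <cstdio>

    static const int NWORK = 2;                  // two pretend work-items
    static ucontext_t sched_ctx, item_ctx[NWORK];
    static char stacks[NWORK][64 * 1024];
    static int current;

    static void cl_barrier() {                   // "barrier()" builtin
        // Suspend this work-item and resume the scheduler; we are
        // resumed here once every other work-item has also yielded.
        swapcontext(&item_ctx[current], &sched_ctx);
    }

    static void kernel_body() {                  // stand-in OpenCL kernel
        std::printf("item %d: before barrier\n", current);
        cl_barrier();
        std::printf("item %d: after barrier\n", current);
    }

    int main() {
        for (int i = 0; i < NWORK; ++i) {
            getcontext(&item_ctx[i]);
            item_ctx[i].uc_stack.ss_sp   = stacks[i];
            item_ctx[i].uc_stack.ss_size = sizeof stacks[i];
            item_ctx[i].uc_link = &sched_ctx;    // return here when done
            makecontext(&item_ctx[i], kernel_body, 0);
        }
        // Pass 1 runs every item up to the barrier; pass 2 runs them
        // all from the barrier to completion.
        for (int pass = 0; pass < 2; ++pass)
            for (current = 0; current < NWORK; ++current)
                swapcontext(&sched_ctx, &item_ctx[current]);
        return 0;
    }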


    I'll write the documentation in the coming days (I've already begun). It will be in Doxygen format, and I was pleased to see yesterday that Doxygen can now produce exceptionally good-looking documentation compared with what it produced one or two years ago. Then I'll rework parts of the image functions (they are currently tied to the x86 architecture through SSE2; I will reimplement them in a more architecture-neutral way).

    The documentation will be available in the source code and also on my people.freedesktop.org page, so anybody will be able to view it.
