Thread: OpenCL Is Coming To The GIMP Via GEGL

  1. #1
    Join Date
    Jan 2007
    Posts
    14,548

    Default OpenCL Is Coming To The GIMP Via GEGL

    Phoronix: OpenCL Is Coming To The GIMP Via GEGL

    Outside of the direct X.Org / Mesa / Linux work being done this year as part of Google's Summer of Code, one of the more interesting projects is work by a student developer with GIMP who is bringing OpenCL support to the graphics program's GEGL image library...

    http://www.phoronix.com/vr.php?view=OTc5OQ

  2. #2
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by phoronix View Post
    483 milliseconds was needed when on the NVIDIA GPU in OpenCL while it took 526 milliseconds on the CPU without OpenCL. Most of the 483 milliseconds was spent transferring data to/from the GPU memory
    This means if you have a really fast CPU, the CPU will be faster without the GPU!

    Only because "most of the 483 milliseconds was spent transferring data to/from the GPU memory".

    Right now I have to turn off GPU acceleration in Firefox 6 and Flash 11, only because I think my Phenom II B50 X4 @ 3.8 GHz is overall faster than my passively cooled HD 4670...

    I get massive mouse input lag in Heroes of Newerth if I use any GPU acceleration outside the game.

  3. #3
    Join Date
    Oct 2009
    Posts
    353

    Default

    Quote Originally Posted by Qaridarium View Post
    This means if you have a really fast CPU, the CPU will be faster without the GPU!
    If one uses PBOs instead of buffer arrays, then it should be faster and/or more energy efficient, because you don't have to transfer data back and forth.
    Upgrading the GPU also improves performance: when I upgraded from a 9600GT to a GTX 560 Ti, the PBO read/write performance in my little test went up about 4x!

    So it's really mostly up to the quality of the source code and the solutions it uses.
    Even if the CPU and GPU solutions run equally fast (say, you have a newer CPU and an older GPU), you should still use the GPU solution, because it saves energy by doing less I/O.

    But then there are still some folks with old hardware that doesn't support PBOs yet (though it's a shame nowadays not to support PBOs), and maybe some crappy drivers too.

  4. #4
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by cl333r View Post
    If one uses PBOs instead of buffer arrays, then it should be faster and/or more energy efficient, because you don't have to transfer data back and forth.
    Upgrading the GPU also improves performance: when I upgraded from a 9600GT to a GTX 560 Ti, the PBO read/write performance in my little test went up about 4x!

    So it's really mostly up to the quality of the source code and the solutions it uses.
    Even if the CPU and GPU solutions run equally fast (say, you have a newer CPU and an older GPU), you should still use the GPU solution, because it saves energy by doing less I/O.

    But then there are still some folks with old hardware that doesn't support PBOs yet (though it's a shame nowadays not to support PBOs), and maybe some crappy drivers too.
    "So it's really mostly up to the quality of the source code and the solutions it uses."

    I think this is the only problem ;-)

    Right now it works, but it causes mouse input lag... maybe they'll fix this in the future!

    Then it may be better, and maybe not faster on some old cards compared to new CPUs, but more energy efficient for sure.

    But right now it sucks!

  5. #5
    Join Date
    Aug 2010
    Posts
    48

    Default

    Quote Originally Posted by Qaridarium View Post
    This means if you have a really fast CPU, the CPU will be faster without the GPU!

    Only because "most of the 483 milliseconds was spent transferring data to/from the GPU memory".
    The idea is to copy the data once, let a whole lot of filters do their thing, and only then copy back. Copying back and forth to the GPU will always be the bottleneck. This benchmark shows the worst-case scenario.
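    The arithmetic behind this point can be sketched with a toy model. Only the 483 ms (GPU, including transfer) and 526 ms (CPU) totals come from the article; the 400/83 ms split between transfer and compute below is an assumed illustration, not a measured figure:

    ```python
    # Toy model: when does GPU offload win once the one-time transfer
    # cost is amortized over a chain of filters?
    TRANSFER_MS = 400.0    # one-time upload + download cost (assumed split)
    GPU_FILTER_MS = 83.0   # per-filter compute time on the GPU (assumed split)
    CPU_FILTER_MS = 526.0  # per-filter time on the CPU (from the article)

    def gpu_total(n_filters):
        """Pay the transfer once, then run every filter on the card."""
        return TRANSFER_MS + n_filters * GPU_FILTER_MS

    def cpu_total(n_filters):
        """No transfer, but every filter runs at full CPU cost."""
        return n_filters * CPU_FILTER_MS

    # With one filter the GPU barely wins; with a chain of five it's no contest.
    print(gpu_total(1), cpu_total(1))  # 483.0 526.0
    print(gpu_total(5), cpu_total(5))  # 815.0 2630.0
    ```

    In other words, the benchmark's single-filter case is exactly the scenario where the transfer overhead hurts most; a realistic filter pipeline amortizes it.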

  6. #6
    Join Date
    Nov 2009
    Location
    Italy
    Posts
    911

    Default

    Blah blah blah... but what about 16bit/channel?
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)

  7. #7
    Join Date
    Jan 2011
    Posts
    58

    Default

    Quote Originally Posted by darkbasic View Post
    Blah blah blah... but what about 16bit/channel?
    High bit depth will be in 3.0. It's been the plan for quite a while already.

  8. #8
    Join Date
    Sep 2008
    Posts
    121

    Default

    This will be highly useful for all filters/plugins.

    It should also be theoretically possible to cut the transfer time in half by keeping the working copy of the graphics on the card at all times and just sending updates to the merged graphics.
    Even a simple brush stroke will typically be applied to just one layer, and the visible image requires a fair amount of computation on top, which can then be copied back.
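    To put rough numbers on the "just send updates" idea: uploading only the dirty region touched by a brush stroke moves far less data than re-uploading the whole canvas. The canvas and tile sizes below are hypothetical round numbers for illustration, not anything GIMP or GEGL actually uses:

    ```python
    # Back-of-envelope: full-canvas upload vs. dirty-tile upload per stroke.
    BYTES_PER_PIXEL = 8  # e.g. a 16-bit RGBA working format (assumed)

    def full_upload(width, height):
        """Bytes moved if the whole canvas is re-uploaded every stroke."""
        return width * height * BYTES_PER_PIXEL

    def dirty_tile_upload(tile_w, tile_h):
        """Bytes moved if only the brush-sized dirty tile is uploaded."""
        return tile_w * tile_h * BYTES_PER_PIXEL

    full = full_upload(4000, 3000)      # a 12-megapixel canvas
    tile = dirty_tile_upload(128, 128)  # one brush-sized dirty tile
    print(full // tile)                 # 732 -- hundreds of times less data
    ```

    The compositing on top of that one layer can then happen entirely on the card, with only the visible result read back.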

  9. #9
    Join Date
    Oct 2009
    Posts
    167

    Default

    Quote Originally Posted by darkbasic View Post
    Blah blah blah... but what about 16bit/channel?
    GEGL has had 16-bit or greater per channel since the beginning. Are you perhaps talking about GIMP?
    In that case the answer is: when all the internals are replaced with GEGL.

  10. #10
    Join Date
    Jan 2010
    Location
    Portugal
    Posts
    945

    Default

    Quote Originally Posted by kayosiii View Post
    Are you perhaps talking about GIMP?
    In that case the answer is: when all the internals are replaced with GEGL.
    3.0 should be a massive improvement over the current situation, if they manage to pull it all off and launch it sometime in the next 10 years. Hopefully long before the decade is over :P Two features I miss in GIMP are 16bit/channel and Free Transform. Apart from that it's an excellent program, even with the multi-window mode. The single-window mode somehow feels more awkward than the classic GIMP mode. Oh, and what the hell are they thinking with that Export-instead-of-Save for every single image format except .xcf???

    @cl333r Are you the one working on that OpenCL backend for GEGL? If you are, nice work; GIMP really needs a performance boost like that.
    Last edited by devius; 08-16-2011 at 05:36 PM.
