
Thread: OpenCL Is Coming To The GIMP Via GEGL

  1. #1
    Join Date
    Jan 2007
    Posts
    15,193

    Default OpenCL Is Coming To The GIMP Via GEGL

    Phoronix: OpenCL Is Coming To The GIMP Via GEGL

    Outside of the direct X.Org / Mesa / Linux work being done this year as part of Google's Summer of Code, one of the more interesting projects is work by a student developer with GIMP who is bringing OpenCL support to the graphics program's GEGL image library...

    http://www.phoronix.com/vr.php?view=OTc5OQ

  2. #2
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by phoronix View Post
    483 milliseconds was needed when on the NVIDIA GPU in OpenCL while it took 526 milliseconds on the CPU without OpenCL. Most of the 483 milliseconds was spent transferring data to/from the GPU memory
    This means that if you have a really fast CPU, the CPU will be faster without the GPU!

    Only because "most of the 483 milliseconds was spent transferring data to/from the GPU memory".

    Right now I have to turn off GPU acceleration in Firefox 6 and Flash 11, only because I think my Phenom II B50 X4 at 3.8 GHz is overall faster than my passively cooled HD 4670...

    I get massive mouse input lag in Heroes of Newerth if I use any GPU acceleration outside the game.
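    The break-even point behind this claim can be sketched with a little arithmetic. The 483 ms and 526 ms figures are from the article; the 80% transfer share is an illustrative assumption, not a measured number:

```python
# Break-even model: the GPU wins only while CPU time exceeds
# transfer overhead plus GPU compute time.
# Numbers from the article: 483 ms total on the GPU, 526 ms on the CPU.
gpu_total_ms = 483.0
cpu_total_ms = 526.0

# Assume (illustrative only) that 80% of the GPU total was PCIe transfer.
transfer_ms = 0.8 * gpu_total_ms             # ~386 ms moving data
gpu_compute_ms = gpu_total_ms - transfer_ms  # ~97 ms of actual kernel work

# A CPU only ~10% faster than the benchmarked one already erases the
# GPU's lead, because the transfer cost is fixed:
faster_cpu_ms = cpu_total_ms / 1.1
print(gpu_total_ms < faster_cpu_ms)  # → False: the faster CPU now wins
```

    In this model the GPU's compute advantage is large, but the fixed transfer term sets a floor that a fast CPU can undercut.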

  3. #3
    Join Date
    Oct 2009
    Posts
    353

    Default

    Quote Originally Posted by Qaridarium View Post
    This means that if you have a really fast CPU, the CPU will be faster without the GPU!
    If one uses PBOs instead of buffer arrays, it should be faster and/or more energy efficient, because you don't have to transfer data back and forth.
    Upgrading the GPU also improves performance: when I upgraded from a 9600 GT to a GTX 560 Ti, the PBO read/write performance in my little test went up about 4x!

    So it's really mostly down to the quality of the source code and the solutions it uses.
    Even if the CPU and GPU solutions run equally fast (that is, you have a newer CPU and an older GPU), you should still use the GPU solution, because it saves energy by doing less I/O.

    But then there are still some folks with old hardware that doesn't support PBOs yet (though it's a shame nowadays not to support PBOs), and maybe some crappy drivers.

  4. #4
    Join Date
    Aug 2010
    Posts
    48

    Default

    Quote Originally Posted by Qaridarium View Post
    This means that if you have a really fast CPU, the CPU will be faster without the GPU!

    Only because "most of the 483 milliseconds was spent transferring data to/from the GPU memory"
    The idea is to copy the data once, let a whole lot of filters do their thing, and only then copy back. Copying back and forth to the GPU will always be the bottleneck; this benchmark shows the worst-case scenario.
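    The amortization argument above can be made concrete with a toy model. All per-step timings here are illustrative assumptions chosen so that the single-filter case matches the article's 483 ms / 526 ms figures:

```python
# Toy amortization model: upload once, run n filters on the GPU,
# download once. The transfer cost is paid once, not per filter.
transfer_ms = 386.0    # one-time upload + download (illustrative)
gpu_filter_ms = 97.0   # per-filter GPU compute (illustrative)
cpu_filter_ms = 526.0  # per-filter CPU time (illustrative)

def gpu_chain_ms(n):
    """Total time to run n chained filters on the GPU."""
    return transfer_ms + n * gpu_filter_ms

def cpu_chain_ms(n):
    """Total time to run n chained filters on the CPU."""
    return n * cpu_filter_ms

# With one filter the transfer dominates and the two are close;
# with a chain of filters the GPU pulls far ahead.
for n in (1, 2, 5):
    print(n, gpu_chain_ms(n), cpu_chain_ms(n))
```

    The benchmark in the article runs a single operation, so it pays the full transfer cost for one filter's worth of work; a real GEGL graph would amortize that cost across the whole chain.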

  5. #5
    Join Date
    Nov 2009
    Location
    Italy
    Posts
    971

    Default

    Blah blah blah... but what about 16bit/channel?
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)

  6. #6
    Join Date
    Jan 2011
    Posts
    58

    Default

    Quote Originally Posted by darkbasic View Post
    Blah blah blah... but what about 16bit/channel?
    High bit depth will be in 3.0. It's been the plan for quite a while already.

  7. #7
    Join Date
    Sep 2008
    Posts
    123

    Default

    This will be highly useful for all filters/plugins.

    It should also theoretically be possible to cut the transfer time in half by keeping the working copy of the image on the card at all times and only sending updates to the merged graphics.
    Even a simple brush stroke is typically applied to just one layer, and the visible image requires a fair amount of computation on top of that, which can then be copied back.
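    The "send only updates" idea can be sketched as a dirty-rectangle transfer model. The canvas and stroke sizes below are made-up illustrative values, not anything from the article:

```python
# Dirty-region model: instead of re-uploading the whole canvas after a
# brush stroke, upload only the rectangle the stroke actually touched.
BYTES_PER_PIXEL = 4  # RGBA, 8 bits per channel

def full_upload_bytes(width, height):
    """Bytes moved if the entire canvas is re-uploaded."""
    return width * height * BYTES_PER_PIXEL

def dirty_upload_bytes(rect_w, rect_h):
    """Bytes moved if only the dirty rectangle is re-uploaded."""
    return rect_w * rect_h * BYTES_PER_PIXEL

canvas = full_upload_bytes(4096, 4096)  # 64 MiB for a full re-upload
stroke = dirty_upload_bytes(128, 128)   # 64 KiB for a small stroke
print(canvas // stroke)  # → 1024: three orders of magnitude less traffic
```

    With the working copy resident on the card, each edit only pays for the pixels it changed, which is exactly why the per-stroke transfer cost stops being the bottleneck.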

  8. #8
    Join Date
    Oct 2009
    Posts
    174

    Default

    Quote Originally Posted by darkbasic View Post
    Blah blah blah... but what about 16bit/channel?
    GEGL has had 16-bit-or-greater per channel since the beginning. Are you perhaps talking about GIMP?
    In that case, the answer is: when all of GIMP's internals are replaced with GEGL.

  9. #9
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,199

    Default

    Quote Originally Posted by Qaridarium View Post
    This means that if you have a really fast CPU, the CPU will be faster without the GPU!

    Only because "most of the 483 milliseconds was spent transferring data to/from the GPU memory"
    Only until Fusion is common. Then moving data from the CPU to the GPU is a zero-copy operation. Perhaps it already is on Sandy Bridge (/Ivy Bridge for OpenCL) too?

  10. #10
    Join Date
    Jan 2009
    Posts
    1,445

    Default

    Quote Originally Posted by curaga View Post
    Only until Fusion is common. Then moving data from the CPU to the GPU is a zero-copy operation. Perhaps it already is on Sandy Bridge (/Ivy Bridge for OpenCL) too?
    Since they share both the memory and the L3 cache, I'd assume so... actually, I'm not sure whether there would be a copy from the dedicated SB memory or not (though I know they share the same physical memory, unlike AMD's older integrated graphics, which had the optional sideport memory).
