
Thread: RV600, OpenCL, ffmpeg and blender

  1. #21
    Join Date
    Jan 2009
    Location
    England
    Posts
    125

    Default

    Quote Originally Posted by deanjo View Post
    Yes, the drivers ultimately determine how to handle the openCL API.
    You're doing it again!

    You can't just say 'drivers' when you talk about ATI GPUs and Linux, as there are three, right - one closed (fglrx) and two open ones. As I understand it, any OpenCL 'driver' would be closely tied to both the kernel (DRI) and the X server (but which one?), or have I totally misunderstood?

    So neither of the two open-source ATI drivers nor Mesa has announced work on any OpenCL support, right?
    Last edited by danboid; 01-02-2009 at 07:33 PM.

  2. #22
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,584

    Default

    Quote Originally Posted by danboid View Post
    You're doing it again!

    You can't just say 'drivers' when you talk about ATI GPUs and Linux, as there are three, right - one closed (fglrx) and two open ones. As I understand it, any OpenCL 'driver' would be closely tied to both the kernel (DRI) and the X server (but which one?), or have I totally misunderstood?

    So neither of the two open-source ATI drivers nor Mesa has announced work on any OpenCL support, right?
    Yes, all three drivers would have to be made OpenCL-aware. So far there has been no announcement of OpenCL support work being done in the free drivers, AFAIK.

  3. #23
    Join Date
    Jul 2007
    Posts
    404

    Default

    OpenCL support would not exist in the Xorg drivers, because it has nothing to do with display or windowing.

    What it (very likely) will exist as in the open-source world is a state tracker atop Gallium3D. Gallium3D basically provides a bytecode compiler and interface (TGSI), which allows generic bytecode to be sent to the driver and compiled JIT for the GPU.

    However, Gallium3D drivers (compilers, essentially) need to be written for the cards first. That has just begun for r3xx, and will probably go into full swing for r5xx, r6xx, and r7xx as soon as the basic r6xx and r7xx support is finished.

    FYI, DRI and Mesa are not in the kernel; only DRM is. Also, both are display-related, not compute-related (although Mesa has had basic shader support).

    As for fglrx, who knows? AMD has committed to supporting OpenCL in their drivers, and fglrx will likely share the codebase with Windows, but there isn't a specific time-frame that I know of.
    Last edited by TechMage89; 01-02-2009 at 08:24 PM.

  4. #24
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,796

    Default

    I wonder how AMD plans to reverse the current situation on Windows, where every Radeon gets its ass kicked by NVIDIA in every PhysX game (it's not even considered competition anymore), and how this will relate to Linux GPGPU (advances in this area happen on Windows first, with Linux only considered later).

  5. #25
    Join Date
    Jul 2007
    Posts
    404

    Default

    As far as I know, PhysX isn't all that mainstream at present. I believe AMD GPUs can be used to accelerate Havok physics, though so far I've only heard of that in CrossFire configs. AMD has committed to supporting OpenCL (and it's definitely in their best interest to be able to get into the GPGPU business).

    I can give you a pretty good answer here, because I've interned at a company that does GPGPU development. Some of their customers want to run this stuff on Unix or Linux workstations, and many of the developers use Linux boxes (I was involved in improving compatibility between the Linux and Windows boxes on their network, and since they didn't even have a DC, that was sort of a pain). Right now they only use CUDA, but I expect that if OpenCL were supported cross-platform and cross-vendor, they would move to it as quickly as possible.

    If AMD wants to compete in this market segment, they need GPGPU support on Linux as well as on Windows (because lots of customers want to run GPGPU workloads on Unix/Linux workstations).
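
    Just to show how little vendor-specific code is involved on the host side, here's a rough sketch (nothing but the standard OpenCL 1.0 API, so it runs against whichever vendor's OpenCL driver happens to be installed) that enumerates the available platforms and devices:

    /* Minimal OpenCL device enumeration - a sketch using only the standard
     * OpenCL 1.0 host API; it works against whichever vendor's OpenCL
     * implementation is installed.  Typical build: gcc list_devices.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;

        if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS) {
            fprintf(stderr, "No OpenCL platforms found\n");
            return 1;
        }

        for (cl_uint p = 0; p < num_platforms; ++p) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof(pname), pname, NULL);

            cl_device_id devices[8];
            cl_uint num_devices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devices, &num_devices);

            for (cl_uint d = 0; d < num_devices; ++d) {
                char dname[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(dname), dname, NULL);
                printf("platform \"%s\": device \"%s\"\n", pname, dname);
            }
        }
        return 0;
    }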

  6. #26
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,464

    Default

    For anyone interested in OpenCL, it's worth downloading the spec and taking a look at Appendix B. There are requirements for "live" information sharing with OpenGL, specifically the ability to create OpenCL objects *from* OpenGL objects so that output from OpenCL commands can directly affect what is drawn through OpenGL.

    This doesn't necessarily require that OpenCL be implemented *in* the OpenGL driver, but it does mean that the two drivers (CL and GL) need to be designed to work together and share a common buffer management model among other things.

    http://www.khronos.org/registry/cl/
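
    To make Appendix B a little more concrete, here's a rough sketch of the sharing calls from the cl_khr_gl_sharing extension. It assumes a CL context and queue that were created with GL sharing enabled and an existing GL buffer object; creating those is omitted here, and the variable and function names are made up for illustration:

    /* Sketch of the OpenCL/OpenGL sharing calls described in Appendix B of
     * the OpenCL spec.  Assumes 'ctx' and 'queue' were created with GL
     * sharing enabled (cl_khr_gl_sharing) and that 'gl_vbo' is an existing
     * GL buffer object; that setup is omitted here. */
    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <GL/gl.h>

    cl_mem wrap_and_acquire(cl_context ctx, cl_command_queue queue, GLuint gl_vbo)
    {
        cl_int err;

        /* Create an OpenCL buffer object *from* the OpenGL buffer object;
         * both now refer to the same data store. */
        cl_mem clbuf = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, gl_vbo, &err);
        if (err != CL_SUCCESS)
            return NULL;

        /* GL must be finished with the buffer before CL touches it ... */
        glFinish();
        clEnqueueAcquireGLObjects(queue, 1, &clbuf, 0, NULL, NULL);

        /* ... OpenCL kernels that write into clbuf would run here ... */

        /* ... then hand the buffer back so GL can draw the result. */
        clEnqueueReleaseGLObjects(queue, 1, &clbuf, 0, NULL, NULL);
        clFinish(queue);

        return clbuf;
    }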

  7. #27
    Join Date
    Jan 2009
    Location
    England
    Posts
    125

    Default

    Big thanks to TechMage89 and Bridgman for elucidating the current state of GPGPU support under Linux for me.

    It will indeed be interesting to see how ATI cards running OpenCL compare when running an app that's also been ported to CUDA on a similar NV card, and of course to compare OpenCL performance on similar-generation cards across platforms.

    Has ffmpeg been ported to CUDA yet? Are there any CUDA video encoders?

    What about OpenCL under Windows? Is it any further ahead?

  8. #28
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,584

    Default

    Quote Originally Posted by danboid View Post
    Has ffmpeg been ported to CUDA yet? Are there any CUDA video encoders?
    Not in Linux as of yet. In Windows, yes, there are a few, Badaboom and TMPGEnc being a couple of examples. Nero is also working on one.

  9. #29
    Join Date
    Nov 2007
    Location
    Die trolls, die!
    Posts
    525

    Default

    Wouldn't it be possible to create a general OpenCL interface within Gallium3D, which every card that supports Gallium3D could use?

    In my opinion OpenCL is needed (even, or especially) in the Linux open-source drivers. And ffmpeg and the like must be extended to be able to use OpenCL for encoding (on every platform).

    Video decoding is another thing that must be considered. Could OpenCL be used to decode video while playing it?

    I'm so interested in what will happen in 2009.
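
    As a toy illustration of the kind of per-pixel work that could be offloaded to the GPU during playback (this is just a sketch in OpenCL C, not code from ffmpeg or any existing player), a colour-space conversion kernel might look something like this:

    /* Toy OpenCL C kernel: convert one 4:4:4 YUV frame to packed RGB, one
     * pixel per work-item.  Purely illustrative; a real player would need
     * the actual decode stages too, not just colour conversion. */
    __kernel void yuv444_to_rgb(__global const uchar *y,
                                __global const uchar *u,
                                __global const uchar *v,
                                __global uchar *rgb,
                                const int width,
                                const int height)
    {
        int x = get_global_id(0);
        int row = get_global_id(1);
        if (x >= width || row >= height)
            return;

        int i = row * width + x;
        float Y = (float)y[i];
        float U = (float)u[i] - 128.0f;
        float V = (float)v[i] - 128.0f;

        /* BT.601-ish conversion, clamped to 0..255 */
        float r = clamp(Y + 1.402f * V,              0.0f, 255.0f);
        float g = clamp(Y - 0.344f * U - 0.714f * V, 0.0f, 255.0f);
        float b = clamp(Y + 1.772f * U,              0.0f, 255.0f);

        rgb[3 * i + 0] = (uchar)r;
        rgb[3 * i + 1] = (uchar)g;
        rgb[3 * i + 2] = (uchar)b;
    }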

  10. #30
    Join Date
    Oct 2008
    Posts
    151

    Post

    Quote Originally Posted by danboid View Post
    I understand the drivers are dev only at the moment and we'll need to wait until at least the next big xorg release before mortals can get open-source RV600 accel without rolling their own and hoping for the best, but how does OpenCL fit into this? (...) Then of course someone needs to update ffmpeg (...) Then what about blender? Is blender just going to be running straight on top of Gallium, or Gallium and OpenCL, or??
    There's so much happening in all parts of the stack that it's giving me a headache, but here's how I understand it's supposed to be laid out:

    Today the Mesa driver is huge - it's pretty much the "kitchen sink" of all things graphics. From what I understand, it's being changed so that:

    DRM (kernel) / DRI2 (xorg) handle direct rendering, but now rely on the following components:

    1) Generic GEM memory management in kernel
    2) Generic KMS mode setting in kernel
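
    As a rough illustration of what "generic KMS mode setting in kernel" ends up exposing to userspace through libdrm (just a sketch; it assumes the card's DRM node is /dev/dri/card0), something like this lists the connectors the kernel knows about:

    /* Sketch: list KMS connectors via libdrm.  Assumes the KMS-enabled DRM
     * node is /dev/dri/card0; link with -ldrm and libdrm's include path. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) {
            fprintf(stderr, "drmModeGetResources failed (no KMS?)\n");
            close(fd);
            return 1;
        }

        for (int i = 0; i < res->count_connectors; ++i) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            printf("connector %u: %s, %d modes\n",
                   conn->connector_id,
                   conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                          : "disconnected",
                   conn->count_modes);
            drmModeFreeConnector(conn);
        }

        drmModeFreeResources(res);
        close(fd);
        return 0;
    }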

    Then for 3D, what is today the Mesa driver is split off into:

    1) Hardware-specific Gallium3D driver that exposes the lowest level of 3D functionality like universal shaders.
    2) Generic state tracker that takes higher-level languages like OpenGL, OpenCL, and (possibly) DirectX and translates these into Gallium3D calls.

    It is my understanding that Gallium3D is not directly usable by an application - it requires at least some light state tracker on top to keep track of objects in memory, but that you could make a fairly simple "pass-through" with native Gallium3D instructions. I think OpenCL should perform at near raw Gallium3D performance anyway, but then it has to comply with the OpenCL specification while Gallium3D is there to expose all hardware functionality.

    I would think that most applications would target OpenCL, which would become high-performance Gallium3D instructions (if it can't, there's something wrong, as the whole purpose of OpenCL is to run that kind of workload and the purpose of Gallium3D is to expose the hardware). Gallium3D would then in turn run this on the actual shaders and pass the results back up.
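
    To make that flow concrete (the application hands OpenCL C source to the driver, and the driver compiles it for whatever hardware sits underneath), here's a minimal host-side sketch using only standard OpenCL 1.0 calls, with error handling mostly omitted. On the open stack, a Gallium3D-based implementation would be what ends up behind these calls:

    /* Sketch: an application hands OpenCL C source to the driver, which
     * compiles it at runtime for the underlying device.  Standard OpenCL 1.0
     * host API; error handling mostly omitted for brevity. */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void scale(__global float *buf, float factor) {\n"
        "    size_t i = get_global_id(0);\n"
        "    buf[i] *= factor;\n"
        "}\n";

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

        /* The driver compiles the source just-in-time for this device. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(prog, "scale", NULL);

        float data[1024];
        for (int i = 0; i < 1024; ++i)
            data[i] = (float)i;

        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof(data), data, NULL);
        float factor = 2.0f;
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(kernel, 1, sizeof(float), &factor);

        size_t global = 1024;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

        printf("data[10] = %f\n", data[10]);  /* expect 20.0 */

        clReleaseMemObject(buf);
        clReleaseKernel(kernel);
        clReleaseProgram(prog);
        clReleaseCommandQueue(queue);
        clReleaseContext(ctx);
        return 0;
    }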

    Bridgman did raise an interesting point though, which I haven't seen discussed anywhere - if I run an OpenGL application and a DirectX application (there's been talk of porting WINE's DirectX emulation to Gallium3D), or some other combination, something has to make sure different state trackers don't use the same shaders. It would be too silly if only one could run at a time; it'd be a little bit like the old sound-server problem, where only one application could grab the output at a time.

    Also note that AMD really only needs to release enough specs to implement the hardware-specific bits, but that leaves a huge, huge job to the community in implementing accelerated state tracker(s) for OpenGL, OpenCL, DirectX/WINE and so on.

    Also, I left out quite a bit of the simpler acceleration that should probably, in time, be reimplemented on top of Gallium3D, like 2D acceleration and textured video - modern cards don't have a separate 2D engine any more.

    Apart from everything else, there's also the question of hardware-accelerated video, which could be done either as generic OpenCL/Gallium3D instructions or by exposing custom hardware, which would go outside everything I've talked about here.

    In short, there's a lot happening, but it's probably a few years until this is all done; it's basically rewriting most of the X stack from top to bottom.
