You're doing it again!
Originally Posted by deanjo
You can't just say 'drivers' when you talk about ATI GPUs and Linux, as there are three, right? One closed (fglrx) and two open ones. As I understand it, any OpenCL 'driver' would be closely tied to both the kernel (dri) and the X server (but which one?), or have I totally misunderstood?
So neither of the two open source ATI drivers nor Mesa have announced work on any OpenCL support, right?
Last edited by danboid; 01-02-2009 at 08:33 PM.
Yes, all three drivers would have to be made OpenCL-aware. So far there has been no announcement of OpenCL support work being done in the free drivers, AFAIK.
Originally Posted by danboid
OpenCL support would not exist in the Xorg drivers, because it has nothing to do with display or windowing.
What it (very likely) will exist as in the open source world is a state tracker atop Gallium3D. Gallium3D basically provides a bytecode compiler and interface (TGSI) which allows generic bytecode to be sent to the driver and compiled JIT for the GPU.
However, Gallium3D drivers (compilers, essentially) need to be written for the cards first. That has just begun for r3xx, and will probably go into full swing for r5xx, r6xx, and r7xx as soon as the basic r6xx and r7xx support is finished.
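To make the "generic bytecode fed to a driver backend" idea concrete, here is a toy sketch in Python. This is emphatically not real TGSI (the opcode names, the stack machine, and the `run_bytecode` function are all made up for illustration); it just shows how one intermediate representation can serve several front ends (OpenGL, OpenCL, ...) regardless of which hardware backend ultimately executes it.

```python
# Toy illustration of the Gallium3D/TGSI idea: a front end (state
# tracker) compiles high-level code down to generic bytecode, and a
# backend executes it. This made-up stack machine stands in for TGSI.

def run_bytecode(program, inputs):
    """Execute a list of (op, arg) tuples on a tiny stack machine."""
    stack = []
    for op, arg in program:
        if op == "LOAD":        # push a named input value
            stack.append(inputs[arg])
        elif op == "CONST":     # push an immediate constant
            stack.append(arg)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(f"unknown op {op}")
    return stack.pop()

# A hypothetical front end would compile "y = 2*x + 1" down to:
program = [("LOAD", "x"), ("CONST", 2.0), ("MUL", None),
           ("CONST", 1.0), ("ADD", None)]
print(run_bytecode(program, {"x": 3.0}))  # -> 7.0
```

In the real stack the "backend" step is the hardware-specific Gallium3D driver, which JIT-compiles the bytecode for the GPU instead of interpreting it.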
FYI, DRI and Mesa are not in the kernel; only DRM is. Also, both are display-related, not computation-related (although Mesa has had basic shader support).
As for fglrx, who knows? AMD has committed to supporting OpenCL in their drivers, and fglrx will likely share the codebase with Windows, but there isn't a specific time-frame that I know of.
Last edited by TechMage89; 01-02-2009 at 09:24 PM.
I wonder how AMD plans to reverse the current situation on Windows, where every Radeon gets its ass kicked by NVIDIA in every PhysX game (it's not even considered competition anymore), and how this will relate to Linux GPGPU. Advances in this area happen on Windows first, and Linux is only considered later.
As far as I know, PhysX isn't all that mainstream at present. I believe AMD GPUs can be used to accelerate Havok physics, though so far I've only heard of it in CrossFire configs. AMD has committed to supporting OpenCL, and it's definitely in their best interest to be able to get into the GPGPU business.
I can give you a pretty good answer here, because I've interned at a company that does GPGPU development. Some of their customers want to run this stuff on Unix or Linux workstations, and many of the developers use Linux boxes. (I was involved in improving compatibility between the Linux and Windows boxes on their network; they didn't even have a DC, which was a bit of a pain.) Right now they only use CUDA, but I expect that if OpenCL were supported cross-platform and cross-vendor, they would move to it as quickly as possible.
If AMD wants to compete in this market segment, they need GPGPU support on Linux as well as on Windows (because lots of customers want to run GPGPU stuff on Unix/Linux workstations).
For anyone interested in OpenCL, it's worth downloading the spec and taking a look at Appendix B. There are requirements for "live" information sharing with OpenGL, specifically the ability to create OpenCL objects *from* OpenGL objects so that output from OpenCL commands can directly affect what is drawn through OpenGL.
This doesn't necessarily require that OpenCL be implemented *in* the OpenGL driver, but it does mean that the two drivers (CL and GL) need to be designed to work together and share a common buffer management model among other things.
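As a rough illustration of that sharing discipline, here is a toy Python model of one buffer visible to both sides. The `SharedBuffer` class and its method names are invented for this sketch; real code would use the spec's `clCreateFromGLBuffer()` to wrap the GL object and `clEnqueueAcquireGLObjects()` / `clEnqueueReleaseGLObjects()` to hand ownership back and forth.

```python
# Toy model of the OpenCL/OpenGL sharing rules sketched in Appendix B
# of the OpenCL spec: a CL buffer is *created from* an existing GL
# buffer (same memory, no copy), and CL must acquire the object before
# touching it, then release it back before GL uses it again.

class SharedBuffer:
    """One allocation, visible to both a 'GL' side and a 'CL' side."""
    def __init__(self, data):
        self.data = list(data)
        self.owner = "GL"          # GL owns the buffer by default

    def cl_acquire(self):          # stands in for clEnqueueAcquireGLObjects
        assert self.owner == "GL", "GL must be done with the buffer first"
        self.owner = "CL"

    def cl_release(self):          # stands in for clEnqueueReleaseGLObjects
        assert self.owner == "CL"
        self.owner = "GL"

    def cl_write(self, values):    # e.g. the output of an OpenCL kernel
        assert self.owner == "CL", "must acquire before CL writes"
        self.data[:] = values

    def gl_draw(self):             # GL reads the very same memory
        assert self.owner == "GL", "must release before GL reads"
        return self.data

buf = SharedBuffer([0.0, 0.0, 0.0])
buf.cl_acquire()
buf.cl_write([1.0, 2.0, 3.0])      # CL kernel results land in the buffer
buf.cl_release()
print(buf.gl_draw())               # GL now draws the CL-computed data
```

The point of the model: nothing is ever copied between the two APIs, which is exactly why the CL and GL drivers must agree on a common buffer management scheme underneath.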
Big Thanks to TechMage89 and Bridgman for elucidating the state of GPGPU support under Linux right now for me.
It will indeed be interesting to see how ATI cards running OpenCL compare against a similar NV card running the same app ported to CUDA, and of course to compare OpenCL performance on similar-generation cards across platforms.
Has ffmpeg been ported to CUDA yet? Are there any CUDA video encoders?
What about OpenCL under Windows? Is it any further ahead?
Not on Linux as of yet. On Windows, yes, there are a few; Badaboom and TMPGEnc are a couple of examples. Nero is also working on one.
Originally Posted by danboid
Wouldn't it be possible to create a general OpenCL interface within Gallium3D, which every card that supports Gallium3D could use?
In my opinion, OpenCL is needed even (or especially) in Linux open source drivers. And ffmpeg and the like must be extended to be able to use OpenCL for encoding (on every platform).
Video decoding is another thing which must be considered. Could OpenCL be used to decode video while playing it?
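A quick sketch of why video work maps so well onto OpenCL: most per-frame steps are independent per-pixel computations, so an OpenCL runtime can launch one work-item per pixel instead of looping. The kernel below is plain Python (not OpenCL C) and the function names are made up; it performs the RGB-to-luma step that encoders run on every frame, using the BT.601 luma weights.

```python
# Emulation of the OpenCL data-parallel model for video processing.
# An OpenCL runtime would launch luma_kernel() as one work-item per
# pixel over an NDRange; here a Python loop stands in for that launch.

def luma_kernel(pixel):
    """What one OpenCL work-item would compute for one pixel (BT.601)."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def convert_frame(frame):
    # Emulate the NDRange launch: one kernel invocation per pixel.
    return [[luma_kernel(px) for px in row] for row in frame]

frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
luma = convert_frame(frame)
print(luma[1][1])  # white pixel -> 255.0
```

Decoding is the same story in reverse (IDCT, motion compensation, colorspace conversion are all per-block or per-pixel), which is why it is a plausible OpenCL workload, though dedicated decode hardware may still beat it.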
I'm so interested in what will happen in 2009
There's so much happening in all parts of the stack that it's giving me a headache, but here's how I've understood how it's supposed to be laid out:
Originally Posted by danboid
Today the Mesa driver is huge; it's pretty much the "kitchen sink" of all things graphics. From what I've understood, it's being changed so that:
DRM (kernel) / DRI2 (xorg) handle direct rendering, but now rely on the following components:
1) Generic GEM memory management in kernel
2) Generic KMS mode setting in kernel
Then for 3D, what is today the Mesa driver is split off into:
1) Hardware-specific Gallium3D driver that exposes the lowest level of 3D functionality like universal shaders.
2) Generic state tracker that takes higher-level languages like OpenGL, OpenCL, DirectX (possibly) and translates these into Gallium3D calls.
It is my understanding that Gallium3D is not directly usable by an application; it requires at least some light state tracker on top to keep track of objects in memory, though you could make a fairly simple "pass-through" tracker exposing native Gallium3D instructions. I think OpenCL should perform at near raw Gallium3D speed anyway, but it has to comply with the OpenCL specification, while Gallium3D is there to expose all hardware functionality.
I would think that most applications would target OpenCL, which would become high-performance Gallium3D instructions (if it can't, something is wrong, as the whole purpose of OpenCL is to run that kind of workload and the purpose of Gallium3D is to expose the hardware). Gallium3D would then in turn run this on the actual shaders and pass the results back up.
Bridgman did raise an interesting point though, which I haven't seen discussed anywhere: if I run an OpenGL application and a DirectX application (there's been talk of porting WINE's DirectX emulation to Gallium3D), or some other combination, something has to make sure different state trackers don't use the same shaders. It would be too silly if only one could run at a time; it'd be a bit like the old sound server problem, where only one client could grab the output at a time.
Also note that AMD really only needs to release enough specs to implement the hardware-specific bits, but that leaves a huge, huge job to the community in implementing accelerated 3D state tracker(s) for OpenGL, OpenCL, DirectX/WINE and so on.
Also, I left out quite a bit of simpler acceleration that should probably, in time, be replaced with Gallium3D converters, like 2D acceleration and textured video; modern cards don't have a separate 2D engine.
Apart from everything else, there's also the question of hardware-accelerated video, which could be done either as generic OpenCL/Gallium3D instructions or by exposing custom hardware, which would go outside everything I've talked about here.
In short, lots of things are happening, but it's probably a few years until this is all done; it's basically rewriting most of the X stack top to bottom.