
Thread: OpenCL Support Atop Gallium3D Is Here, Sort Of

  1. #41
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Quote Originally Posted by bridgman View Post
    In principle, yes. In practice I expect there might be some tweaking required for each driver, in case (for example) the new state tracker used some Gallium3D API combinations which had not been exercised before.
    OK, so basically Gallium3D exposes all the 'functions' that a graphics card is capable of performing, and a state tracker can then 'dictate' these functions (which makes a state tracker a driver for Gallium3D?)?

  2. #42
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,129

    Default

    Quote Originally Posted by V!NCENT View Post
    OK, so basically Gallium3D exposes all the 'functions' that a graphics card is capable of performing, and a state tracker can then 'dictate' these functions (which makes a state tracker a driver for Gallium3D?)?
    Quite close. All state trackers translate their command streams to a common low-level 'intermediate language' (IL). The various hardware drivers then translate this IL to a format that the hardware can understand and execute.

    The IL and the state trackers are shared between all Gallium drivers, while the hardware drivers are specific to each GPU. The idea is that this increases developer efficiency: if the IL is sufficiently abstract, then adding e.g. an OpenCL state tracker will (ideally) allow all Gallium drivers to execute OpenCL code without modifying the driver! Ditto for OpenGL 3.x, EXA, OpenVG etc etc etc.
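
    To illustrate the split in (purely hypothetical) code -- the struct and function names below are invented for illustration; the real interface lives in Mesa's Gallium headers and the real IL is TGSI:

    Code:
    /* Hypothetical sketch of the Gallium idea -- names are made up;
     * the real interface is Mesa's pipe_context, the real IL is TGSI. */
    #include <stdio.h>

    /* One hardware-neutral IL instruction, as a state tracker might emit it. */
    struct il_instruction {
        const char *opcode;     /* e.g. "MOV", "MUL" */
        int dst, src0, src1;    /* abstract register indices */
    };

    /* Each hardware driver provides one hook: turn IL into GPU commands. */
    struct hw_driver {
        const char *name;
        void (*emit)(const struct il_instruction *inst);
    };

    static void r300_emit(const struct il_instruction *inst)
    {
        printf("r300: encoding %s for the R300 command stream\n", inst->opcode);
    }

    static void nv50_emit(const struct il_instruction *inst)
    {
        printf("nv50: encoding %s for the NV50 command stream\n", inst->opcode);
    }

    int main(void)
    {
        /* Whatever the state tracker (GL, CL, VG...) is, it only emits IL,
         * so any driver that understands the IL can run it unchanged. */
        struct il_instruction inst = { "MUL", 0, 1, 2 };
        struct hw_driver drivers[] = { { "r300", r300_emit },
                                       { "nv50", nv50_emit } };
        for (int i = 0; i < 2; i++)
            drivers[i].emit(&inst);
        return 0;
    }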

  3. #43
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Quote Originally Posted by BlackStar View Post
    The idea is that this increases developer efficiency: if the IL is sufficiently abstract, then adding e.g. an OpenCL state tracker will (ideally) allow all Gallium drivers to execute OpenCL code without modifying the driver! Ditto for OpenGL 3.x, EXA, OpenVG etc etc etc.
    OK, so basically, soon the entire Linux graphical desktop (well, most parts of course) will be hardware accelerated by the graphics card? OpenGL, OpenVG, OpenCL... And this is all going to be very compatible with any graphics card out there...

    So we will have an extremely fast desktop and applications (OpenCL), and less burden on the CPU, which will in turn be freed up, so we will also see a performance increase there as well?

    Man-o-man this is gonna be good

    Is the OpenCL state tracker a library? If so, and I wanted to code an app that takes advantage of OpenCL, would I have to link to the OpenCL lib? And would this be the Right Thing to do?

  4. #44
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,129

    Default

    Quote Originally Posted by V!NCENT View Post
    OK, so basically, soon the entire Linux graphical desktop (well, most parts of course) will be hardware accelerated by the graphics card? OpenGL, OpenVG, OpenCL... And this is all going to be very compatible with any graphics card out there...
    The only issue is that you need Gallium drivers. Nouveau is already focusing on Gallium, and there is an experimental r300g branch for R300-R500 cards from ATI. Intel hasn't decided whether they'll ship Gallium drivers yet.

    Just note that binary drivers won't take advantage of this stack.

    Is the OpenCL state tracker a library? If so, and I wanted to code an app that takes advantage of OpenCL, would I have to link to the OpenCL lib? And would this be the Right Thing to do?
    Right now, every vendor ships its own OpenCL libraries. You can download an implementation from ATI that runs on the CPU, or request access to an implementation from Nvidia that runs on the GPU. AFAIK, OpenCL through Gallium is not available yet.

    The only difficulty is that there is no common OpenCL library as there is for OpenGL (you link -lGL and don't care who implements it). However, as long as there are no ABI issues, you should be able to code your app using a specific OpenCL library and run it on another.
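
    For example, a minimal "hello platform" check against whichever vendor library is installed might look like this (assuming the conventional -lOpenCL link name; check your vendor's docs, as packaging varies):

    Code:
    /* Minimal OpenCL platform query against a vendor library.
     * Build with something like: cc hello_cl.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[4];
        cl_uint count = 0;

        if (clGetPlatformIDs(4, platforms, &count) != CL_SUCCESS) {
            fprintf(stderr, "no OpenCL implementation found\n");
            return 1;
        }
        for (cl_uint i = 0; i < count && i < 4; i++) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            /* e.g. the vendor's CPU or GPU implementation */
            printf("platform %u: %s\n", i, name);
        }
        return 0;
    }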

  5. #45
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,583

    Default

    Quote Originally Posted by V!NCENT View Post
    So we will have an extremely fast desktop and applications (OpenCL) and less burden on the CPU which will in turn be free'ed-up (someone please correct this word for me in proper english) so we will also see a performance increase there as well?
    Only certain types of applications (those that benefit from parallel data operations) can be sped up, and that of course is only if the application is coded to take advantage of OpenCL (and coded in a manner that doesn't actually hurt performance).

  6. #46
    Join Date
    Aug 2008
    Posts
    77

    Default

    Quote Originally Posted by BlackStar View Post
    This code will work correctly iff the driver follows the revised OpenGL 3.0 specs for glGetString and returns the version directly, i.e. "2.1".

    Right now, Mesa returns a string in the form "1.4 (Mesa 2.1)". Your code will parse this as (major, minor) = (1, 4), when the actual OpenGL version is 2.1 (1.4 is the server version, IIRC).
    You are absolutely mistaken.

    The string returned is "<OpenGL version> Mesa <Mesa version>". The string in the sample code which I posted above is what is returned by the r300 driver: "1.5 Mesa 7.6-devel". This means that the supported OpenGL version is 1.5, provided by a Mesa 7.6 development version.

    This is absolutely parsed correctly by both the atoi code I posted and the code that GLEE uses.

    Note that I'm talking about the string returned by glGetString(GL_VERSION). This is the final supported OpenGL version, which has nothing to do with the GLX version or with how the OpenGL version is negotiated conceptually between client and server.
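
    (The atoi code referenced above was posted earlier in the thread and isn't quoted on this page; a minimal sketch of the same idea, fed the r300 example string, would be:)

    Code:
    /* Sketch of the atoi-style parse under discussion -- not the exact
     * code posted earlier in the thread, just the same idea. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* In a real program this comes from glGetString(GL_VERSION)
         * with a current context; the r300 example string is used here. */
        const char *version = "1.5 Mesa 7.6-devel";
        int major = atoi(version);                  /* "1.5 ..." -> 1 */
        int minor = atoi(strchr(version, '.') + 1); /* "1.5 ..." -> 5 */
        printf("OpenGL %d.%d\n", major, minor);
        return 0;
    }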

    Edit: Oh, and there may be some strings in glxinfo which suggest that Mesa supports OpenGL 2.1 (I can't test right now, not at home) - which is true, but completely beside the point. As long as the hardware driver only supports OpenGL 1.5, OpenGL 1.5 is what you will get, and that is what the version string correctly tells you. If you try to call 2.x functions, your program will crash. So again, everybody has always parsed the OpenGL version string like this, and it has always been correct. I'm really curious where this misconception comes from.
    Last edited by nhaehnle; 09-04-2009 at 10:06 AM.

  7. #47
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,129

    Default

    Quote Originally Posted by nhaehnle View Post
    You are absolutely mistaken.

    The string returned is "<OpenGL version> Mesa <Mesa version>". The string in the sample code which I posted above is what is returned by the r300 driver: "1.5 Mesa 7.6-devel". This means that the supported OpenGL version is 1.5, provided by a Mesa 7.6 development version.

    This is absolutely parsed correctly by both the atoi code I posted and the code that GLEE uses.
    Checking back to my code comments from 2007, I found the following:

    On Mesa 7.0.1, Mesa/soft returns: "1.4 (2.1 Mesa 7.0.1)". On the same hardware (R500), install fglrx 8.2 and you get "2.1.7281 ...". Change to indirect and you get "1.4 (2.1.7281 ...)"

    So how do you interpret those strings? 1.4 only makes sense as the server version, because Mesa 7.0 sure as hell isn't limited to 1.4 in software rendering - and neither is R500 w/ fglrx.

    Misconception? Maybe.

    For the record, I *did* add a workaround to parse the "2.1" part of the strings above and the program worked correctly.
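
    (That workaround isn't quoted here; a minimal sketch of that kind of fix, preferring the parenthesized version when one is present, might look like:)

    Code:
    /* Sketch of such a workaround -- not the actual code: if the version
     * string contains "(x.y ...)", prefer the parenthesized part. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void parse_gl_version(const char *version, int *major, int *minor)
    {
        const char *paren = strchr(version, '(');
        /* "1.4 (2.1 Mesa 7.0.1)" -> parse from "2.1 Mesa 7.0.1)" */
        const char *start = paren ? paren + 1 : version;
        *major = atoi(start);
        *minor = atoi(strchr(start, '.') + 1);
    }

    int main(void)
    {
        int major, minor;
        parse_gl_version("1.4 (2.1 Mesa 7.0.1)", &major, &minor);
        printf("parsed as %d.%d\n", major, minor);  /* prints 2.1 */
        return 0;
    }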
    Last edited by BlackStar; 09-04-2009 at 10:55 AM.

  8. #48
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Quote Originally Posted by deanjo View Post
    Only certain types of applications (those that benefit from parallel data operations) can be sped up, and that of course is only if the application is coded to take advantage of OpenCL (and coded in a manner that doesn't actually hurt performance).
    Do you mean that the extra CPU burden/time for getting the compute kernel to the graphics card and back again has to be less than letting the CPU do the calculation(s) itself?

  9. #49
    Join Date
    Aug 2008
    Posts
    77

    Default

    Quote Originally Posted by BlackStar View Post
    Checking back to my code comments from 2007, I found the following:

    On Mesa 7.0.1, Mesa/soft returns: "1.4 (2.1 Mesa 7.0.1)". On the same hardware (R500), install fglrx 8.2 and you get "2.1.7281 ...". Change to indirect and you get "1.4 (2.1.7281 ...)"

    So how do you interpret those strings? 1.4 only makes sense as the server version, because Mesa 7.0 sure as hell isn't limited to 1.4 in software rendering - and neither is R500 w/ fglrx.
    The version reported by Mesa/soft looks wrong. This may have been a Mesa bug, which has clearly been fixed since then. But a bug is a bug - under the changed specification where only the version is reported, that Mesa version would probably have reported "1.4" instead of "2.1". Then you wouldn't even have been able to add your workaround.

    The fglrx version string looks perfectly okay.

    As for the indirect version string, it's hard to judge what's going on from a distance.

    Another possible explanation is that when you chose Mesa/soft, you actually got an indirect rendering context instead of a direct rendering software context. Then it might have been a libGL bug. Or maybe it wasn't a bug at all because not all parts of OpenGL 2.1 were properly supported in the GLX protocol? Maybe you were simply lucky in that the subset you used worked correctly.

    In any case, this evidence tends to be in favor of having a non-restricted version string.

  10. #50
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,583

    Default

    Quote Originally Posted by V!NCENT View Post
    Do you mean that the extra CPU burden/time for getting the compute kernel to the graphics card and back again has to be less than letting the CPU do the calculation(s) itself?
    No, more that the algorithm itself has to be of a parallel nature. For example, there are many ways of calculating pi: some algorithms are serial in nature, others are parallel. You could get a serial algorithm running on a GPU, but the results may be slower because of the code's serial nature; if you switch to an algorithm that is parallel in nature, you can see massive speed gains. A small sketch of the parallel-friendly shape follows below.
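
    To make that concrete, here is a small sketch (plain C, not actual OpenCL) of a pi computation whose terms are all independent -- exactly the shape that maps well onto GPU work-items:

    Code:
    /* Midpoint-rule approximation of pi = integral of 4/(1+x^2) on [0,1].
     * Every term of the sum depends only on i, never on a previous result,
     * so an OpenCL kernel could compute one term per work-item. */
    #include <stdio.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) / n;     /* midpoint of the i-th slice */
            sum += 4.0 / (1.0 + x * x);   /* independent of other slices */
        }
        printf("pi ~= %.9f\n", sum / n);
        return 0;
    }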
