
Thread: OpenCL Support Atop Gallium3D Is Here, Sort Of

  1. #51
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,130

    Default

    Quote Originally Posted by nhaehnle View Post
    In any case, this evidence tends to be in favor of having a non-restricted version string.
    On the contrary, this proves why glGetInteger(GL_MAJOR) and glGetInteger(GL_MINOR) are superior to any solution involving string parsing.

    These methods can be trivially supported in Mesa 7.6 without advertising OpenGL 3.0. Nvidia and ATI binary drivers expose these methods even on 2.1 contexts - there's no reason why Mesa cannot do the same.

    Edit: Digging around, I think this post is the root of the GL_VERSION issue. The first number in "1.4 (2.1 Mesa 7.0.1)" is the highest OpenGL version that can be officially supported under the current GLX implementation when using indirect rendering. Which means that I either hit a Mesa bug or was just plain lucky by only using supported methods.

    Still, the whole confusion wouldn't even exist with glGetInteger(GL_[MAJOR|MINOR]).
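    For comparison, a minimal sketch of the two approaches (it assumes a current GL context already exists; GL_MAJOR_VERSION / GL_MINOR_VERSION are the actual enum names behind the GL_MAJOR / GL_MINOR shorthand, and the fallback defines are only there for older headers):

    Code:
    #include <stdio.h>
    #include <GL/gl.h>

    #ifndef GL_MAJOR_VERSION
    #define GL_MAJOR_VERSION 0x821B
    #define GL_MINOR_VERSION 0x821C
    #endif

    static void print_gl_version(void)
    {
        GLint major = 0, minor = 0;

        /* Integer queries: no parsing needed, but only guaranteed on a 3.0+
           context; on older contexts they raise GL_INVALID_ENUM and leave
           the zeros untouched. */
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);

        /* String query: always available, but needs parsing, e.g. for
           "1.4 (2.1 Mesa 7.0.1)". */
        const char *version = (const char *)glGetString(GL_VERSION);
        int smajor = 0, sminor = 0;
        if (version)
            sscanf(version, "%d.%d", &smajor, &sminor);

        printf("integer query: %d.%d, string parse: %d.%d\n",
               major, minor, smajor, sminor);
    }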
    Last edited by BlackStar; 09-04-2009 at 12:52 PM.

  2. #52
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Quote Originally Posted by deanjo View Post
    No, more that the algorithm itself has to be of a parallel nature. For example, there are many ways of calculating pi: some algorithms are serial in nature, others are parallel. You could get these algorithms running on a GPU, but the results may be slower because of the code's serial nature; if you switch to an algorithm that is parallel in nature, you can have massive speed gains.
    Or, of course, if you have to do a lot of independent serial calculations and would like to do as many of them at the same time as possible... which is why OpenCL interests me.
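    A tiny hypothetical OpenCL C kernel to illustrate that case: each work-item runs its own serial loop, and the parallelism comes entirely from launching many independent work-items at once (host setup omitted).

    Code:
    /* Hypothetical kernel: the per-element loop stays serial; the speedup
       comes from running many work-items concurrently, not from
       parallelising the loop itself. */
    __kernel void serial_per_item(__global const float *in,
                                  __global float *out,
                                  const int iterations)
    {
        int i = get_global_id(0);
        float x = in[i];

        for (int n = 0; n < iterations; ++n)  /* purely serial per element */
            x = x * 0.5f + 1.0f;

        out[i] = x;
    }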

  3. #53
    Join Date
    Aug 2008
    Posts
    77

    Default

    Quote Originally Posted by BlackStar View Post
    Edit: Digging around, I think this post is the root of the GL_VERSION issue. The first number in "1.4 (2.1 Mesa 7.0.1)" is the highest OpenGL version that can be officially supported under the current GLX implementation when using indirect rendering. Which means that I either hit a Mesa bug or was just plain lucky by only using supported methods.
    Okay, that does make a lot of sense.

    Still, the whole confusion wouldn't even exist with glGetInteger(GL_[MAJOR|MINOR]).
    You do realize that when that is implemented, glGetInteger(GL_MAJOR|GL_MINOR) would return version 1.4 in the situation where the GL_VERSION string is "1.4 (2.1 ...)", right? You would get exactly what you get with atoi or GLee-style parsing.

    So yeah, it might be a little more convenient for application developers, but it wouldn't actually change anything.
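    Concretely, a standalone sketch using the version string quoted above - parsing the leading number reports the same 1.4 that the integer queries would for that context:

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* The GL_VERSION string from the example above. */
        const char *gl_version = "1.4 (2.1 Mesa 7.0.1)";
        int major = 0, minor = 0;

        sscanf(gl_version, "%d.%d", &major, &minor);
        printf("%d.%d\n", major, minor);   /* prints 1.4 */
        return 0;
    }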

  4. #54
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,578

    Default

    Hmm, as in with the principle GL_MAJOR=1, GL_MINOR=2, GL_MAJOR|GL_MINOR=3, then pull the first part for 1, the second part for 2, and the full version for 3?
    Last edited by nanonyme; 09-07-2009 at 12:56 PM.

  5. #55
    Join Date
    Aug 2008
    Posts
    77

    Default

    Quote Originally Posted by nanonyme View Post
    Hmm, as in with the principle GL_MAJOR=1, GL_MINOR=2, GL_MAJOR|GL_MINOR=3, then pull the first part for 1, the second part for 2, and the full version for 3?
    Huh?


    (I need to write at least 10 characters, so yeah: can you elaborate? Because I totally didn't understand your post)

  6. #56
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,578

    Default

    I guess mostly what I had in mind was, e.g., for OpenGL version 1.5.x: glGetVersion(GL_MAJOR) returns 1, glGetVersion(GL_MINOR) returns 5.x, and glGetVersion(GL_MAJOR|GL_MINOR) returns 1.5.x (as in, bitwise OR on 01 (bin) and 10 (bin) => 11 (bin)).
    ps. This is useless.
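    Spelled out anyway as a purely hypothetical sketch (glGetVersion and these flag values exist in no GL header; this only illustrates the bitwise-OR idea, and why the differently shaped results make it awkward):

    Code:
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical flag values, as in the post: 01 (bin) and 10 (bin). */
    #define VER_MAJOR 0x1
    #define VER_MINOR 0x2

    /* Hypothetical query: writes the requested part of a "1.5.x"-style
       version string into buf; everything has to come back as a string,
       because the three cases return differently shaped data. */
    static void get_version(unsigned flags, const char *full,
                            char *buf, size_t len)
    {
        const char *dot = strchr(full, '.');

        if (!dot || (flags & (VER_MAJOR | VER_MINOR)) == (VER_MAJOR | VER_MINOR))
            snprintf(buf, len, "%s", full);                       /* 11 (bin): "1.5.x" */
        else if (flags & VER_MAJOR)
            snprintf(buf, len, "%.*s", (int)(dot - full), full);  /* 01 (bin): "1"     */
        else if (flags & VER_MINOR)
            snprintf(buf, len, "%s", dot + 1);                    /* 10 (bin): "5.x"   */
        else
            buf[0] = '\0';
    }

    int main(void)
    {
        char out[32];

        get_version(VER_MAJOR | VER_MINOR, "1.5.x", out, sizeof out);
        printf("%s\n", out);   /* prints "1.5.x" */
        return 0;
    }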
