
Thread: Khronos Releases OpenGL 3.3 & OpenGL 4.0

  1. #41
    Join Date
    Oct 2009
    Posts
    111

    Default

    Quote Originally Posted by deanjo View Post
    Can you please be specific about what openCL bug you have found? I've been using them for quite some time and have yet to come across an issue with them in openCL development.
    "Real" bugs I have found is memleaking of the AMD driver, did not try that with the NVida though.

    Other problems I have found come from AMD and Nvidia handling implicit conversions differently. AMD converts implicitly (which the spec allows), so e.g. cos(2) works, while Nvidia does not; there you have to convert the argument yourself.

    Furthermore, AMD accepts explicit casts between vector types, e.g.
    uint4 u; ...
    float4 f; ...
    f = (float4)(u);
    I guess this is intended behaviour, though I would consider it a bug, since it is specifically forbidden in the spec.

    Just these two examples result in code that works on AMD devices and Intel CPUs, yet not on Nvidia GPUs, and that is bad.

    In fact, once you know that Nvidia does no implicit conversion and that AMD ignores the spec in the example above, you can simply avoid these features. Still, it leaves a bad taste in the mouth and makes me wonder what else in the spec is not supported, or is silently ignored, by whichever implementation.
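
    For reference, here is a rough sketch of how both examples can be written so that they should compile on either implementation (the kernel name and argument are made up purely for illustration; convert_float4() is the built-in the spec provides for vector-to-vector conversion):

    // Hypothetical kernel, only to illustrate the two conversions above.
    __kernel void conversion_demo(__global float4 *out)
    {
        // cos(2) relies on an implicit int -> float conversion, which
        // Nvidia rejects; passing a float literal avoids the issue.
        float c = cos(2.0f);

        // Explicit casts between vector types are forbidden by the spec,
        // so instead of (float4)(u) use the convert_* built-ins.
        uint4 u = (uint4)(1, 2, 3, 4);
        float4 f = convert_float4(u);

        // Broadcast the scalar explicitly instead of relying on widening.
        out[0] = f + (float4)(c);
    }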

  2. #42
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,142

    Default

    We have been having problems getting async readbacks to work on Nvidia, too. They seem to work fine on AMD's implementation, but Nvidia blocks as if it were a blocking (synchronous) readback.

    Not good.
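
    For anyone who wants to reproduce this, the pattern is roughly the one below (a minimal host-side sketch only; the caller is assumed to own a valid queue, buffer and host pointer):

    #include <CL/cl.h>

    /* Minimal sketch of the non-blocking readback pattern discussed above. */
    static cl_int async_readback(cl_command_queue queue, cl_mem buf,
                                 size_t size, void *host_ptr, cl_event *done)
    {
        /* CL_FALSE requests a non-blocking read: the call should return
         * immediately and signal 'done' once the copy has finished.
         * The complaint is that Nvidia appears to block right here anyway. */
        return clEnqueueReadBuffer(queue, buf, CL_FALSE, 0, size, host_ptr,
                                   0, NULL, done);
    }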

  3. #43
    Join Date
    Oct 2009
    Posts
    111

    Default

    Another minor thing: at least here, Nvidia often does not return the correct error codes as defined in the header file.
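
    To be concrete, I mean checks like the following (the call is just an example, ctx and size are assumed to exist, and the CL_* names come from CL/cl.h):

    cl_int err;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY, size, NULL, &err);
    if (err != CL_SUCCESS) {
        /* The header defines the codes this call may return, e.g.
         * CL_INVALID_BUFFER_SIZE or CL_OUT_OF_HOST_MEMORY; the value
         * that actually comes back does not always match them. */
        handle_error(err);  /* hypothetical error handler */
    }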

  4. #44
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,142

    Default

    Why am I getting the vibe that Nvidia is trying to shoehorn their OpenCL implementation on top of CUDA, similar to what they did with GLSL on top of Cg? Their GLSL compiler used to be god-awful and caused lots of pain for developers and especially users. It used to accept D3D/HLSL code without an error or warning, for god's sake!

    "(user) Hey, your game doesn't work on my Ati card."
    "(dimwitted dev) I don't test on Ati cards, Ati sucks. They don't support GLSL."
    "(user) Well, games X, Y and Z work and they do use GLSL."
    "(dimwitted dev) So what, the code runs fine here."
    "(smart dev) Let me take a look. Hey, you are using saturate(), implicit casts and other HLSL stuff in GLSL. What the hell is wrong with you?"
    "(dimwitted dev) Err... Ati sucks?"
    "(user) Well, uh, ok I'll go play some other game. Thanks anyway!"

    I just hope they know better this time and this is simply caused by immature drivers.

  5. #45
    Join Date
    Oct 2009
    Posts
    111

    Default

    Also

    float4 array[2] = {(float4)(0.0f), (float4)(0.0f)};

    works on AMD but not on Nvidia.
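
    If it is the aggregate initializer that Nvidia's compiler rejects (an assumption on my part), a workaround that uses only plain assignments would be:

    float4 array[2];
    array[0] = (float4)(0.0f);
    array[1] = (float4)(0.0f);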

  6. #46
    Join Date
    Aug 2007
    Posts
    6,625

    Default

    @BlackStar

    Did you ever try the GLSL renderer with xbmc?

  7. #47
    Join Date
    Oct 2009
    Posts
    111

    Default

    And another thing that works on AMD but not on Nvidia:
    float2 a = (float2)(0.0f);
    a = pown(a, 3);
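
    If what Nvidia rejects is the scalar exponent being widened to a vector (pown() for a float2 takes an int2 exponent per the spec), then spelling the vector out explicitly might be enough:

    float2 a = (float2)(0.0f);
    a = pown(a, (int2)(3));  /* explicit int2 exponent instead of a scalar 3 */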

    Is this all enough to call it _not mature_?
