
Thread: AMD Releases OpenCL ATI GPU Support For Linux

  1. #21
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,129

    Default

    I can't make sense of that video. NVidia announced C++ support. You claim this won't happen? Sounds a bit strange to me, so I suppose this video is by people who spread too much FUD or something.
    As the previous posters said, the issue is that Nvidia is having trouble producing the actual cards. This video showed the (failed) damage control they are trying to do: the features sound great on paper, but they are useless without the actual hardware to run them - and the hardware is nowhere to be found (edit: the card and the videos they showed are obviously fake).

    Every indication is that they won't release sooner than Q1 or Q2 of 2010.

    Personally, I don't doubt that we'll see some form of C++ running on those GPUs. However, I believe this will be *far* from what C++ looks like on the CPU - I doubt we'll see stuff like multiple inheritance (or even single virtual inheritance), partial template specialization or any of the other features that make C++ more than C-with-classes. Don't expect to take your favorite C++ application (e.g. Firefox) and compile it for the GPU anytime soon (before Larrabee at least).

    Note that DX11 already supports C-with-classes, so Nvidia won't be bringing anything new to the table...
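    For anyone unfamiliar with the features the poster lists, here is a tiny CPU-side C++ sketch (my own illustration, nothing to do with any GPU compiler) of what "more than C-with-classes" means - partial template specialization resolved at compile time, and virtual dispatch resolved at runtime:

    ```cpp
    #include <cassert>

    // Partial template specialization: the general template says two types
    // differ; the partial specialization matches when both are the same type.
    template <typename A, typename B>
    struct IsSamePair { static const bool same = false; };

    template <typename A>
    struct IsSamePair<A, A> { static const bool same = true; };

    // Virtual dispatch through a base pointer: the call is resolved at
    // runtime via the vtable, not at compile time.
    struct Shape {
        virtual int sides() const { return 0; }
        virtual ~Shape() {}
    };
    struct Triangle : Shape {
        int sides() const { return 3; }
    };

    int main() {
        assert(!(IsSamePair<int, float>::same));  // different types
        assert((IsSamePair<int, int>::same));     // partial specialization wins
        Triangle t;
        Shape* s = &t;
        assert(s->sides() == 3);  // dynamic dispatch picks Triangle::sides
        return 0;
    }
    ```

    Both features require compiler and (for virtual dispatch) runtime machinery - indirect calls through function-pointer tables - which is exactly the kind of thing early GPU toolchains left out.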

  2. #22
    Join Date
    Jan 2008
    Posts
    185

    Default

    Eh, screw C++. I don't need a fully general purpose GPU, I just want apps that make sense to accelerate, to be accelerated. Like video decoding...

  3. #23
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by rohcQaH View Post
    Why? Do you suddenly need OpenCL to live? I'd expect another year to pass before OpenCL is used by widespread software. If your card can do what you need right now, keep it.
    I'm hoping for the open source driver... I think the open source driver will support OpenCL on an HD 38xx.

  4. #24

    Question So what?

    "ATI releases OpenCL ATI GPU support for Linux"

    So what? What does this mean? What's OpenCL good for? What applications might start to take advantage of this? Why should I care?

    That's what I'd really like to read in the article.

  5. #25
    Join Date
    Oct 2007
    Posts
    22

    Default

    So ATI purposely screws over its previous generation yet again. First by not providing stable drivers, now by providing OpenCL support only to the newest cards. ATI's OpenCL implementation should have provided support for all graphics cards that they support in their drivers. Oh well, yet another reason why AMD doesn't deserve any more of my support or money.

  6. #26
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,434

    Default

    Quote Originally Posted by alazyworkaholic View Post
    "ATI releases OpenCL ATI GPU support for Linux"

    So what? What does this mean? What's OpenCL good for? What applications might start to take advantage of this? Why should I care? That's what I'd really like to read in the article.
    If you follow the "AMD Developer Central" link Michael provided, and scroll down to "Related Resources", you will see a number of links providing overview information for OpenCL.

    Quote Originally Posted by LavosPhoenix View Post
    So ATI purposely screws over its previous generation yet again. First by not providing stable drivers, now by providing OpenCL support only to the newest cards. ATI's OpenCL implementation should have provided support for all graphics cards that they support in their drivers. Oh well, yet another reason why AMD doesn't deserve any more of my support or money.
    I believe the issue here is that OpenCL (an industry standard developed in 2008 to provide an open standard for computing in 2010 and beyond) requires hardware capabilities, needed for full compliance, that were not included in our 2006 and 2007 GPUs. It's probably possible to do some kind of subset implementation, and I'm sure the open source drivers will implement one anyway, but right now I don't know how useful an implementation without the local and global data share memories would be.

    I guess I'd better apologize for DX11 right now before things get any worse
    Last edited by bridgman; 10-14-2009 at 12:22 PM.
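    To make bridgman's point concrete: the "local data share" is fast on-chip memory that threads in a work-group use to exchange partial results. A rough CPU-side C++ sketch of the pattern (my own simplification, not AMD's implementation) is a tree reduction over a shared scratch buffer - without that scratch memory, every intermediate step would have to round-trip through slow global memory:

    ```cpp
    #include <cassert>
    #include <vector>

    // CPU analogy of a work-group reduction: each "thread" tid combines
    // pairs of elements in a shared scratch buffer, halving the number of
    // active threads each step. Input size must be a power of two here.
    int workgroup_sum(const std::vector<int>& global_input) {
        // Step 1: each thread copies one element from "global" memory
        // into the fast "local data share".
        std::vector<int> local_scratch(global_input);
        // Step 2: tree reduction entirely inside the scratch buffer.
        for (size_t stride = local_scratch.size() / 2; stride > 0; stride /= 2)
            for (size_t tid = 0; tid < stride; ++tid)
                local_scratch[tid] += local_scratch[tid + stride];
        return local_scratch[0];  // thread 0 holds the final sum
    }

    int main() {
        std::vector<int> data = {1, 2, 3, 4, 5, 6, 7, 8};
        assert(workgroup_sum(data) == 36);
        return 0;
    }
    ```

    In OpenCL this corresponds to `__local` memory plus barriers between steps; a GPU without that hardware could emulate it, but the performance point of OpenCL would largely be lost.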

  7. #27
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,129

    Default

    Quote Originally Posted by LavosPhoenix View Post
    So ATI purposely screws over its previous generation yet again. First by not providing stable drivers, now by providing OpenCL support only to the newest cards. ATI's OpenCL implementation should have provided support for all graphics cards that they support in their drivers. Oh well, yet another reason why AMD doesn't deserve any more of my support or money.
    Did it ever cross your mind that those GPUs may not support the hardware features necessary for OpenCL in the first place? No, probably not.

  8. #28
    Join Date
    Jul 2008
    Posts
    1,725

    Default

    I really don't buy into 'C++ in hardware'. Are you guys sure you aren't misinterpreting something? Something like 'the CUDA compiler can eat C++ and turn it into the instruction stream needed by the card'?

  9. #29
    Join Date
    Jan 2008
    Posts
    772

    Default

    Quote Originally Posted by energyman View Post
    I really don't buy into 'C++ in hardware'. Are you guys sure you aren't misinterpreting something? Something like 'the CUDA compiler can eat C++ and turn it into the instruction stream needed by the card'?
    Well, that would legitimately be "C++ on GPU", but the notion of that somehow being tied to "real applications" running on the GPU makes no sense to me. No matter what language a GPU-targeted compiler supports, a GPU is not going to have the general-purpose libraries, OS facilities, or I/O capabilities demanded by "real applications". To the extent that applications use any kind of C++-on-GPU capability, I don't see how it would be any more or less "real" than using OpenCL, Cg, or whatever.

  10. #30
    Join Date
    Oct 2008
    Posts
    3,096

    Default

    Quote Originally Posted by energyman View Post
    I really don't buy into 'C++ in hardware'. Are you guys sure you aren't misinterpreting something? Something like 'the CUDA compiler can eat C++ and turn it into the instruction stream needed by the card'?
    What NVidia is saying is that certain C++ features weren't possible in previous generations because the hardware lacked the necessary support. See this anandtech article for some more info: http://www.anandtech.com/video/showdoc.aspx?i=3651&p=6

    In previous architectures there was a different load instruction depending on the type of memory: local (per thread), shared (per group of threads) or global (per kernel). This created issues with pointers and generally made a mess that programmers had to clean up.

    Fermi unifies the address space so that there's only one instruction and the address of the memory is what determines where it's stored. The lowest bits are for local memory, the next set is for shared and then the remainder of the address space is global.

    The unified address space is apparently necessary to enable C++ support for NVIDIA GPUs, which Fermi is designed to do.
    Fermi implements a wide set of changes to its ISA, aimed primarily at enabling C++ support. Virtual functions, new/delete, and try/catch are all parts of C++ that are enabled on Fermi.
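    The "address value selects the memory" idea above can be sketched in a few lines of C++. This is a toy model - the region boundaries here are invented for illustration and are not Fermi's actual layout - but it shows how one generic load instruction can be steered to local, shared, or global memory purely by where the address falls:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <string>

    // Hypothetical unified address map (boundaries are made up):
    // lowest addresses back per-thread local memory, the next range backs
    // per-group shared memory, and everything above is global memory.
    const uint64_t LOCAL_END  = 1ull << 16;
    const uint64_t SHARED_END = 1ull << 20;

    // One "load" path: the hardware inspects the address, not the opcode,
    // to decide which physical memory services the access.
    std::string memory_space(uint64_t addr) {
        if (addr < LOCAL_END)  return "local";
        if (addr < SHARED_END) return "shared";
        return "global";
    }

    int main() {
        assert(memory_space(0x100)    == "local");
        assert(memory_space(0x20000)  == "shared");
        assert(memory_space(0x400000) == "global");
        return 0;
    }
    ```

    This is why the unified address space matters for C++: a plain pointer can now refer to any of the three memories, so pointer-heavy features (virtual functions, passing pointers between functions) no longer need the compiler to know at compile time which memory a pointer targets.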
