
Thread: Gallium3D OpenCL GSoC Near-Final Status Update

  1. #41

    I think the worst idea would be to have multiple IRs; that's just asking for one IR to be well developed (the graphics one, of course) while the other lags behind. I'm not sure which IR would be best. I like LLVM as a compiler, and I like the idea of one all-powerful solution used throughout the system, but the drivers have all been written for TGSI already, and as people have mentioned, LLVM IR may not be capable of describing graphics operations efficiently.

    It may just be best to extend TGSI. It was designed with the intent of making it easy for GPU vendors to port their drivers to interface with it, but that's obviously a pipe dream. GLSL IR could be nice since it's designed to represent GLSL code well, but I'm not sure it represents the actual hardware capabilities well, or other functionality of the devices like compute or video decode.

    There is a lot this IR will need to be used for: obviously graphics, compute, and video decode (which is basically using compute for decode and graphics for display).
    Also, probably the best way to do remote desktops and accelerated virtual-machine desktops is to use a Gallium-based driver model that passes the IR directly.

    My personal favorite option is to make everything compile to a TGSI-like IR that takes into account that it may be passed through a network layer: basically an easily streamable IR capable of describing everything graphics-wise. The driver backends then convert that into native GPU code, and anything that can't be done on the GPU gets converted to LLVM IR to be run on the CPU.

    I don't know if that last part is fully doable, of course, but it's already done in a small sense with the i915g driver: since the hardware doesn't have vertex shaders, there is a break-out in Gallium that allows vertex shaders to be sent to a software driver.
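    To make that last idea a bit more concrete, here is a minimal C sketch of the split I have in mind. Every type and function name in it (stream_ir_shader, gpu_backend_compile, lower_to_llvm_and_jit, and so on) is made up for illustration; none of this is an existing Gallium API. It only shows "compile the streamable IR on the GPU backend when possible, otherwise lower it to LLVM and run it on the CPU":

        /* Hypothetical sketch only -- none of these types or functions exist
         * in Gallium; they stand in for the idea described above. */

        struct stream_ir_shader;     /* the streamable, network-friendly IR      */
        struct gpu_binary;           /* native code produced by a GPU backend    */
        struct cpu_callback;         /* LLVM-JITed fallback that runs on the CPU */
        struct gpu_backend;

        struct compiled_shader {
            struct gpu_binary   *gpu;   /* NULL if the GPU cannot run this stage */
            struct cpu_callback *cpu;   /* non-NULL only when we fell back       */
        };

        /* assumed helpers, declared so the sketch is self-contained */
        int gpu_backend_supports(struct gpu_backend *b, const struct stream_ir_shader *ir);
        struct gpu_binary   *gpu_backend_compile(struct gpu_backend *b, const struct stream_ir_shader *ir);
        struct cpu_callback *lower_to_llvm_and_jit(const struct stream_ir_shader *ir);

        struct compiled_shader
        compile_shader(struct gpu_backend *backend, const struct stream_ir_shader *ir)
        {
            struct compiled_shader out = { NULL, NULL };

            if (gpu_backend_supports(backend, ir)) {
                /* preferred path: backend translates the IR straight to native GPU code */
                out.gpu = gpu_backend_compile(backend, ir);
            } else {
                /* fallback path: the same IR lowered to LLVM and JITed for the CPU,
                 * the way i915g hands vertex shaders to a software path */
                out.cpu = lower_to_llvm_and_jit(ir);
            }
            return out;
        }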

  2. #42

    Hasn't LunarGLASS (http://www.lunarglass.org/) already made an LLVM IR branch compatible with GLSL (new intrinsics and such)? Currently they do GLSL IR -> LLVM IR -> TGSI -> Gallium driver. They do that because they want to evaluate whether they can make LLVM IR suitable, and so far they have been successful. Later TGSI may go away, and there will be one IR for graphics and compute, much like AMDIL. After that, there could be a GLSL compiler based on Clang, the way there is an OpenCL one (ask Steckdenis). One has to think about the future, though: the future when there will be SIMD GPUs, like the AMD HD8xxx family.
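    To make that pipeline concrete, here is a rough C-style sketch of the staged lowering described above (GLSL IR -> LLVM IR -> TGSI -> Gallium driver). The function and type names are invented for illustration and are not LunarGLASS's or Mesa's real entry points:

        /* Illustrative only: the staged lowering GLSL IR -> LLVM IR -> TGSI -> driver.
         * All names below are hypothetical. */

        struct glsl_ir;       /* what the GLSL front end produces          */
        struct llvm_module;   /* LLVM IR extended with graphics intrinsics */
        struct tgsi_tokens;   /* today's Gallium interface IR              */
        struct gallium_driver;

        struct llvm_module *glsl_to_llvm(struct glsl_ir *sh);        /* LunarGLASS front end */
        struct llvm_module *run_llvm_passes(struct llvm_module *m);  /* generic LLVM opts    */
        struct tgsi_tokens *llvm_to_tgsi(struct llvm_module *m);     /* LunarGLASS back end  */
        void driver_bind_shader(struct gallium_driver *drv, struct tgsi_tokens *t);

        void compile_and_bind(struct gallium_driver *drv, struct glsl_ir *shader)
        {
            struct tgsi_tokens *tgsi =
                llvm_to_tgsi(run_llvm_passes(glsl_to_llvm(shader)));
            driver_bind_shader(drv, tgsi);   /* the driver still consumes TGSI, as today */
        }

    If TGSI goes away later, only the last lowering step changes; the drivers would consume the (extended) LLVM IR directly.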

  3. #43

    Quote Originally Posted by Drago View Post
    The future when there will be SIMD GPUs, like the AMD HD8xxx family.
    One minor terminology point -- the AMD GPUs are all SIMD today (SIMD "this way" and VLIW "that way", eg a 16x5 or 16x4 array). In the future, they will still be SIMD, just not SIMD *and* VLIW.

    There are certainly lots of IRs to choose from, each with their own benefits and drawbacks. One more wild card is that our Fusion System Architecture (FSA) initiative will be built in part around a virtual ISA (FSAIL) designed to bridge CPUs and GPUs, so I think we have at least 5 to choose from -- 4 if you lump LunarGLASS and LLVM IR together, since LunarGLASS builds on and extends LLVM IR. I'm assuming nobody is arguing that we should go back to Mesa IR yet.

    It's going to be an interesting few months, and an interesting XDC. I might even learn to like compilers, although that is doubtful.
    Last edited by bridgman; 08-20-2011 at 11:19 AM.

  4. #44

    Quote Originally Posted by bridgman View Post
    I haven't had a chance to watch the video yet (I read a lot faster than I can watch video ) but sounds like a combination of API limitations on texture updates and overhead to transfer new texture data to video memory -- so reading between the lines it sounds like he wants to "update small bits of a bunch of textures" on a regular basis and that means lots of little operations because the API patterns don't match what the app wants to do... and lots of little operations means slow in any language.

    Will post back when I have time to slog through the video. People talk so slo...o...o...owly.
    JC is talking about the MegaTexture technology. The basics: the texture resides somewhere in memory, and the game engine can stream in a new version of the texture (or part of it). Currently you have to make a bunch of GL calls to do that. On consoles you have unified memory, so you can just rewrite the memory location. This technology is impressive, and it should have a more efficient path on the PC, given that APUs are coming with unified memory, or at least a GL extension for discrete GPUs.
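    For reference, the "bunch of GL calls" pattern looks roughly like this. It uses standard OpenGL (glBindTexture / glTexSubImage2D); the tile struct and the loop are just an illustration of updating many small pieces of an already-created texture every frame, which is exactly the "lots of little operations" overhead, whereas on a unified-memory console the engine would simply write the bytes in place:

        #include <GL/gl.h>

        /* hypothetical description of one dirty region of the megatexture */
        struct tile { int x, y, w, h; const void *pixels; };

        static void update_dirty_tiles(GLuint tex, int tile_count, const struct tile *tiles)
        {
            glBindTexture(GL_TEXTURE_2D, tex);
            for (int i = 0; i < tile_count; i++) {
                /* each call is a separate upload the driver must stage into video memory */
                glTexSubImage2D(GL_TEXTURE_2D, 0,
                                tiles[i].x, tiles[i].y,     /* offset into the texture */
                                tiles[i].w, tiles[i].h,     /* small tile size         */
                                GL_RGBA, GL_UNSIGNED_BYTE,
                                tiles[i].pixels);           /* freshly streamed data   */
            }
        }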

  5. #45

    Quote Originally Posted by bridgman View Post
    One minor terminology point -- the AMD GPUs are all SIMD today (SIMD "this way" and VLIW "that way", eg a 16x5 or 16x4 array). In the future, they will still be SIMD, just not SIMD *and* VLIW.
    A more major point - all GPU shader architectures are SIMD.

  6. #46

    Quote Originally Posted by bridgman View Post
    One minor terminology point -- the AMD GPUs are all SIMD today (SIMD "this way" and VLIW "that way", eg a 16x5 or 16x4 array). In the future, they will still be SIMD, just not SIMD *and* VLIW.
    Any better reading on the topic? I thought they were all VLIW, and recently made the jump from VLIW5 to VLIW4.
    Bridgman, you will probably want to empty your private message inbox.

  7. #47

    Quote Originally Posted by Plombo View Post
    A more major point - all GPU shader architectures are SIMD.
    Maybe I am mistaken about the term SIMD. I read that there will be a major redesign with HD8xxx. Let me check.
    There you go: http://www.anandtech.com/show/4455/a...ts-for-compute
    Last edited by Drago; 08-20-2011 at 11:12 AM.

  8. #48

    Quote Originally Posted by Plombo View Post
    A more major point - all GPU shader architectures are SIMD.
    I wasn't sufficiently sure to say "all", but certainly "most" are.

    Drago, my mailbox has room now, thanks. The "full" warning is now at the bottom of the page rather than the top.

  9. #49

    Quote Originally Posted by Drago View Post
    Maybe I am mistaken about the term SIMD. I read that there will be a major redesign with HD8xxx. Let me check.
    There you go: http://www.anandtech.com/show/4455/a...ts-for-compute

    The wording in that article is a bit confusing in places -- they talk about going "from VLIW to non-VLIW SIMD" which can be interpreted in more than one way.

    Adding parentheses for clarity, most people would interpret the string as "(VLIW) to (non-VLIW SIMD)", implying that the SIMD-ness is new, although the correct interpretation is actually "(VLIW to non-VLIW) SIMD", i.e. still SIMD but with each SIMD made up of single-operation blocks rather than VLIW blocks.

    If you look at the diagrams you'll see references to "Cayman SIMDs" and "GCN SIMDs". Getting your head around the idea of a SIMD made up of VLIW blocks is hard, although once you've done that, going away from it is easy.
    Last edited by bridgman; 08-20-2011 at 12:39 PM.
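    A toy C model of that distinction, assuming a 16-lane SIMD as in the 16x4/16x5 description above. This is not how the real hardware is programmed; it is only meant to help picture "a SIMD made up of VLIW blocks" versus a plain SIMD:

        #define LANES 16   /* one SIMD = 16 lanes */

        /* hypothetical stand-ins so the sketch is self-contained */
        typedef int op_t;
        extern op_t bundle[4];                        /* a VLIW4 bundle packed by the compiler */
        extern op_t next_op(void);
        extern void execute_op(int lane, op_t op);

        /* VLIW SIMD (e.g. Cayman, VLIW4): every cycle, each of the 16 lanes executes
         * a bundle of 4 independent operations the compiler had to pack in advance. */
        void vliw4_simd_step(void)
        {
            for (int lane = 0; lane < LANES; lane++)  /* same bundle on every lane */
                for (int slot = 0; slot < 4; slot++)
                    execute_op(lane, bundle[slot]);
        }

        /* Non-VLIW SIMD (e.g. GCN): still 16 lanes in lockstep, but each lane
         * executes a single operation per cycle, so nothing needs packing. */
        void scalar_simd_step(void)
        {
            op_t op = next_op();                      /* one op, broadcast to all lanes */
            for (int lane = 0; lane < LANES; lane++)
                execute_op(lane, op);
        }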

  10. #50

    Quote Originally Posted by bridgman View Post
    "from VLIW to non-VLIW SIMD" which can be interpreted in more than one way.
    Why not call it going from VLIW+SIMD to RISC+VLIW or CISC+VLIW?

    The CISC part could also just be firmware, with the firmware translating the CISC into RISC.

    Modern CPUs are also "Complex Instruction Set Computer" designs with a firmware translator down to a Reduced Instruction Set Computer core, or VLIW plus SIMD SSE units.

    I think the r900 will be one RISC core instead of a VLIW core, with 5 SIMD cores added to the RISC core, and maybe a CISC firmware layer to protect the internal chip logic.
