
Thread: LLVMpipe's Geometry Processing Pipeline Kicks

  1. #21
    Join Date
    Jan 2009
    Posts
    88

    Default

    Yes, LLVM was used this way by Apple in their OpenGL stack.

    http://www.dzone.com/rsslinks/apples...he_scenes.html

  2. #22
    Join Date
    Jul 2009
    Posts
    31

    Default

    Quote Originally Posted by not.sure View Post
    Wasn't LLVM also planned to be used to compile/optimize/generate shader code for specific GPUs? Or is that something entirely different?
    I want this answered as well because, aside from the general answer, I'm most curious how this would work for VLIW designs if LLVM has no support whatsoever for them.

  3. #23
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,440

    Default

    I'm not aware of anyone using LLVM to generate shader code for GPUs right now, VLIW or scalar.

    LLVM is being used to generate optimized graphics IR, and is also being used to convert that IR into x86 code, but that's it AFAIK.

  4. #24
    Join Date
    Oct 2008
    Posts
    3,102

    Default

    There has been some talk about changing the current

    Gallium IR -> GPU compiled code

    to

    Gallium IR -> LLVM -> Gallium IR -> GPU compiled code

    which would avoid the need for modifying LLVM to work with VLIW architectures while still allowing the general optimizations to be done. That would also instantly work for all hardware, instead of requiring new LLVM code for every new card.
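    Just to make the idea concrete, here's a very rough C sketch of what that round trip could look like. This is not actual Mesa/Gallium code: the tgsi_to_llvm()/llvm_to_tgsi() translators are hypothetical placeholders, and only the LLVM-C pass-manager calls are real API.

    /* Sketch only: run LLVM's generic, target-independent optimizations on a
     * shader and hand the result back to the driver's existing GPU backend.
     * tgsi_to_llvm() and llvm_to_tgsi() are made-up names, not Gallium API. */
    #include <llvm-c/Core.h>
    #include <llvm-c/Transforms/Scalar.h>

    struct tgsi_token;  /* Gallium IR tokens (opaque here) */

    /* Hypothetical translators */
    LLVMModuleRef tgsi_to_llvm(const struct tgsi_token *tokens);
    const struct tgsi_token *llvm_to_tgsi(LLVMModuleRef mod);

    const struct tgsi_token *
    optimize_shader(const struct tgsi_token *tokens)
    {
        LLVMModuleRef mod = tgsi_to_llvm(tokens);      /* Gallium IR -> LLVM IR */

        LLVMPassManagerRef pm = LLVMCreatePassManager();
        LLVMAddPromoteMemoryToRegisterPass(pm);        /* generic optimizations,  */
        LLVMAddGVNPass(pm);                            /* no GPU-specific backend */
        LLVMAddCFGSimplificationPass(pm);              /* work in LLVM needed     */
        LLVMRunPassManager(pm, mod);
        LLVMDisposePassManager(pm);

        return llvm_to_tgsi(mod);                      /* LLVM IR -> Gallium IR; the
                                                          driver's own code generator
                                                          takes it from here */
    }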

  5. #25
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,072

    Default

    Are there plans to make llvmpipe the default software rasterizer?

  6. #26
    Join Date
    Jan 2009
    Posts
    621

    Default

    Quote Originally Posted by wswartzendruber View Post
    Won't r300g utilize this for the parts of OpenGL 3 that require unimplemented functionality?
    r300g won't support OpenGL 3. We try as much as possible not to use any kind of software fallback: a dumb app may suddenly decide to use more features, and then the driver would pretty much become a software rasterizer. Nobody wants that. Moreover, this article is only about vertex processing using LLVM, which cannot be used for GL3 fragment processing. Anyway, it appears to be a lot slower than old r500 hardware but still faster than swrast.

    Quote Originally Posted by rohcQaH View Post
    openGL-call -> geometry shaders -> vertex shaders -> pixel shaders -> final image
    This is wrong; the geometry shader comes after the vertex shader.

    Quote Originally Posted by curaga View Post
    Are there plans to make llvmpipe the default software rasterizer?
    Well, it's logical, isn't it?

  7. #27
    Join Date
    Nov 2008
    Posts
    766

    Default

    Quote Originally Posted by marek View Post
    This is wrong, the geometry shader comes after the vertex shader.
    Thanks for the correction.

    I haven't found much information about geometry shaders on the web except for dry technical specs. If you have any good links, please share.

  8. #28
    Join Date
    Aug 2008
    Location
    California, USA
    Posts
    196

    Default

    So a Gallium3D driver like r300g can straight-up disallow any software fallback?

  9. #29
    Join Date
    Dec 2007
    Posts
    2,360

    Default

    The idea with Gallium is all or nothing. As previously noted, fallbacks are usually slower than just rendering the whole pipeline with the CPU directly, so if the GPU can't handle something, just do the whole thing on the CPU.

  10. #30
    Join Date
    Jan 2009
    Posts
    621

    Default

    Quote Originally Posted by rohcQaH View Post
    I haven't found much information about geometry shaders on the web except for dry technical specs. If you got any good links, please share.
    Well, the dry technical GL_ARB_geometry_shader4 specification is as good as it gets. The most widespread misconception about geometry shaders is that they're a good match for tessellation; they really aren't and never have been, and there are specialized shader stages for that in GL4.

    The geometry shader simply consumes one primitive of some type (points, lines, triangles) and emits one or more primitives of another type. It allows for converting point sprites and wide lines to triangles (pretty useless in GL), or for generating lines for cel-shading. The really important feature is that for each emitted primitive you can choose the render target it should go to. This allows rendering a scene to several textures, each time from a different position and orientation in space, using *one* draw call, making it possible to render to a whole cubemap or 3D texture in a single pass. You also have read-only access to a couple of surrounding primitives, but that doesn't seem to be very useful (you cannot even compute smooth normals with it). There are many applications, but most of them are rather non-obvious, and generally geometry shaders aren't as useful as they have been claimed to be. It's certainly the most useless shader stage, and I think it's useless in general; ask any professional game engine developer and he will tell you....
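    For the render-target selection part, here's a minimal sketch (assuming a GL 3.2 core context with GLSL 1.50, GLEW for the entry points, and a made-up face_mvp uniform array) of how one draw call can cover all six cubemap faces; the geometry shader duplicates each triangle and routes each copy with gl_Layer:

    /* Sketch only: render to a whole cubemap in one pass.
     * Assumes a GL 3.2 core context; GLEW supplies the entry points.
     * face_mvp[] (one view-projection matrix per face) is hypothetical. */
    #include <GL/glew.h>

    static const char *cube_face_gs =
        "#version 150\n"
        "layout(triangles) in;\n"
        "layout(triangle_strip, max_vertices = 18) out;\n"
        "uniform mat4 face_mvp[6];\n"
        "void main() {\n"
        "    for (int face = 0; face < 6; ++face) {\n"
        "        gl_Layer = face;  // choose the cube face (render target layer)\n"
        "        for (int i = 0; i < 3; ++i) {\n"
        "            gl_Position = face_mvp[face] * gl_in[i].gl_Position;\n"
        "            EmitVertex();\n"
        "        }\n"
        "        EndPrimitive();\n"
        "    }\n"
        "}\n";

    /* Attach the cubemap as a *layered* color target so gl_Layer takes effect. */
    static void attach_layered_cubemap(GLuint fbo, GLuint cubemap_tex)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, cubemap_tex, 0);
    }

    The vertex shader would then just pass positions through untransformed and let the geometry shader apply the per-face matrix.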

    Quote Originally Posted by wswartzendruber View Post
    So a Gallium3D driver like r300g can straight-up disallow any software fallback?
    Currently it's impossible for a Gallium driver to fall back to software entirely, so there is nothing to disallow. The GL state tracker does have some fallbacks, but it's unlikely you would really hit either of them. The meta-driver called failover was originally designed for switching between a hardware and a software driver on the fly, but it has been unmaintained and rotting for a couple of years now.
