Thread: GPU Drivers, Crocodile Petting Zoos & LLVMpipe

  1. #1
    Join Date
    Jan 2007
    Posts
    14,759

    Default GPU Drivers, Crocodile Petting Zoos & LLVMpipe

    Phoronix: GPU Drivers, Crocodile Petting Zoos & LLVMpipe

    Zack Rusin has written a new blog post where he compares writing free software graphics drivers to running a crocodile petting zoo and wireless bungee jumping...

    http://www.phoronix.com/vr.php?view=ODM5MQ

  2. #2
    Join Date
    Aug 2008
    Location
    California, USA
    Posts
    196

    Default

    Eh, wouldn't that slow things down a lot?

  3. #3
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,451

    Default

    One possible interpretation of Zack's post is that he is proposing to replace TGSI with LLVM IR as the common language of the driver stack. Drivers could either go directly from LLVM IR to GPU code or could use an existing IR as an intermediate step (e.g. TGSI, Mesa IR, or something like the "il" we use in the proprietary stack).

    Just a guess, but if that were the case it wouldn't necessarily slow things down, though it might make the shader compiler portion of the driver larger and more complex. Hard to say.
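
    To make that concrete, here's a minimal sketch in C of the two paths. Every type and function name below is made up for illustration; none of this is real Mesa/Gallium API.

    #include <stddef.h>

    typedef struct { const char *ir_text; } llvm_module;  /* stand-in for an LLVM IR module */
    typedef struct { const char *ir_text; } driver_ir;    /* e.g. TGSI, Mesa IR, or an "il" */
    typedef struct { const void *code; size_t size; } gpu_binary;

    /* Path 1: the back-end consumes LLVM IR directly. */
    static gpu_binary codegen_from_llvm(const llvm_module *m)
    {
        gpu_binary b = { m->ir_text, 0 };  /* stub; a real driver emits ISA here */
        return b;
    }

    /* Path 2: lower LLVM IR to the driver's existing IR, then reuse the
       current back-end unchanged. */
    static driver_ir lower_llvm_to_driver_ir(const llvm_module *m)
    {
        driver_ir d = { m->ir_text };      /* stub; a real pass translates here */
        return d;
    }

    static gpu_binary codegen_from_driver_ir(const driver_ir *d)
    {
        gpu_binary b = { d->ir_text, 0 };
        return b;
    }

    gpu_binary compile_shader(const llvm_module *m, int backend_speaks_llvm)
    {
        if (backend_speaks_llvm)
            return codegen_from_llvm(m);           /* LLVM IR -> GPU code   */
        driver_ir d = lower_llvm_to_driver_ir(m);  /* LLVM IR -> e.g. TGSI  */
        return codegen_from_driver_ir(&d);         /* e.g. TGSI -> GPU code */
    }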

  4. #4
    Join Date
    Aug 2008
    Location
    California, USA
    Posts
    196

    Default

    Good thing I haven't gotten around to studying Gallium3D drivers yet.

  5. #5
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,451

    Default

    It's also possible that Zack is talking about keeping TGSI and going back to using LLVM as part of the standard GPU shader compiler, in which case we would have:

    GLSL => TGSI => LLVM IR => GPU shader code

    or, if you're into compute:

    OpenCL C99 => LLVM IR => TGSI => LLVM IR => GPU shader code
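
    As a sketch, each arrow above can be read as one translation stage. In C, with every name hypothetical rather than a real Mesa entry point:

    typedef struct { const char *text; } glsl_src, tgsi_ir, llvm_ir, gpu_code;

    static tgsi_ir  glsl_to_tgsi(glsl_src s)     { tgsi_ir  r = { s.text }; return r; }
    static llvm_ir  tgsi_to_llvm(tgsi_ir t)      { llvm_ir  r = { t.text }; return r; }
    static gpu_code llvm_to_gpu(llvm_ir l)       { gpu_code r = { l.text }; return r; }
    static llvm_ir  clc_to_llvm(const char *src) { llvm_ir  r = { src };    return r; }
    static tgsi_ir  llvm_to_tgsi(llvm_ir l)      { tgsi_ir  r = { l.text }; return r; }

    /* GLSL => TGSI => LLVM IR => GPU shader code */
    gpu_code compile_graphics(glsl_src s)
    {
        return llvm_to_gpu(tgsi_to_llvm(glsl_to_tgsi(s)));
    }

    /* OpenCL C99 => LLVM IR => TGSI => LLVM IR => GPU shader code */
    gpu_code compile_compute(const char *src)
    {
        return llvm_to_gpu(tgsi_to_llvm(llvm_to_tgsi(clc_to_llvm(src))));
    }

    Note the compute path's double translation, LLVM IR to TGSI and back again; that round trip is part of what makes the choice of a single common IR interesting.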

    I guess I should stop looking through Evergreen Mesa code and go read some Gallium3D code, but I'd kinda like to get Evergreen support out first.

  6. #6
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,451

    Default

    The thing I don't really understand is how being able to run shaders is going to help without the rest of the hardware driver. On ATI graphics at least you need to set up things like the colour buffer to handle writeback of the results, and once you do that you might as well set up the depth buffer and texture samplers...

    ... and at that point you've pretty much got a GL driver.

    Still, I think the idea is good... looking for the smallest possible bit of porting required to get a modern stack running on new hardware, then adding features from there. I guess Zack is thinking about running texture processing in shader code, which is an interesting idea if you can get the addressing set up right. It wouldn't be as fast as dedicated hardware but would let you get something running and usable faster.
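
    As a rough illustration of what "texture processing in shader code" could look like, here's the sampler's job done with plain arithmetic; C standing in for shader code, with nearest filtering, clamp-to-edge, and a linear RGBA8 layout all assumed for simplicity.

    #include <stdint.h>

    typedef struct {
        const uint32_t *texels;  /* RGBA8 texels, row-major (real GPUs usually tile) */
        int width, height;
    } texture;

    static int clampi(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* Sample at normalized (u, v) in [0, 1]. The y * width + x address
       computation is exactly the part that has to match the texture's
       real memory layout; getting that right is the hard bit. */
    uint32_t sample_nearest(const texture *t, float u, float v)
    {
        int x = clampi((int)(u * (float)t->width),  0, t->width  - 1);
        int y = clampi((int)(v * (float)t->height), 0, t->height - 1);
        return t->texels[y * t->width + x];
    }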

  7. #7
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by bridgman View Post
    I guess Zack is thinking about running texture processing in shader code, which is an interesting idea if you can get the addressing set up right. It wouldn't be as fast as dedicated hardware but would let you get something running and usable faster.
    Very nice idea. Would doing this bypass the patent issue around VIA's texture compression patent in OpenGL 4?
