
Thread: The R600g Driver May Soon Be Working, But Lacks A Compiler

  1. #1
    Join Date
    Jan 2007
    Posts
    14,901

    Default The R600g Driver May Soon Be Working, But Lacks A Compiler

    Phoronix: The R600g Driver May Soon Be Working, But Lacks A Compiler

    Just days ago we reported on the lack of progress with the ATI R600g driver, which intends to provide Gallium3D support for ATI Radeon HD 2000/3000/4000 series graphics cards. Fortunately, today there has been some activity in the Mesa Git repository for this open-source driver, and the lead developer (Jerome Glisse) has issued a statement about its progress and shared a TODO list...

    http://www.phoronix.com/vr.php?view=ODQ0Mg

  2. #2
    Join Date
    Jul 2007
    Posts
    447

    Default The "classic" r600 Mesa driver does not support S3TC textures.

    Some games (*cough WoW*) will crash unless the driver can advertise S3TC texture extensions, but the r600c driver doesn't seem to handle S3TC textures at all yet. The patent-encumbered libtxc_dxtn.so module doesn't appear to be needed for WoW though.

    So the r300c isn't really an option in my case at all.
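
    For what it's worth, here is a minimal sketch of the check a game or test program might perform to see whether the driver advertises S3TC; this assumes a current legacy (pre-3.0) OpenGL context and simply scans the extension string:

    Code:
    /* Hypothetical sketch: does the driver advertise S3TC?
     * Assumes a current legacy GL context, where glGetString(GL_EXTENSIONS)
     * returns the full space-separated extension list. */
    #include <string.h>
    #include <GL/gl.h>

    int has_s3tc(void)
    {
        const char *exts = (const char *) glGetString(GL_EXTENSIONS);
        return exts && strstr(exts, "GL_EXT_texture_compression_s3tc") != NULL;
    }
    If the extension isn't in that list, WoW apparently just crashes rather than falling back to uncompressed textures.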

  3. #3
    Join Date
    Jul 2007
    Posts
    447

    Default I meant that the r600c isn't an option, of course.

    Quote Originally Posted by chrisr View Post
    So the r300c isn't really an option in my case at all.
    And for the record, the "administrator" who has specified that I can only edit a post for that single first minute after posting it is an idiot who needs to be shot.

  4. #4
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,630

    Default

    Let's hope that the r600c shader compiler offers some speed-up for hacking together the shader compiler for r600g. (Unless I've understood wrong, something similar was done for the r300 drivers originally.)

  5. #5
    Join Date
    Jul 2008
    Location
    Germany
    Posts
    681

    Default

    Quote Originally Posted by chrisr View Post
    Some games (*cough WoW*) will crash unless the driver can advertise S3TC texture extensions, but the r600c driver doesn't seem to handle S3TC textures at all yet. The patent-encumbered libtxc_dxtn.so module doesn't appear to be needed for WoW though.
    In my last WoW test everything rendered correctly, but there is a software fallback somewhere and a bug in the driver (the game crashes if you enter OG), and I don't have that library installed.

  6. #6
    Join Date
    Dec 2008
    Posts
    160

    Default

    I understand what compilers are, but where I'm a bit lost right now is what the state of all the compilers in the graphics stack actually is. Over the years people have suggested them, started them, and dropped them, and there have been a bunch of articles lately talking about compilers for different purposes and stacks. The response seems to be "yay! compilers!", and it really isn't clear what tactical versus strategic role they are playing.

    Off the top of my head I can think of:
    + An LLVM-based compiler for Gallium3D to provide faster software OpenGL rendering
    + LLVM compilation of Gallium3D IR code to optimize it before compiling it (using LLVM) into hardware instructions
    + Custom shader and geometry compilers for OpenGL in various drivers (being put into Mesa directly, or into Mesa or Gallium3D drivers separately)
    + But why are we writing custom compilers anyway? If it's for Gallium3D, shouldn't there be a common shader/geometry compiler for OpenGL in the OpenGL state tracker (common to all drivers)? If it's a shader/geometry compiler for the hardware, shouldn't there be some fairly common and generic LLVM-IR based compiler at the bottom of the stack (i.e. only writing the hardware translation layer for LLVM, recognizing there are possibly shortcomings in LLVM for GPU usage that perhaps have only been solved in commercial code bases)?

    I guess it's this - everyone's writing a compiler, yet why are we not moving towards a common driver code base that eliminates as much 'custom' code as possible?


    Could someone clarify the places for compilers in the graphics stack? I understand what a compiler is; it just seems like every day someone is writing yet another compiler for some purpose at a different place in the stack.

  7. #7
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,463

    Default

    Most of the drivers use two compiler stages - one common, and one hardware-specific.

    The first compiler is implemented in common Mesa code, and translates from application shader code to Mesa IR (classic) or TGSI (Gallium3D). Last time I looked, the compiler always generated Mesa IR, and the Gallium3D glue converted Mesa IR to TGSI before passing it to a Gallium3D driver. I don't remember if the same compiler is used for assembly and GLSL shaders or if the compilers are different. Intel devs have been working on an improved version of this common compiler - that's the GLSL2 work you have seen mentioned here.
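
    To make that first stage concrete, here's a rough sketch of the application side of it - just handing GLSL source to the driver through the normal GL API, at which point the common Mesa compiler takes over (the hardware-specific second stage typically runs later, around link or draw time). This assumes a GL 2.0+ context and Mesa-style headers with GL_GLEXT_PROTOTYPES; error handling is trimmed:

    Code:
    /* Rough sketch only: the application-side entry point into the common
     * GLSL compiler.  Assumes a current GL 2.0+ context. */
    #define GL_GLEXT_PROTOTYPES
    #include <stdio.h>
    #include <GL/gl.h>

    static const char *fs_src =
        "void main() {\n"
        "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
        "}\n";

    GLuint compile_fragment_shader(void)
    {
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);   /* front-end: GLSL -> Mesa IR (then TGSI for Gallium3D) */

        GLint ok = 0;
        glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(fs, sizeof(log), NULL, log);
            fprintf(stderr, "shader compile failed: %s\n", log);
        }
        return fs;
    }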

    The second compiler is implemented in each individual hardware driver, and goes from Mesa IR / TGSI to the appropriate instruction set for the target GPU. In the case of r300c/r300g the same compiler is used, by virtue of having the Gallium3D driver convert from TGSI back to Mesa IR before passing to the compiler.

    The shader compiler in r300c/r300g might be called a "real" compiler in the sense that it performs a number of optimization passes on the program before starting to generate code. The shader compiler in the r600 driver is simpler, essentially working on one application shader instruction at a time; however, the nature of the r600-and-higher hardware (variable-length instruction words with up to 5 ALU operations per instruction), combined with the relatively high incidence of vector instructions in shader programs, means that the performance impact of a simpler compiler isn't as bad as you might expect.

    Usage of LLVM has been discussed in a number of places in the stack (including the hardware-specific shader compiler) but AFAIK right now it is only being used when running shader programs on the CPU, ie logically it is part of the "hardware driver for the CPU" aka softpipe. I forget where it actually lives in the source tree.

    I believe Ian (Intel) mentioned the possibility of having their new GLSL2 compiler also generate hardware-level instructions in the future, which would essentially skip over the Mesa IR representation and go straight to hardware. Not sure of the details there but presumably there are some additional optimization possibilities to be had.

  8. #8
    Join Date
    Jan 2009
    Posts
    624

    Default

    Intermediate code representations currently follow this scheme:

    Code:
    GLSL2 IR ---> Mesa IR ---> TGSI (Gallium) IR ---> Hardware-friendly IR

    There are lots of IRs in Mesa, indeed. The following scheme depicts various optimizers:

    Code:
    GLSL2 compiler and optimizer --+
                                   +---> Mesa IR optimizer ->-+
    ARB_fp/vp parser --------------+                          |
                                                              |
                                                              |
    R300 hw-specific optimizer <---+                          |
                                   |                          |
    R600 hw-specific optimizer <---+                          |
                                   +----<----- TGSI <---------+
    NV50 hw-specific optimizer <---+
                                   |
    LLVM for software drivers <----+

    Everybody has a different idea about where optimizations should be done. Gallium people would prefer to have a TGSI optimizer (that one is being worked on), while others (Intel) optimize in GLSL IR and Mesa IR.

  9. #9
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,795

    Default

    How about using fglrx's compiler?

  10. #10
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,463

    Default

    We actually use a similar model in the proprietary drivers - all of the API-specific programs are compiled down to a standard intermediate representation called "il" by the API-specific driver, then a common shader compiler turns the il code into a hardware program optimized for the specific GPU.

    For anyone interested, the "il" spec is available on the stream docco page at :

    http://developer.amd.com/gpu/ATIStre...mentation.aspx

    We might be able to provide the shader compiler as an optional blob, but since the il is different from mesa IR or TGSI there would be another translation stage required.

    I don't think it's worth doing at this point though - AFAIK the hardware efficiency of the shader compiler is about #73 on the list of performance bottlenecks.
