
Thread: Further Testing Shows More Hope For ATI Gallium3D

  1. #21
    Join Date
    Jun 2009
    Posts
    2,932

    Default

    r600g is a different driver from r300g. The hardware architecture changed so much that a completely new driver was needed.

    Many of these things can be ported to r600g because there are similarities, and the developers understand the problem and how to solve it, but it's not exactly the same infrastructure, as far as I understand it.

    Also, AFAIK, r600 and later chips have a lot more horsepower, but are more complicated to program. r300g is already a mature driver, with lots of optimisations having gone into it. r600g is quite recent, and there's still lots of work to do.

  2. #22
    Join Date
    Nov 2008
    Posts
    149

    Question

    When testing the newest open-source bits ... we disabled the swap buffers wait support.
    Isn't this like cheating to make numbers look better? Shouldn't you be testing with out of the box settings only?

  3. #23
    Join Date
    Jul 2008
    Posts
    565

    Default

    Do any of the Gallium drivers yet allow tweaking the GPU speed, as well as configuring things like color correction and anti-aliasing preferences (unless otherwise specified and controlled by 3D apps)? I was just wondering whether everything Catalyst Control Center can do is exposed yet in the open-source drivers. Extra fluff like that can wait until later, of course, but I still thought I'd ask.

  4. #24
    Join Date
    Dec 2009
    Posts
    134

    Default

    Is there a tool that monitors frequency and temperature for ATI GPUs and works with r300g?

  5. #25
    Join Date
    Jul 2007
    Posts
    405

    Default

    Isn't this like cheating to make numbers look better? Shouldn't you be testing with out of the box settings only?
    Correct me if I'm wrong, but I think with that feature enabled, frame rates would cap at the display refresh rate, which would be sort of useless for some of these tests.
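    For what it's worth, the same effect can usually be reproduced on Mesa drivers without patching anything: the vblank_mode environment variable (or the matching driconf option) controls whether buffer swaps wait for vertical retrace. A minimal sketch, assuming a Mesa/DRI driver and using glxgears as a stand-in for whatever benchmark is being run:

    Code:
    import os
    import subprocess

    # vblank_mode=0 asks Mesa's DRI drivers not to wait for vertical retrace
    # on buffer swaps, so frame rates are no longer capped at the display
    # refresh rate. "glxgears" is just a placeholder benchmark (assumption).
    env = dict(os.environ, vblank_mode="0")
    subprocess.run(["glxgears"], env=env, check=False)

    Running the benchmark both with and without the swap wait is probably the fairest comparison, since it separates raw driver throughput from refresh-rate capping.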

  6. #26
    Join Date
    Dec 2007
    Posts
    2,395

    Default

    Thermal info and chip clocking are handled by the DRM, not the 3D driver. See the bottom of this page for more info:
    http://wiki.x.org/wiki/RadeonFeature
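    As a rough illustration, these live in the kernel's radeon KMS driver and are exposed through sysfs/debugfs rather than through the Gallium driver. A minimal sketch that just dumps them, assuming a radeon KMS kernel with debugfs mounted (the exact paths vary by kernel version and card index, so treat them as an assumption):

    Code:
    from pathlib import Path

    # Kernel-side radeon power-management interfaces:
    #   radeon_pm_info - current clocks and, on supported chips, the GPU
    #                    temperature (debugfs, usually needs root)
    #   power_method   - "profile" or "dynpm" reclocking
    #   power_profile  - requested profile when power_method is "profile"
    candidates = [
        Path("/sys/kernel/debug/dri/0/radeon_pm_info"),
        Path("/sys/class/drm/card0/device/power_method"),
        Path("/sys/class/drm/card0/device/power_profile"),
    ]

    for path in candidates:
        try:
            print(f"{path}: {path.read_text().strip()}")
        except OSError:
            print(f"{path}: not available on this kernel")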

  7. #27
    Join Date
    Jan 2010
    Location
    Portugal
    Posts
    945

    Default

    Quote Originally Posted by curaga View Post
    Typo on last page:
    The open-source Catalyst driver
    That's not a typo. That sentence came from the future.

    PS: Holy crap, the open-source drivers are getting fast!

  8. #28
    Join Date
    Dec 2007
    Posts
    2,395

    Default

    Quote Originally Posted by jakubo View Post
    If I remember well, r300g was supposed to be the training ground for later drivers, right? So keeping that in mind, r600g will surely follow soon. I just wonder why the more powerful r600 cards produce unplayable framerates. If the driver architecture is the same, how can that be?
    r300g has been worked on longer and has more hardware optimizations implemented (tiling, fast Z clears, etc.); r6xx+ supports similar features, but they are not fully implemented yet. r300g also has a more optimized shader compiler. Additionally, there have recently been a lot of driver optimizations in r300g that reduce CPU usage; many of these should be ported to r600g eventually. Finally, r3xx-r5xx is DX9/GL2.x-class hardware, so it maps more directly to the GL2 APIs. r6xx+ is DX10/GL3+ hardware, so the hardware doesn't match the older APIs as closely and is more complex (and flexible) in general; as such, there are a lot more hoops to jump through in the driver to support those APIs.

  9. #29
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    How much do you guys expect the LLVM IR to increase performance, both before and after TGSI is ditched, compared to the current TGSI-only IR? Or is this some wild experiment that's not in the pipeline for a very long time?

  10. #30
    Join Date
    May 2007
    Posts
    232

    Default

    LLVM is not well adapted to GPUs and I don't see it being useful. That being said, I'm pretty sure shader optimization won't improve performance much in the benchmarks from this article.
