
Thread: ATI dropping support for <R600 - wtf!?

  1. #31
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    The fglrx driver also contains code shared across multiple OSes, and all the OSes except Linux have strong DRM (Digital Rights Management, not Direct Rendering Manager) requirements. It's certainly possible to pick the driver apart and separate out the bits which can be safely exposed, but the result would look a lot more like ground beef than like a steak.

    Note that Intel doesn't release docs on all the hardware in their chips either. That is not meant as criticism; we are all dealing with the same constraints here. I don't think IBM ever supported open source driver development for their graphics chips; our FireGL guys might remember -- they used IBM GPUs before switching to ATI.

    In all seriousness, I don't think documentation or sample code is a factor for power management or for higher levels of OpenGL support.

    Power management is primarily waiting for things to settle down in the command submission portions of the driver stack, so that the power management code can dynamically adjust to drawing workload, display modes, etc.
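    To illustrate the idea (this is a toy sketch invented for this post, not actual radeon or fglrx code), a workload-based reclocking decision might boil down to something like the following once the driver can reliably measure how busy the GPU was over the last sampling interval. The state names and thresholds are made up for illustration:

```c
/* Hypothetical sketch only -- not actual driver code.
 * Picks a clock/voltage state from how busy the GPU was
 * during the last sampling interval. */
#include <assert.h>

enum power_state { POWER_LOW, POWER_MID, POWER_HIGH };

/* busy_pct: 0-100, fraction of the last interval the GPU spent
 * executing commands. Thresholds are invented for illustration. */
enum power_state select_power_state(int busy_pct)
{
    if (busy_pct >= 80)
        return POWER_HIGH;   /* heavy 3D load: full clocks */
    if (busy_pct >= 30)
        return POWER_MID;    /* moderate load: mid clocks */
    return POWER_LOW;        /* mostly idle: lowest clocks/voltage */
}
```

    The hard part isn't the decision itself; it's getting a trustworthy busy signal out of the command submission code, which is why that has to settle down first.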

    OpenGL 2.1 theoretically comes for free once Gallium3D drivers are running on 3xx-5xx. Gallium3D in turn is currently built over DRI2 and GEM/TTM, which are also in progress.

    For anyone wondering why we expect the open source drivers to make you happy even if fglrx did not, the reason is simple. The open source drivers are aimed directly at the mix of functionality and performance that most of you are asking for and *only* contain code for that functionality. Fglrx is still aimed primarily at professional workstation users on a small set of enterprise Linux distributions, and includes well over 10x as much hardware-specific code as the open source drivers.

    You really need all that code to get every last bit of 3D performance out of the GPU, but a *much* smaller driver can provide all the functionality most of you expect along with perhaps 70% of the performance -- and can be tweaked to work well on a wide variety of distros and systems much more readily than our workstation driver.
    Last edited by bridgman; 03-05-2009 at 03:03 PM.

  2. #32
    Join Date
    Jan 2008
    Location
    Radoboj, Croatia
    Posts
    155

    Default

    Quote Originally Posted by bridgman View Post
    In all seriousness, I don't think documentation or sample code is a factor for power management or for higher levels of OpenGL support.
    Great then! So we can expect open source 3D rendering as fast as fglrx?
    Quote Originally Posted by bridgman View Post
    Power management is primarily waiting for things to settle down in the command submission portions of the driver stack, so that the power management code can dynamically adjust to drawing workload, display modes, etc.
    So, I guess this is WIP?
    Quote Originally Posted by bridgman View Post
    OpenGL 2.1 theoretically comes for free once Gallium3D drivers are running on 3xx-5xx. Gallium3D in turn is currently built over DRI2 and GEM/TTM, which are also in progress.
    What about OpenGL 3.0?

    Quote Originally Posted by bridgman View Post
    You really need all that code to get every last bit of 3D performance out of the GPU, but a *much* smaller driver can provide all the functionality most of you expect along with perhaps 70% of the performance -- and can be tweaked to work well on a wide variety of distros and systems much more readily than our workstation driver.
    WTF??? Only 70%? So I guess I won't be playing Nexuiz at high resolution with all effects turned on, on an open source driver?

    I'd understand 90+%, but 70%?? That's just not enough, or is it?

  3. #33
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    258

    Default

    I don't really get why people complain about this move. You can't maintain those things forever, especially when you have a good alternative. It's just proof that AMD's open source strategy pays off. I mean, effectively, dropping Catalyst support for <R600 means that many Windows users will have to either switch to Linux or buy a new card in order to use the latest technologies. Linux users, on the other hand, will continue to profit from the development of the FOSS driver(s), whether backed by AMD or community-driven. Also, as someone here pointed out, more people moving to the open source drivers will help squash bugs etc. So the ones who'll really profit from this move are the Linux users -- I'd be more pissed off if I were using Windows and knew Catalyst had some bug that's never going to get fixed.

    Honestly, I'd have them completely drop Catalyst support for Linux and have a few more people working on FOSS drivers.

  4. #34
    Join Date
    Jan 2009
    Posts
    11

    Default

    Hold on a minute here bridgman, I was kind of under the impression that fglrx shares most of its code with the Windows drivers and you had some "Unified Driver Architecture" type middleware in the mix. Is that simply not true? Is the fglrx driver a completely different codebase specifically written for professional applications, in comparison to the gaming-oriented Windows drivers? And what's this 70% number? Are you telling me that Linux open source solutions will only have just over two thirds the performance of the current fglrx driver, which is already a fair distance behind the Windows blobs? What are we looking at, half the framerate? Excuse me if I find that hard to swallow.

  5. #35
    Join Date
    Dec 2007
    Location
    Merida
    Posts
    1,099

    Default

    The performance of the driver will depend entirely on how much optimization the open source devs are willing and/or able to put into the drivers.

  6. #36
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Quote Originally Posted by roothorick View Post
    Hold on a minute here bridgman, I was kind of under the impression that fglrx shares most of its code with the Windows drivers and you had some "Unified Driver Architecture" type middleware in the mix. Is that simply not true? Is the fglrx driver a completely different codebase specifically written for professional applications, in comparison to the gaming-oriented Windows drivers?
    The fglrx driver shares big chunks of code with other OSes, and includes code paths for both workstation and consumer products. I call it a "workstation driver" because both the development and test focus of the Linux-specific bits are biased towards professional workstation use, particularly in terms of distro support (RHEL and SLED are not the most common consumer distros).

    Quote Originally Posted by roothorick View Post
    And what's this 70% number? Are you telling me that Linux open source solutions will only have just over two thirds the performance of the current fglrx driver, which is already a fair distance behind the Windows blobs? What are we looking at, half the framerate? Excuse me if I find that hard to swallow.
    AFAIK the Windows and Linux binary driver performance should be pretty much identical today; I think Linux stopped being "a fair distance behind" around the end of 2007.

    I asked our architects what performance levels they felt could be obtained with a "clean, simple, well written driver" but without any application-specific performance optimization and their estimate was an average of between 60 and 70% of proprietary driver performance.

    We have released enough programming info to get to 100%, but the driver code size and effort grow exponentially as you go for that last 30%. Getting the last bit of performance optimization out of a driver/GPU combination is extremely time-consuming and just plain hard work -- and none of the devs I have spoken with feel it will be necessary.

    This was based on the assumption of identical performance between Windows and Linux; if you are seeing something different (other than running through Wine) please let me know.
    Last edited by bridgman; 03-05-2009 at 03:43 PM.

  7. #37
    Join Date
    Jan 2008
    Location
    Radoboj, Croatia
    Posts
    155

    Default

    Quote Originally Posted by bridgman View Post
    I asked our architects what performance levels they felt could be obtained with a "clean, simple, well written driver" but without any application-specific performance optimization and their estimate was an average of between 60 and 70% of proprietary driver performance.

    We have released enough programming info to get to 100%, but the driver code size and effort grow exponentially as you go for that last 30%. Getting the last bit of performance optimization out of a driver/GPU combination is extremely time-consuming and just plain hard work -- and none of the devs I have spoken with feel it will be necessary.
    So, this means only 2100 out of 3000 FPS with glxgears on my Mobility Radeon X1600 (in the 3rd power state). I'm not impressed, sorry. FGLRX gave me 2500+ FPS in the 2nd power state and over 3000 FPS in the 3rd power state.

    But, one day I'll learn how to make drivers and I'll optimize my copy of radeon to get 100% of my card. That's the beauty of open source.

  8. #38
    Join Date
    Dec 2007
    Location
    Merida
    Posts
    1,099

    Default

    Quote Originally Posted by DoDoENT View Post
    So, this means only 2100 out of 3000 FPS with glxgears on my Mobility Radeon X1600 (in the 3rd power state). I'm not impressed, sorry. FGLRX gave me 2500+ FPS in the 2nd power state and over 3000 FPS in the 3rd power state.

    But, one day I'll learn how to make drivers and I'll optimize my copy of radeon to get 100% of my card. That's the beauty of open source.

    Glxgears is not a benchmark. And you can't really blame ATI for the state of the open source drivers... not anymore, at least. The info is available. Developers have the task of making the drivers and optimizing 3D performance.

  9. #39
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Yeah, I wasn't really thinking about glxgears in my posts; I didn't want to have to explain to the architects why everyone uses a benchmark which hardly uses any of the GPU functionality.

    The dev community is already working on the top priorities for improving 3D performance in the open drivers:

    #1 - memory manager (needed for GL 1.5 and higher) - GEM/TTM

    #2 - redo the command submission and buffer management code (current driver stack doesn't pipeline CPU and GPU operation as much as it could) - bufmgr, radeon-rewrite

    #3 - shift to a driver model designed around shader-based GPUs rather than fixed-function GPUs (i.e. Gallium3D)

    Once those are done (and all are making great progress) I think you'll see driver development move back to the incremental improvements you are used to seeing. Right now there is perhaps 18 months of work accumulated in branches and alternate code paths, and all of that should start to show up in releases over the next few months.
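    For anyone curious what "pipelining CPU and GPU operation" in #2 means in practice, here is a toy illustration (invented for this post, not the real bufmgr or radeon-rewrite code): instead of submitting one command buffer and waiting for the GPU to finish it, the CPU keeps queuing new batches into a ring while the GPU drains earlier ones, so the CPU only stalls if the ring fills up completely.

```c
/* Toy illustration of pipelined command submission -- invented
 * for illustration, not the real bufmgr/radeon-rewrite code.
 * The CPU queues batches into a ring; the GPU retires them
 * later, so the two overlap instead of taking turns. */
#include <assert.h>

#define RING_SIZE 4

struct ring {
    int batches[RING_SIZE];
    int head;   /* next slot the CPU writes */
    int tail;   /* next batch the GPU retires */
    int count;  /* batches currently in flight */
};

/* CPU side: returns 1 if the batch was queued without stalling. */
int submit_batch(struct ring *r, int batch_id)
{
    if (r->count == RING_SIZE)
        return 0;   /* ring full: CPU would have to wait for the GPU */
    r->batches[r->head] = batch_id;
    r->head = (r->head + 1) % RING_SIZE;
    r->count++;
    return 1;
}

/* GPU side (simulated): retire the oldest batch; -1 means idle. */
int retire_batch(struct ring *r)
{
    if (r->count == 0)
        return -1;
    int id = r->batches[r->tail];
    r->tail = (r->tail + 1) % RING_SIZE;
    r->count--;
    return id;
}
```

    The more batches the driver can keep in flight, the less the CPU and GPU sit waiting on each other, which is the gap the radeon-rewrite work is aimed at.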

  10. #40
    Join Date
    Jan 2008
    Location
    Radoboj, Croatia
    Posts
    155

    Default

    Quote Originally Posted by bridgman View Post
    Yeah, I wasn't really thinking about glxgears in my posts; I didn't want to have to explain to the architects why everyone uses a benchmark which hardly uses any of the GPU functionality.
    So, glxgears doesn't actually fully utilize the GPU?

    Quote Originally Posted by bridgman View Post
    The dev community is already working on the top priorities for improving 3D performance in the open drivers:

    #1 - memory manager (needed for GL 1.5 and higher) - GEM/TTM

    #2 - redo the command submission and buffer management code (current driver stack doesn't pipeline CPU and GPU operation as much as it could) - bufmgr, radeon-rewrite

    #3 - shift to a driver model designed around shader-based GPUs rather than fixed-function GPUs (i.e. Gallium3D)

    Once those are done (and all are making great progress) I think you'll see driver development move back to the incremental improvements you are used to seeing. Right now there is perhaps 18 months of work accumulated in branches and alternate code paths, and all of that should start to show up in releases over the next few months.
    OK, now I understand: the latest open source drivers are actually in good shape performance-wise, but those latest bits of code still aren't included in most distributions (which is what users actually see). So, could we see good open source 3D performance and power management by the end of this year (I mean in Ubuntu 9.10/Fedora 12)?
