
Thread: ATI R600g Gains Mip-Map, Face Culling Support

  1. #81
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,577

    Default

    Quote Originally Posted by bridgman View Post
    There wasn't really much to release until the last week or so... the devs have been working through major rendering problems. I saw glxgears run "ok" for the first time on Friday.
    Out of curiosity: did the system you saw glxgears working have new enough DRM that vsync worked? I noted Michael's system didn't.

  2. #82
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    There wasn't really much to release until the last week or so... the devs have been working through major rendering problems. I saw glxgears run "ok" for the first time on Friday.

    We can't really write useful "here's how to program it" documentation or know what parts of the registers specs need to be included until we've figured out how to make it work ourselves...

  3. #83
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Quote Originally Posted by nanonyme View Post
    Out of curiosity: did the system you saw glxgears working have new enough DRM that vsync worked? I noted Michael's system didn't.
    Richard was running on a slightly older kernel so probably didn't have vsync. Alex put together a patch for Richard's code to make it work on the latest 2.6.35 kernel driver, and I expect we'll release all of that together.

  4. #84
    Join Date
    Jul 2010
    Posts
    46

    Default

    Quote Originally Posted by bridgman View Post
    There wasn't really much to release until the last week or so... the devs have been working through major rendering problems. I saw glxgears run "ok" for the first time on Friday.
    I was under the impression that the code was done and only "IP review" was left (which I assume means someone reads code to check it against some policy). I think someone from ATI said that.

    Perhaps he meant that IP review was in progress in parallel with writing the code?

    That actually sounds smart, since even somewhat broken code might allow someone to start implementing equivalent support in r600g.

    Anyway, good to know that glxgears is working.

  5. #85
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Yep, the code reached the point a month or so ago where starting the IP review made sense. The goal is to have IP review finish at about the same time we have something worth releasing. Alex also has basic 2D functions working now (using the 3D engine), so I think it's all coming together.

    There were a *lot* more annoying little "you have to set this bit differently or everything will be garbled" changes than we expected, however.

  6. #86
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    BTW the review is more along the lines of "read the code, figure out what the substantial new IP areas are, figure out who needs to review each one, send the stuff out for review, discuss with each of the reviewers, change the driver design if it turns out something just can't be released, repeat til done".

    We have policies but given the degree of change from one GPU generation to the next no detailed policy is going to survive even a single round of GPU design.

  7. #87
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,577

    Default

    Quote Originally Posted by bridgman View Post
    Richard was running on a slightly older kernel so probably didn't have vsync. Alex put together a patch for Richard's code to make it work on the latest 2.6.35 kernel driver, and I expect we'll release all of that together.
    Aight. I was testing Gallium3D and classic Mesa with glxgears with vsync at one point and noticed that Gallium3D tends to run at *exactly* my refresh rate, whereas classic Mesa only approximates it. Kind of an interesting phenomenon imo.

  8. #88
    Join Date
    Aug 2008
    Posts
    84

    Default

    Quote Originally Posted by monraaf View Post
    Exactly. That's why we are not only not opening up XvBA but also demand of the chosen few who do have access to the XvBA sdk that any app/library they develop shall remain closed source. Sort of a reverse GPL. All in all, this doesn't score AMD points for open source friendliness.
    Especially given that the developers of, for example, Nouveau don't have this kind of constraint. I'd be entirely unsurprised if we ended up with entirely open source video decoding support for NVidia hardware at some point, which really wouldn't do wonders for AMD's reputation.

  9. #89
    Join Date
    Nov 2008
    Posts
    762

    Default

    Quote Originally Posted by makomk View Post
    I'd be entirely unsurprised if we ended up with entirely open source video decoding support for NVidia hardware at some point
    Not in the next decade. AMD provides hardware documentation except for DRM related stuff. Nvidia does not provide any hardware documentation at all. So how exactly is nouveau at an advantage here?

    VDPAU on nvidia's closed drivers may work better than XvBA on fglrx, but that's completely irrelevant when we're talking about open source solutions.

  10. #90
    Join Date
    Jul 2010
    Posts
    46

    Default

    Quote Originally Posted by rohcQaH View Post
    Not in the next decade. AMD provides hardware documentation except for DRM related stuff. Nvidia does not provide any hardware documentation at all. So how exactly is nouveau at an advantage here?
    The fact that AMD might release specs in the future may indeed make someone reverse engineering UVD less likely than someone reverse engineering nVidia's PureVideo.

    For instance, right now there is no public Evergreen support at all, while there is a prototype Gallium driver for Fermi cards (probably almost totally broken, but it's there, see http://people.freedesktop.org/~chrisbmr/nvc0g.tar.gz).
    That's because there is no significant chance that nVidia will release specs anytime soon, while AMD promised specs and code, so everyone decided to just wait for them. On the other hand, once AMD makes good on that promise, r600g will probably (and hopefully) improve much faster than the Fermi efforts.

    In fact, the "revenge" tool for reverse engineering fglrx doesn't even seem to have been updated for r6xx+ cards, while "renouveau" almost surely works on Fermi.

    So, partial openness does tend to stifle the reverse engineering community that would otherwise pop up. This is in some sense an additional advantage AMD gets from its more open strategy.

    As for video, I guess people care little, since CPUs should be fast enough to decode all sensible video (provided you buy a mid/high-end one), while for 3D GPU acceleration is absolutely a must.

    Reverse engineering also seems to be complicated by the high variation in decoding hardware: nVidia is said to have six different video decoding interfaces across all their cards.

    Finally, shader based approaches are much easier to code, continue to work on future hardware, and should still be good enough for hardware which isn't totally low-end (and are partially required anyway, since AMD says all post-processing/presentation is done with shaders).
