
Thread: ATI R600g Gains Mip-Map, Face Culling Support

  1. #51
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,441

    Default

    Quote Originally Posted by Agdr View Post
    OK, but what if you released both an open driver without the DRM, and a monolithic blob with DRM?
    Removing DRM from the open driver wouldn't make any difference. An attacker would still be able to use the open code as a guide to attacking the blob.

    Quote Originally Posted by Agdr View Post
    This would only be a problem if contracts specifically said that the source code for any part of the whole driver including DRM cannot be publicly released (which seems pretty draconian, but I guess possible).
    The contracts don't specifically require holdback of source code, but they require specific levels of robustness and immunity from attack which so far have not been achievable if source code is released.

    Put differently, the contract doesn't say that you have to stay in the armoured car, but it does say that you have to survive when everyone is shooting at you.

    Quote Originally Posted by Agdr View Post
    I thought the idea of nVidia or Intel moving to a VLIW architecture and taking advantage of your ideas/code was considered unrealistic, but evidently you don't, and might be right at that.
    Doesn't have to be an existing competitor. If we say "hey folks, here's everything you need to design & build your own GPU HW/SW without spending hundreds of millions on R&D" we might as well just shut the company down today. A new competitor would not be able to keep up with evolution in the high end of the market, but they would easily be able to compete in the high-volume portion of the market, which actually pays for most of the ongoing R&D.

    Quote Originally Posted by Agdr View Post
    Yes, you would face the risk of a hypothetical slight loss of your competitive edge, which could perhaps be compensated by other advantages.
    Yep, that's the core tradeoff. The problem is that the advantages are ephemeral and hard to quantify, while the costs and risks are very real and easy to quantify. The upside is that people might like us better and we might get some nice contributions. The downside is that we would be betting the company, or at least the graphics business, to a much greater extent than we are today.

    Quote Originally Posted by Agdr View Post
    Another theoretical option would be to attempt to make an agreement with nVidia to both open your drivers, although I suppose that is very unrealistic.
    Talking with your competitors about anything these days brings an extraordinarily high risk of being hammered by government lawsuits. I don't think either of us would want to take that chance.

    Quote Originally Posted by Agdr View Post
    It's more cumbersome for you, but much less cumbersome for third parties, which means you get more contributions and more developer mindshare. Of course whether this is significant is debatable.
    Yep, and again there's a cost trade-off -- is it cheaper to open the code and get a few contributions from outside than to work with our partners and implement the same contributions ourselves? So far the numbers work out about 100:1 in favour of not opening the code.

    Quote Originally Posted by Agdr View Post
    There is also the Linux market advantage, where a top performing open driver could possibly lead to near total dominance of the Linux enthusiast market, and workstations users might be influenced too once RHEL and other enterprise distribution start advertising better or exclusive support for ATI and Intel cards as opposed to nVidia ones (and the Intel ones are of course useless for the workstation market).
    Again, if we picked up 100% of the Linux enthusiast market it might make us enough money to cover the cost of the current open source work. Any chance of even "breaking even" would have to come from the workstation market.

    Quote Originally Posted by Agdr View Post
    Right now people seem to still buy nVidia cards for Linux use because the open drivers are not competitive and they either deem fglrx inferior to nvidia, or consider them equivalent and go nVidia for other reasons. An open fglrx might change this, and make ATI cards a must, while with the current strategy, this won't happen at least for a few years, if at all (i.e. not until Mesa supports the latest OpenGL release and performance is almost that of fglrx).
    For reasons I don't fully understand, perception of driver quality has much less impact on hardware market share for Linux consumer users than you might expect. There has been a slight change over the last few years, coincident with supporting open source driver development and making massive improvements in the proprietary driver, but all indications are that the shift in Linux market share is actually *smaller* than the shift in Windows market share. Go figure.

    Quote Originally Posted by Agdr View Post
    Obviously it's a much smaller market than Windows, but being recommended by Linux users might somewhat affect the Windows market too, and the fact that Linux is likely more prevalent among developers and GPU compute users might increase its importance.
    Yep, if it wasn't for indirect factors like this it would be really hard to justify spending any $$ on Linux support. Linux is a huge factor in the server market (which is one reason that AMD has always been supportive of open source work) but so far Linux has not shown any signs of being a substantial part of the client PC market outside of the embedded and small-footprint (tablets etc...) segments.

    Quote Originally Posted by Agdr View Post
    Anyway, I guess you already analyzed this, and decided the possible gains would be lower than the risk of reducing your competitive advantage and the cost of the effort.
    Correct, and unfortunately the numbers aren't even close. That said, the world is constantly changing and we are always looking ahead to where things might be a few years from now.

    Quote Originally Posted by Agdr View Post
    I think it would be nice to have the closed source 3D userspace optionally work with the open DRM driver and an open X driver, enhancing them if necessary beforehand (possibly with Linux-specific code from fglrx if applicable). This should be doable (except possibly for the video DRM stuff, which I think you could just not support in this mode of operation), and would eliminate a lot of the major issues with a closed 3D driver like trouble with newer kernels and X servers, system crashes, security issues, kernel taint, and make it very easy to switch between fglrx and the Mesa/Gallium stack, and even use them side by side. It would be a unique advantage over nVidia and Nouveau and would also possibly allow you to eventually drop the kernel and X driver, and focus only on the proprietary OpenGL/OpenCL userspace.
    Yep, that was one of the first options we looked at, and still seems like one of the most attractive. The downsides are (a) we would need to refactor a big chunk of proprietary code so that the low-level fglrx code we released into the open stack would not put our proprietary DRM at risk in other OSes, and (b) the open drivers are community-controlled not AMD-controlled and so far the community is not real enthusiastic about constraining what they do with an open kernel driver in order to avoid breaking a proprietary user-space driver. We can fork the open kernel & userspace X driver code and ship our own (slightly different) version with the proprietary stack but then we lose a bunch of potential gains from having a common open driver.

    There are also a number of places where proprietary drivers over-write chunks of the common framework in order to add proprietary features, where the complexity/functionality tradeoff makes it hard for the community to justify adding that functionality to the common code. Features like Multiview and Crossfire both involve a lot of code outside the 3D driver as well as the obvious 3D changes. This will get easier over time, however -- Eyefinity does reduce the importance of Multiview, for example, and ever-increasing 3D performance may mean we can compete without Crossfire at some point in the future -- but IMO we also need to get to the point where the open source driver community is willing to live with at least a subset of the constraints that come from hosting a proprietary 3D stack on the same code, and I don't feel like we are there yet. The solution may be as simple as branching and re-merging the kernel & X driver code at the right times; it feels do-able anyways.

    The bigger task would be re-factoring, opening and merging in the proprietary low-level code from fglrx into the open drivers (basically memory management and command submission) and having that accepted by the open source driver community for use with the open drivers as well.

    Anyways, bottom line is that we are always looking at this stuff, that passage of time makes some of it easier, that we are heading in the direction you want, and that it's a lot harder than it appears at first glance.

  2. #52
    Join Date
    Jul 2010
    Posts
    46

    Default

    Quote Originally Posted by bridgman View Post
    Removing DRM from the open driver wouldn't make any difference. An attacker would still be able to use the open code as a guide to attacking the blob.
    Yes, but given the high skill of crackers, and the fact that you can trace (and alter) the GPU command stream anyway (see renouveau and revenge), it's not obvious that it makes a significant difference (especially if the open code is just a minimal command submission/resource manager layer).

    Quote Originally Posted by bridgman View Post
    Doesn't have to be an existing competitor. If we say "hey folks, here's everything you need to design & build your own GPU HW/SW without spending hundreds of milions on R&D" we might as well just shut the company down today. A new competitor would not be able to keep up with evolution in the high end of the market but they would easily be able to compete in the high volume portion of the market which actually pays for most of the ongoing R&D.
    Yes, that could be a concern.
    It's interesting to note though that in the x86 market you don't even need a software stack, and the architecture is very well documented (including performance aspects and sometimes architecture details), yet no one manages to compete with AMD and Intel outside perhaps of very low-end markets (e.g. VIA).

    Quote Originally Posted by bridgman View Post
    For reasons I don't fully understand, perception of driver quality has much less impact on hardware market share for Linux consumer users than you might expect. There has been a slight change over the last few years, coincident with supporting open source driver development and making massive improvements in the proprietary driver, but all indications are that the shift in Linux market share is actually *smaller* than the shift in Windows market share. Go figure.
    Interesting.
    Perhaps the reason is that the perception of the drivers has actually improved much less than the drivers themselves? (due to the open drivers still being primitive and fglrx's past inferiority).

    While trivial, perhaps renaming "fglrx" to "Catalyst" and favorable independent comparisons with nVidia could help shed those old perceptions.

    Quote Originally Posted by bridgman View Post
    Yep, that was one of the first options we looked at, and still seems like one of the most attractive. The downsides are (a) we would need to refactor a big chunk of proprietary code so that the low-level fglrx code we released into the open stack would not put our proprietary DRM at risk in other OSes,
    Would you really need to release substantial code?
    Isn't the current open Radeon kernel driver already good enough at least for supporting fglrx in single GPU configurations where KMS already works?

    Or maybe you have a different and better kernel architecture (e.g. userspace vs kernel command submission) and thus would need to open that?

    Quote Originally Posted by bridgman View Post
    and (b) the open drivers are community-controlled not AMD-controlled and so far the community is not real enthusiastic about constraining what they do with an open kernel driver in order to avoid breaking a proprietary user-space driver. We can fork the open kernel & userspace X driver code and ship our own (slightly different) version with the proprietary stack but then we lose a bunch of potential gains from having a common open driver.
    Why not just adapt to the changes in the open drivers?
    The Linux kernel has an approximately three-month release cycle, which should give time to prepare at least a Linux-only update for any ABI changes.
    Also, incompatible changes to the ABI tend to be frowned upon (see the Nouveau ABI break debate for instance).

    Quote Originally Posted by bridgman View Post
    ever-increasing 3D performance may mean we can compete without Crossfire at some point in the future
    Really? I would expect multi-GPU instead to be more prominent in the future, as software support improves and compute gets more prevalent (with "GPU SMP" systems becoming standard for compute servers).

    Quote Originally Posted by bridgman View Post
    Anyways, bottom line is that we are always looking at this stuff, that passage of time makes some of it easier, that we are heading in the direction you want, and that it's a lot harder than it appears at first glance
    Great, thanks

  3. #53
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,441

    Default

    Quote Originally Posted by Agdr View Post
    Yes, but given the high skill of crackers, and the fact that you can trace (and alter) the GPU command stream anyway (see renouveau and revenge), it's not obvious that it makes a significant difference (especially if the open code is just a minimal command submission/resource manager layer).
    The problem is that we have to "stay well back from the abyss"... it's not like we can keep opening things up until something bad happens and then back off a bit.

    Quote Originally Posted by Agdr View Post
    Yes, that could be a concern.
    It's interesting to note though that in the x86 market you don't even need a software stack, and the architecture is very well documented (including performance aspects and sometimes architecture details), yet no one manages to compete with AMD and Intel outside perhaps of very low-end markets (e.g. VIA).
    Getting into x86 isn't that simple. Not my area to talk about though. There is a non-trivial software stack, BTW, it's just between OS and hardware rather than app and hardware. I wasn't really aware of the CPU software stack until we joined up with AMD.

    Quote Originally Posted by Agdr View Post
    Interesting.
    Perhaps the reason is that the perception of the drivers has actually improved much less than the drivers themselves? (due to the open drivers still being primitive and fglrx's past inferiority). While trivial, perhaps renaming "fglrx" to "Catalyst" and favorable independent comparisons with nVidia could help shed those old perceptions.
    It is actually called Catalyst for Linux, or something like that. I just don't like being nitpicky by correcting everyone who calls it fglrx (and fglrx is way easier to type).

    Quote Originally Posted by Agdr View Post
    Would you really need to release substantial code?
    Isn't the current open Radeon kernel driver already good enough at least for supporting fglrx in single GPU configurations where KMS already works? Or maybe you have a different and better kernel architecture (e.g. userspace vs kernel command submission) and thus would need to open that?
    I don't have actual test results, but I suspect that the current Mesa driver over our libdrm/kernel code would be *much* faster than the current proprietary 3D driver over the open libdrm/kernel code.

    That doesn't mean the devs don't know how to write a fast kernel driver, just that the focus right now is still on functionality & robustness rather than performance optimization. Any of the devs working on the kernel driver can rattle off a list of all the things they would like to improve given time.

    Quote Originally Posted by Agdr View Post
    Why not just adapt to the changes in the open drivers? The Linux kernel has an approximately 3 month release cycle, which should give time to prepare at least a Linux-only update for any ABI changes. Also, incompatible changes to the ABI tend to be frowned upon (see the Nouveau ABI break debate for instance).
    We would have to do that, of course, but it would be all too easy to end up in a situation where kernel driver changes that make Mesa faster also make the proprietary driver slower, for example. What is "the right decision" in that scenario?

    Quote Originally Posted by Agdr View Post
    Really? I would expect multi-GPU instead to be more prominent in the future, as software support improves and compute gets more prevalent (with "GPU SMP" systems becoming standard for compute servers).
    Multi-GPU support for compute is less of a problem since the nature of the workload makes it easier for the split across multiple engines to be exposed at an application/API level. Graphics is a tougher challenge because you have to pretty much invisibly emulate a single GPU.
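    The contrast bridgman describes can be illustrated with a toy Python sketch: a compute workload like a dot product decomposes into independent chunks, so the split across devices can be exposed directly at the application/API level -- no device needs to know about the others, which is exactly what a multi-GPU graphics driver cannot assume. A thread pool stands in for real GPUs here, and all names below are made up for illustration:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def dot_chunk(a, b):
        """The work one 'device' handles: a self-contained partial result."""
        return sum(x * y for x, y in zip(a, b))

    def multi_device_dot(a, b, num_devices=2):
        """Split a dot product explicitly across 'devices'.

        Each chunk is independent, so the partition can live right at the
        API level -- the easy case bridgman describes for compute.
        """
        step = (len(a) + num_devices - 1) // num_devices
        chunks = [(a[i:i + step], b[i:i + step]) for i in range(0, len(a), step)]
        with ThreadPoolExecutor(max_workers=num_devices) as pool:
            partials = pool.map(lambda ab: dot_chunk(*ab), chunks)
        return sum(partials)

    a = list(range(8))
    b = list(range(8))
    # The split result matches the single-device result exactly.
    assert multi_device_dot(a, b, num_devices=2) == sum(x * y for x, y in zip(a, b))
    ```

    Rendering a frame, by contrast, has cross-chunk dependencies (a triangle can land anywhere on screen), which is why Crossfire-style drivers must emulate a single GPU invisibly instead of exposing the split.
    
    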

  4. #54
    Join Date
    Dec 2008
    Posts
    985

    Default

    Quote Originally Posted by bridgman View Post
    The problem is that we have to "stay well back from the abyss"... it's not like we can keep opening things up until something bad happens then back off a bit
    Exactly. That's why we are not only not opening up XvBA but also demand of the chosen few who do have access to the XvBA SDK that any app/library they develop shall remain closed source. Sort of a reverse GPL. All in all, this doesn't score AMD points for open source friendliness.

  5. #55
    Join Date
    Jul 2010
    Posts
    46

    Default

    Quote Originally Posted by bridgman View Post
    It is actually called Catalyst for Linux, or something like that. I just don't like being nitpicky by correcting everyone who calls it fglrx (and fglrx is way easier to type).
    The kernel module is still called "fglrx.ko", the X driver is called "fglrx_drv.so" and they both print messages with the "fglrx" string.
    It should be possible to rename them to "catalyst" while keeping compatibility via symlinks and module aliases.

    Quote Originally Posted by bridgman View Post
    We would have to do that, of course, but it would be all to easy to end up in a situation where kernel driver changes that make Mesa faster also make the proprietary driver slower, for example. What is "the right decision" in that scenario ?
    Given that you are willing to release GPU documentation, I guess it should be possible to come to a technical agreement over what is best, or support two options if really necessary (both of which could eventually be useful to the open driver too).

  6. #56
    Join Date
    Jul 2010
    Posts
    46

    Default

    Quote Originally Posted by monraaf View Post
    Exactly. That's why we are not only not opening up XvBA but also demand of the chosen few who do have access to the XvBA sdk that any app/library they develop shall remain closed source. Sort of a reverse GPL. All in all, this doesn't score AMD points for open source friendliness.
    Isn't the VA-API support through xvba-video good enough for the closed driver? (I haven't tried it personally, so I have no opinion)

  7. #57
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,441

    Default

    XvBA is a bit of a different story - it was developed for a market where everything is closed anyways, so the idea of making the API public would be the last thing any of our customers would want. That doesn't do much for the consumer PC client market, of course, so that's the next thing we need to address. For now, gbeauche's VA-API to XvBA adapter is a real nice way to try out the code while we finish development.

    We also found that there was a "middle ground" embedded market where everything looked just like a traditional embedded product but it used GPL apps, which made the idea of a closed API kind of problematic. We're hoping that the solution for consumer client PC will also work for that "not quite embedded" market.

  8. #58
    Join Date
    Jun 2009
    Posts
    2,927

    Default

    So XvBA is getting addressed?

    At least this is good to know.

  9. #59
    Join Date
    Jul 2010
    Posts
    46

    Default

    Quote Originally Posted by pingufunkybeat View Post
    So XvBA is getting addressed?

    At least this is good to know.
    By the way, the oldest available version of xvba-video is only a 40 KB binary, which the IDA Pro Hex-Rays decompiler turns into just 5,450 lines of C code containing 147 functions.

    It even includes asserts and error messages, and both x86-64 and i386 versions are available, making it easier to determine which fields are pointers and which are integers.
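    The x86-64/i386 comparison works because a pointer field occupies 4 bytes on i386 but 8 on x86-64, while a plain int stays 4 bytes on both, so any field whose size grows between the two builds is almost certainly a pointer. A toy sketch using Python's struct module (the two-field record is hypothetical, and real compilers also insert alignment padding, which is ignored here):

    ```python
    import struct

    # Hypothetical record recovered from a decompiled binary:
    # an int32 field followed by one unknown field.
    # "<" disables padding so only the raw field sizes show.
    I386_LAYOUT = "<iI"    # int32 + 4-byte field (pointer on i386)
    X86_64_LAYOUT = "<iQ"  # int32 + 8-byte field (same pointer on x86-64)

    # The int stays 4 bytes; the second field grows from 4 to 8 bytes
    # between the builds, flagging it as a pointer.
    print(struct.calcsize(I386_LAYOUT))    # 8
    print(struct.calcsize(X86_64_LAYOUT))  # 12
    ```
    
    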

    Overall, it looks quite easy to reverse engineer in order to produce open documentation of the XvBA API (though the result may be partial, since xvba-video may not use all of it).

    I'm not sure, however, whether that provides any advantage over just using it in binary form (given that one needs to rely on closed source code anyway for the XvBA implementation).

  10. #60
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,441

    Default

    Quote Originally Posted by pingufunkybeat View Post
    So XvBA is getting addressed? At least this is good to know.
    All I can say with 100% certainty is that it hasn't been forgotten and that we're trying to push it ahead.
