
Thread: Mixing open and closed source

  1. #41
    Join Date
    Mar 2007
    Posts
    17

    Default

    That's interesting that you notice Phoronix discussions to be more reasoned. You are probably right -- I haven't read many Phoronix discussions -- but I do find LWN discussions to be well-informed. I tend to think of the LWN posters as being programmers and the Phoronix posters as being users. (With no disrespect intended, of course.)


    I stopped reading slashdot long ago (I started reading slashdot when it was new -- a regularly updated news source targeted at geeks! It was just another miracle of that thrilling year), but I have always been pleased that the level of discourse on LWN.net has never descended to slashdottiness.


    Hm... Actually I left this reply written but unposted overnight, and while I was away from my computer it occurred to me that you might find the Phoronix posters to be more "reasonable" because they are saying things that agree more with your prejudices. That's completely natural of course, but it is the kind of thing that you want to be wary of if you are trying to learn about previously unknown markets -- you don't want your filters to be too good at excluding "unreasonable" customers from your view of the world.


    Regards,

    Zooko Wilcox-O'Hearn

    P.S. I hope that you looked at those two LWN discussions that I referenced. You replied saying that LWN rarely focussed on GPU issues, and you are right, but those two discussions from last week did focus on GPU issues.
    Last edited by Zooko; 02-03-2008 at 10:20 AM.

  2. #42
    Join Date
    Sep 2007
    Location
    Athens-Hellas
    Posts
    255

    Default

    Hm, I've read the full discussion here and found it very interesting...

    @bridgman:

    I can understand AMD's worries about all that DRM/BD/HD restricted stuff, but don't forget that by that way of thinking, Linux and open source systems in general shouldn't even play commercial DVDs!
    And I bet that DRM and the like will be cracked extremely quickly and easily by our hackers.
    As far as I can understand, DRM is an M$ (and generally the movie industry's) weapon against open source communities, because they are well aware that they are losing more and more customers to the ever-growing Linux and other open communities.

    I happen to know a lot of people who really hate, and get very angry about, the fact that laptop OEMs ship their products with M$ Windows, so buyers have to pay about 100 Euros or more for a really crappy OS like Vista.
    Can you tell me why they don't sell their laptops without an OS installed??

    Can there be closed parts within the open source based driver for the specific restricted, problematic formats and patented designs?
    Do we have to maintain a fully closed driver just to play some restricted video formats (which would stay restricted on Linux for very little time anyway)??

    And finally, John, great thanks to you my friend, because now we have someone from the company to discuss our thoughts with, and of course we'll help as much as we can too. And thanks to AMD for changing its policy towards Linux users!
    But please also consider bringing out a driver for our FreeBSD brothers...

    Best Regards
    Jim

  3. #43
    Join Date
    Oct 2007
    Location
    Lithuania
    Posts
    35

    Default

    Quote Originally Posted by djdoo View Post
    But also consider bringing up a driver for our FreeBSD brothers...
    Jim
    FreeBSD can already use radeonhd.

  4. #44
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,458

    Default

    Quote Originally Posted by Tillin9 View Post
    My 2 cents are that the DRM issue should be pushed till later. The major area where work should go is into 3D support.
    No argument there. This thread is not about working on DRM today. There was a (perfectly reasonable) proposal from a few people that we re-architect fglrx to use a 3d blob over an otherwise open source driver, but that causes us problems in markets like workstation, or if we are required by our customers to implement DRM in the future, so I suggested that we stay away from that discussion for a while and focus on specific things like 3d and video.

    Quote Originally Posted by Tillin9 View Post
    I think getting Windows games running under Linux is a much bigger priority than Blu-ray, since games sell graphics cards, not movies.
    My sense is that games sell midrange and high-end chips while movies sell low-end, high-volume chips, but I agree with you about Wine. When we had a totally workstation focus, Wine wasn't a consideration, but for general desktop users it does seem important.

    Quote Originally Posted by Zooko View Post
    That's interesting that you notice Phoronix discussions to be more reasoned. You are probably right -- I haven't read many Phoronix discussions -- but I do find LWN discussions to be well-informed. I tend to think of the LWN posters as being programmers and the Phoronix posters as being users. (With no disrespect intended, of course.)
    I agree with the split, although lwn seems to be a mix of developers and slashdot-style ranters. The devs are fine (and you can see them trying to keep discussion on track), but if you look at the Intel article, for example, there was more ranting than reasoned discussion. I agree that if you just read the articles it's a really good site... I just don't find the comments to be of the same quality you get here.

    Quote Originally Posted by Zooko View Post
    Hm... Actually I left this reply written but unposted overnight, and while I was away from my computer it occurred to me that you might find the Phoronix posters to be more "reasonable" because they are saying things that agree more with your prejudices. That's completely natural of course, but it is the kind of thing that you want to be wary of if you are trying to learn about previously unknown markets -- you don't want your filters to be too good at excluding "unreasonable" customers from your view of the world.
    Always a risk, and something I constantly watch out for. One of the reasons for being here (and on other boards) is to make sure that doesn't happen.
    Last edited by bridgman; 02-03-2008 at 12:16 PM.

  5. #45
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,458

    Default

    Quote Originally Posted by givemesugarr View Post
    then nvidia would support it and production companies would push everything out to those boards. only an agreement between hw producers not to support it would lead to its removal, but since amd, nvidia, ati, intel and other companies were the founders of drm...
    Hold on there buddy

    We are *not* the founders of DRM. We are the companies that are required to implement it in order to sell chips. Intel developed HDCP because DRM on the outputs was a requirement from content providers for playing HD or BD on PCs. Macrovision and (I think) CGMS-A came from outside the PC industry. The rest of the standards (CSS, AACS etc.) came from content providers and consumer electronics groups.

    Quote Originally Posted by givemesugarr View Post
    ... the problem is that the one who gets out of it would simply lose market share and die, unless it came out with something that would lead the community to stick with it.
    Bingo

    Quote Originally Posted by givemesugarr View Post
    the new specs would mean more driver work than implementing new code for the radeonhd driver, because from a programmer's view it's simpler to write new code than to modify old code that wasn't written by you to fit your needs.
    The problem with starting fresh for 2d and 3d is that you effectively duplicate the code even though the HW is largely unchanged. Better IMO to have code splits based on hardware splits as you say.

    Quote Originally Posted by givemesugarr View Post
    why not concentrate for, let's say, 2 weeks on identifying the stuff that is equal across the chips and what is different? then take the equal stuff, based on what it does, and put it into a module, put other stuff into another module, and so on. make a diagram of the various features and how they were implemented (or not yet implemented), then take the driver and split it based on this new diagram.
    We did that at the start of the project. That's how we ended up with separate code for display and common code for everything else. I even have slides

    Quote Originally Posted by givemesugarr View Post
    from what i've read around, the future is a single radeon driver that would be ok for all the boards, but with the differences in functionality and hw implementation this would require a lot of work and superfluous code just for compatibility. the same fglrx could benefit from this development by removing igp features and board detection from it.
    Here's where it gets tricky. The display block is totally different between 4xx and 5xx, so it needs new code, but display implemented the way we wanted will be the smallest part of the driver stack once the other parts of the driver stack are added. Since display was implemented first the issue seems simple today, but once 2d, 3d and video get in, the amount of similar hardware will be bigger than the amount of different hardware.

    Quote Originally Posted by givemesugarr View Post
    for the igps, if their code is so different, there's a good point in splitting them from the fglrx and radeon stuff and putting them into a single big module separated from the others.
    Actually, IGPs are not much different. The memory management setup is a bit different (which is a huge pain if you are RE-ing but otherwise not a big deal) and the 3d engines need vertex processing in software, but everything else is the same.

    Quote Originally Posted by givemesugarr View Post
    this would also allow amd to easily implement modules on top of the various modules for a future implementation of hw hd decoding or other features that wouldn't be opensourced, if they wanted to go down that road in the future.
    It's harder than that. Everything below the decoding would need to be closed source and tamper proofed. Between that and the closed source 3d everyone seems willing to consider, there isn't much left.
    Last edited by bridgman; 02-03-2008 at 12:21 PM.

  6. #46
    Join Date
    Jun 2007
    Posts
    406

    Default

    Quote Originally Posted by bridgman View Post
    Hold on there buddy

    We are *not* the founders of DRM. We are the companies that are required to implement it in order to sell chips. Intel developed HDCP because DRM on the outputs was a requirement from content providers for playing HD or BD on PCs. Macrovision and (I think) CGMS-A came from outside the PC industry. The rest of the standards (CSS, AACS etc.) came from content providers and consumer electronics groups.
    sorry, i was confusing this with tcg and tcpa. i think that just about all users know what that is and what the real problem with this stuff is. you're right about drm, though, which is (strangely) a microzoz patent. if someone doesn't know what tcg and tcpa are: they're the same organization, and wikipedia has articles about it http://en.wikipedia.org/wiki/Trusted_Computing_Group and about the chip that "should protect and not invade our rights" called the tpm http://en.wikipedia.org/wiki/Trusted_Platform_Module


    Quote Originally Posted by bridgman View Post
    Bingo


    The problem with starting fresh for 2d and 3d is that you effectively duplicate the code even though the HW is largely unchanged. Better IMO to have code splits based on hardware splits as you say.
    yep, but i was suggesting that the base code should be shared and that the split code would be loaded by this core module based on identification of the board. it should be easier to use and also to maintain. also, maybe some devs who see code doing something in a bad way would rewrite it better. this means that a new codebase would also clean out the old "brutal" code present in the old one.
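    Just to illustrate the "core module loads split code based on board identification" idea: this is roughly the PCI-ID-style dispatch pattern kernel drivers commonly use. All the names and device ids below are made up for the sketch; none of this is real radeon/radeonhd code.

    ```c
    #include <stddef.h>
    #include <stdio.h>

    /* Per-generation backend: the "split code" the core module would load. */
    struct chip_backend {
        const char *name;
        void (*init_display)(void);
    };

    static void r4xx_display(void) { puts("r4xx display init"); }
    static void r5xx_display(void) { puts("r5xx display init"); }

    static const struct chip_backend r4xx = { "r4xx", r4xx_display };
    static const struct chip_backend r5xx = { "r5xx", r5xx_display };

    /* PCI-ID-style table mapping device ids to backends (ids invented). */
    struct board_id {
        unsigned short device;
        const struct chip_backend *backend;
    };

    static const struct board_id boards[] = {
        { 0x4e50, &r4xx },
        { 0x7100, &r5xx },
    };

    /* The "core module": identify the board, pick the split module. */
    const struct chip_backend *probe(unsigned short device)
    {
        for (size_t i = 0; i < sizeof(boards) / sizeof(boards[0]); i++)
            if (boards[i].device == device)
                return boards[i].backend;
        return NULL; /* unknown chip: refuse to bind */
    }
    ```

    In a real driver the backend struct would carry far more than display init (2d, 3d, video hooks), but the trade-off discussed in this thread still applies: if most entries end up pointing at identical code, the split buys little.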

    Quote Originally Posted by bridgman View Post
    We did that at the start of the project. That's how we ended up with separate code for display and common code for everything else. I even have slides
    have you shown them to the devs?!
    i'm joking on this.

    Quote Originally Posted by bridgman View Post
    Here's where it gets tricky. The display block is totally different between 4xx and 5xx, so it needs new code, but display implemented the way we want is the smallest part of the driver stack. Since display was implemented first it seems a lot more black-and-white now, but once 2d, 3d and video get in, the amount of unchanged hardware will be bigger than the amount of different hardware.
    wouldn't the split modules help more with cleanliness of the code?! also, if the various different chips are maintained in different small modules, it's faster to find out where the problems are when modifying stuff. i'm for the split theory since i think that having many small modules is more efficient in memory use too, not only in code maintenance. also, if we know now that old boards cannot get better code, then it would be better to split them into modules so that those modules wouldn't be touched by new code written for the newer boards. porting new code to the old modules would be simple, since there would only be the need to test the new modules on older boards and see what happens. that should help you avoid problems like the ones with firegl boards not working. if they had their own dedicated modules from older releases they'd still have worked, even if the new code lines for the new boards would have prevented them from working.
    but if the base settings and initialization of the board are different, then this doesn't apply anymore.

    Quote Originally Posted by bridgman View Post
    Actually, IGPs are not much different. The memory management setup is a bit different (which is a huge pain if you are RE-ing but otherwise not a big deal) and the 3d engines need vertex processing in software, but everything else is the same.
    so what they really need is just to reset the hw registers and set up the correct ones (which is what catalyst seems to do on windows at startup) to work better?!

    Quote Originally Posted by bridgman View Post
    It's harder than that. Everything below the decoding would need to be closed source and tamper proofed. Between that and the closed source 3d everyone seems willing to consider, there isn't much left.
    i haven't said that you should do it right away. i was pointing out that having the same core module for fglrx and radeon at startup would mean that in the future the 2 drivers could somehow share stuff, if amd considered that a good road to walk in the future. what would the problem be with having an fglrx 3d-only module that does not just 3d accel but also uvd and other stuff, while using the opensource 2d module instead of the fglrx one?! obviously this won't work on newer boards that emulate 2d via 3d. also, since drm is already present in the kernel, why not use the xorg drm directly for fglrx too?! is it so different?! having this might also get drm stuff better supported in the kernel itself, and could make linus consider some middle way between supporting these "life intrusion" mechanisms like drm and not supporting them at all. this is what happened when selinux tried to get into the kernel and a new security layer was created. this could also happen, under the right circumstances, with this stuff. a user, for example, would not have this compiled by default, but if he wanted, he could compile the modules and use bd and other stuff on his pc.
    i'm talking about this because i'm planning to build a multimedia home-tv center attached to my lcd tv, and i'd like to put a bd viewer inside to take advantage of full-hd films. but to do that i'd need to use windows and pay a lot of money for windows tools, to have them do what they want and not what i want them to do, while on linux i have myth-tv and a lot of other useful stuff, i wouldn't be required to pay a single cent if i didn't want to donate to the various projects, and i'd have the apps configured and working the way i like.

  7. #47
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,458

    Default

    Quote Originally Posted by givemesugarr View Post
    yep, but i was suggesting that the base code should be shared and that the split code would be loaded by this core module based on identification of the board. it should be easier to use and also to maintain. also, maybe some devs who see code doing something in a bad way would rewrite it better. this means that a new codebase would also clean out the old "brutal" code present in the old one.
    On the first implementation I expect the split code will be tiny and the base code will be 90+%. Most of the cruft is in the base code that doesn't change from one chip to another. I figure we're going to want to do a fresh implementation from scratch once Gallium stabilizes anyways, so why do it twice ?

    Quote Originally Posted by givemesugarr View Post
    also, if we know now that old boards cannot get better code, then it would be better to split them into modules so that those modules wouldn't be touched by new code written for the newer boards. porting new code to the old modules would be simple, since there would only be the need to test the new modules on older boards and see what happens. that should help you avoid problems like the ones with firegl boards not working. if they had their own dedicated modules from older releases they'd still have worked, even if the new code lines for the new boards would have prevented them from working.
    I agree -- if there aren't resources to test and fix issues on older boards then separate modules may be the way to go, even if they contain largely duplicated code. Downside of this is that you stop enhancing support for older boards and the owners come looking for you with flaming torches and pitchforks. If you always push changes from one module to the other then having separate modules is a waste of time. This issue is tough enough with fglrx but I don't think open source drivers should leave the older chips behind, particularly when 90% of the code will be the same. Fair question though.

    Again, if I thought there would be significantly different code for the different ASIC generations then I would be arguing for separate modules. It's not that we don't understand where separate modules can be useful, it's that so much of the code *is* common across the generations.

    Quote Originally Posted by givemesugarr View Post
    but if the base settings and initialization of the board are different, then this doesn't apply anymore.
    Drivers for the current mesa architecture have a lot of code that isn't specific to the ASIC. One of the things Gallium brings is that the drivers are a lot more "pure", ie they contain relatively more chip-specific stuff and relatively less common stuff. I figure that's the time to clean house.

    Quote Originally Posted by givemesugarr View Post
    (IGP) so what they really need is just to reset the hw registers and set up the correct ones (which is what catalyst seems to do on windows at startup) to work better?!
    That's one part. The other part is that someone has to write the code that makes up for the lack of vertex shaders. Dave Airlie did the initial work and got 3d running on RS4xx, but more work is still needed.
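    For anyone wondering what "vertex processing in software" means concretely: the driver must do on the CPU what the missing vertex shaders would have done on the GPU, typically transforming each vertex by a matrix before submitting the buffer to the 3d engine. A minimal sketch (hypothetical layout, not the actual Mesa/RS4xx code path):

    ```c
    /* 4x4 matrix times 4-component vertex: the core of what a hardware
     * vertex shader does, performed on the CPU for IGPs that lack one. */
    void xform_vertex(float m[4][4], float in[4], float out[4])
    {
        for (int r = 0; r < 4; r++) {
            out[r] = 0.0f;
            for (int c = 0; c < 4; c++)
                out[r] += m[r][c] * in[c];
        }
    }

    /* Transform a whole vertex buffer before handing it to the 3d engine. */
    void xform_buffer(float m[4][4], float (*in)[4], float (*out)[4], int count)
    {
        for (int i = 0; i < count; i++)
            xform_vertex(m, in[i], out[i]);
    }
    ```

    The real work is of course much larger (clipping, lighting, emulating the full shader instruction set), which is why this still needs developer effort beyond the initial bring-up.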

    Quote Originally Posted by givemesugarr View Post
    i haven't said that you should do it right away. i was pointing out that having the same core module for fglrx and radeon at startup would mean that in the future the 2 drivers could somehow share stuff, if amd considered that a good road to walk in the future. what would the problem be with having an fglrx 3d-only module that does not just 3d accel but also uvd and other stuff, while using the opensource 2d module instead of the fglrx one?! obviously this won't work on newer boards that emulate 2d via 3d.
    Simple. Protecting the decoded HD/BD images requires DRM code in the display and kernel drivers, not just the video decode/render bits. That doesn't leave much to open source.

    Quote Originally Posted by givemesugarr View Post
    also, since drm is already present in the kernel, why not use the xorg drm directly for fglrx too?! is it so different?! having this might also get drm stuff better supported in the kernel itself, and could make linus consider some middle way between supporting these "life intrusion" mechanisms like drm and not supporting them at all. this is what happened when selinux tried to get into the kernel and a new security layer was created. this could also happen, under the right circumstances, with this stuff. a user, for example, would not have this compiled by default, but if he wanted, he could compile the modules and use bd and other stuff on his pc.
    The kernel DRM will need a full implementation of TTM at minimum (including management of video memory and migration between pools) before it can support our client code. A lot of the features added by TTM & Gallium have been in proprietary drivers for a while. That's one of the reasons I'm saying "let's not have this discussion now".

    If it was just "leaving stuff out" that would be OK, but the stuff that stays in pretty much needs to be double-implemented depending on what lower level functions are present. We also need to tamper-proof any of the secure code, and that means everything below needs to be tamper-proofed as well.

    If you believe in separate code for separate functions, that argues for separate implementations of the secure/nonsecure paths, and once you do *that* you end up with an open driver which is the collection of nonsecure modules, and a closed driver which is the collection of secure modules. Plus or minus 20%, anyways.

    We went through many of these discussions internally, including consideration of various open/closed hybrids, before settling on the current "two driver" plan. I think there are places where we could use open source effectively in fglrx, particularly installation and initialization/startup, but going much further than that starts to put real constraints on what we can do with fglrx in the future.
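    The memory-management piece bridgman mentions (TTM managing video memory and migration between pools) boils down to tracking which pool each buffer lives in and moving buffers when a pool fills up. A toy sketch of just the bookkeeping, with invented structures -- the real TTM is vastly more involved (fencing, page tables, eviction policy):

    ```c
    #include <stdbool.h>

    enum pool { POOL_VRAM, POOL_GTT };

    struct buffer {
        enum pool where;
        unsigned size;
    };

    struct pools {
        unsigned vram_free;
        unsigned gtt_free;
    };

    /* Try to place a buffer in VRAM; fall back to system/GTT memory. */
    bool place(struct pools *p, struct buffer *b)
    {
        if (p->vram_free >= b->size) {
            p->vram_free -= b->size;
            b->where = POOL_VRAM;
            return true;
        }
        if (p->gtt_free >= b->size) {
            p->gtt_free -= b->size;
            b->where = POOL_GTT;
            return true;
        }
        return false; /* both pools exhausted */
    }

    /* Evict a VRAM buffer to GTT to make room: the "migration" part. */
    bool evict_to_gtt(struct pools *p, struct buffer *b)
    {
        if (b->where != POOL_VRAM || p->gtt_free < b->size)
            return false;
        p->vram_free += b->size;
        p->gtt_free -= b->size;
        b->where = POOL_GTT;
        return true;
    }
    ```

    The point of the sketch is why this matters for the client code above it: until the kernel side can account for and migrate buffers like this, a driver stack that assumes those services can't sit on top of it.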
    Last edited by bridgman; 02-03-2008 at 02:24 PM.

  8. #48
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,458

    Default

    My fingers are getting tired. Can we stop talking about DRM for a while ?


  9. #49
    Join Date
    Jun 2007
    Posts
    406

    Default

    Quote Originally Posted by bridgman View Post
    On the first implementation I expect the split code will be tiny and the base code will be 90+%. Most of the cruft is in the base code that doesn't change from one chip to another. I figure we're going to want to do a fresh implementation from scratch once Gallium stabilizes anyways, so why do it twice ?
    correct observation. this means that for the moment the right thing is to continue adding features and do a full cleanup when gallium has stabilized.

    Quote Originally Posted by bridgman View Post

    I agree -- if there aren't resources to test and fix issues on older boards then separate modules may be the way to go, even if they contain largely duplicated code. Downside of this is that you stop enhancing support for older boards and the owners come looking for you with flaming torches and pitchforks. If you always push changes from one module to the other then having separate modules is a waste of time. This issue is tough enough with fglrx but I don't think open source drivers should leave the older chips behind, particularly when 90% of the code will be the same. Fair question though.
    well, a workaround for this split vision might be to keep the various older modules ready and insert them when there are compatibility problems. for example, you release a new version but for some reason this new version doesn't work anymore on the rs400 series. at that point, you ship the last module that worked fine for those chipsets under the name rs400, and the driver would use that one while using the new one for the other chips. this could be a workaround for the problems that split, duplicated code would introduce.
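    That last-known-good fallback idea could be sketched as simple bookkeeping (entirely hypothetical, no real driver ships like this): keep per-family module versions around with a flag recording whether testing on real boards passed, and bind the newest one that works.

    ```c
    #include <stddef.h>
    #include <string.h>

    struct family_module {
        const char *family;   /* e.g. "rs400" (example name) */
        int version;
        int known_good;       /* set by testing on real boards */
    };

    /* Pick the highest known-good module version for a chip family,
     * or -1 if no version has ever worked for it. */
    int pick_version(const struct family_module *mods, size_t n,
                     const char *family)
    {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (strcmp(mods[i].family, family) == 0 &&
                mods[i].known_good && mods[i].version > best)
                best = mods[i].version;
        }
        return best;
    }
    ```

    The downside bridgman raises still holds: whatever family is pinned to an old version stops receiving new features until someone backports them.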

    Quote Originally Posted by bridgman View Post
    Again, if I thought there would be significantly different code for the different ASIC generations then I would be arguing for separate modules. It's not that we don't understand where separate modules can be useful, it's that so much of the code *is* common across the generations.
    well, this is good to hear, since it means that the released docs could be helpful for older boards too, and that the process of implementing new features in radeonhd could be somewhat faster because of already-written code.


    Quote Originally Posted by bridgman View Post
    Simple. Protecting the decoded HD/BD images requires DRM code in the display and kernel drivers, not just the video decode/render bits. That doesn't leave much to open source.
    well, maybe in the future someone could think of a solution for this problem...

    Quote Originally Posted by bridgman View Post
    The kernel DRM will need a full implementation of TTM at minimum (including management of video memory and migration between pools) before it can support our client code. A lot of the features added by TTM & Gallium have been in proprietary drivers for a while. That's one of the reasons I'm saying "let's not have this discussion now".

    If it was just "leaving stuff out" that would be OK, but the stuff that stays in pretty much needs to be double-implemented depending on what lower level functions are present. We also need to tamper-proof any of the secure code, and that means everything below needs to be tamper-proofed as well.

    If you believe in separate code for separate functions, that argues for separate implementations of the secure/nonsecure paths, and once you do *that* you end up with an open driver which is the collection of nonsecure modules, and a closed driver which is the collection of secure modules. Plus or minus 20%, anyways.

    We went through many of these discussions internally, including consideration of various open/closed hybrids, before settling on the current "two driver" plan. I think there are places where we could use open source effectively in fglrx, particularly installation and initialization/startup, but going much further than that starts to put real constraints on what we can do with fglrx in the future.
    ok, now i've understood why an actual base merge of fglrx and radeon isn't possible, but from how you've described it, when gallium arrives that will most likely happen, and maybe at that time we'll be able to switch from radeon to fglrx without much pain.
    i'd like to thank you for the time you've spent around here answering our questions and helping us better understand how things are going inside the development process and what to expect from amd in the near future and in the long term.

  10. #50
    Join Date
    Oct 2007
    Location
    Mumbai India
    Posts
    39

    Default

    Quote Originally Posted by Kano View Post
    Well, decoding is possible using ffmpeg -- mplayer can use the built-in or an external ffmpeg. Also there is a patch to use CoreAVC as decoder.

    http://code.google.com/p/coreavc-for-linux/

    No decoder is hw accelerated. But not only decoding would be nice to have, also encoding. Currently mpeg4 encoding is fast enough (it could always be faster, however) but h264 is really slow.

    Maybe somebody will implement it for CUDA (NVIDIA 8 series); that would be a possibility.

    well, i have tried CoreAVC but it doesn't work for me --
    mplayer crashes as soon as you use it
