
Thread: The Issues With The Linux Kernel DRM, Continued

  1. #1
    Join Date
    Jan 2007
    Posts
    14,808

    Default The Issues With The Linux Kernel DRM, Continued

    Phoronix: The Issues With The Linux Kernel DRM, Continued

    Yesterday Linus voiced his anger towards DRM, once again. Not the kind of DRM that is commonly criticized, Digital Rights Management, but rather the Linux kernel's Direct Rendering Manager. With the Linux 2.6.39 kernel, Linus has once again been less than happy with the pull request for this sub-system, which handles the open-source graphics drivers. Changes are needed...

    http://www.phoronix.com/vr.php?view=OTI0OQ

  2. #2

    Default

    IIRC a radeonhd developer proposed merging the components of the open-source drivers some time ago, but nobody cared about it...

  3. #3
    Join Date
    Oct 2008
    Posts
    31

    Default

    I feel sorry for those GPU developers. That stuff is a nightmare.

  4. #4
    Join Date
    Jun 2010
    Posts
    46

    Default

    It seems like it would have been better for the DRM developers to stay outside of the kernel, at least until DRM really became stable.

  5. #5
    Join Date
    Mar 2009
    Posts
    141

    Default

    Jerome's reply essentially mirrors my commentary from last time around - this is a symptom of forcing a one-size-fits-all development cycle onto a group of projects with fundamentally different requirements. Something similar is playing out in the web browser wars. Developing a category of software in which the best available implementations cover only a fraction of the spec isn't even the same sport as maintaining a mostly complete implementation.

    The vanilla kernel isn't exactly a high-assurance system built with formal methods to standards of provable correctness. Small pieces of it might be - by downstream users putting Linux on their ICBM guidance systems - but not every last component of every driver. They need to work together to find an approach with variable flexibility for variable requirements.

  6. #6
    Join Date
    Oct 2009
    Location
    .ca
    Posts
    403

    Default

    Don't we (and the few DRM developers) already have enough problems/work with the graphics stack that we don't really need Linus being a diva permanently bitching about code being two weeks late?

  7. #7
    Join Date
    Dec 2008
    Posts
    35

    Default

    The argument about DRM stuff taking too long to reach the end user seems kind of bogus. I mean, nothing is stopping them from releasing their development code before it lands in Linus' tree, if some distribution wants it sooner. Distribute an out-of-tree driver with the current fixes to the distros that want it, and then, at the next merge window, merge it upstream.

  8. #8
    Join Date
    Jan 2009
    Posts
    1,678

    Default

    I hope that this discussion will lead the devs to a solution for the development model. The manpower issue will probably remain.

  9. #9
    Join Date
    Dec 2008
    Posts
    315

    Default

    Don't care. I'd rather the stuff half-work than not work at all. I ended up with one of my NVIDIA GPUs messed up (the G98-based 8400 GS) because the driver went in, tried to do hardware acceleration, and wasn't using the GPU right. When I switched to a completely untested, unready Radeon 5550, it worked again. It uses the software rasterizer, which is a step back, but at least I don't have to reboot, switch back to the onboard GPU, reboot again, unplug the monitor from the add-in card to the onboard GPU, power back on, and get back into Linux.
    I think I ran three X servers and three different open-source video drivers during the Fedora 10 cycle. One of the X servers screwed up so badly I had to switch back to text-mode runlevel 3 and do a yum downgrade on it. This was about three months after the release of Fedora 10. Now they won't do that any more - you can't get them to try more than one or two new video drivers or X servers, and fixes only land during alpha and beta. The new guy running the Fedora project sucks compared to the guy they dumped last year.

    I'd play more with Arch Linux if I weren't enjoying my little 5550 maxing out graphics on games. I love that frikkin card. I never even considered HIS until I tried this card.
    Quit trying to play it safe. Either shovel us the untested crap or put us all on safe, boring, no-drama stable stuff. I got really disenchanted during the Fedora 13 cycle when they stopped shoving us two kernels and three X releases. That leaves kernels 2.6.36 and .37 without anybody testing anything on them, because they came out mid-release. So of course .38 and .39 are going to be full of untested crap. At least Ubuntu is going to work over .37, I think.

  10. #10
    Join Date
    Sep 2007
    Location
    Edmonton, Alberta, Canada
    Posts
    119

    Default

    I think Linus may be getting a little too far away from his hacker roots here. So what if you merge fresh code into mainline? I don't think mainline is intended to be stable in the same sense as something Red Hat would push to RHEL customers anyhow. It's going to be even harder to attract developers if you run things formally, like a company - I think a lot of volunteer coders like hacking on Linux in the evenings precisely so they can push code out the door and escape that kind of repressive crap from their day job.
