
Thread: The Challenge In Delivering Open-Source GPU Drivers


  1. #1
    Join Date
    Jan 2007
    Posts
    13,422

    Default The Challenge In Delivering Open-Source GPU Drivers

    Phoronix: The Challenge In Delivering Open-Source GPU Drivers

    As illustrated today by the release of Intel's "Sandy Bridge" CPUs, there is a new desire among Linux users: open-source drivers "out of the box" at launch. Over the years the expectations of Linux users have gone from simply wanting Linux drivers for their hardware, to wanting open-source Linux drivers (read: no binary blobs), to now wanting open-source drivers in the distribution of their choice at the time the hardware first ships. This is a great problem to be experiencing: since Phoronix started seven years ago, the Linux hardware experience has improved a great deal, to the point where it's no longer a question of whether there will be Linux support but when. Some hardware vendors, such as Intel, are now working towards this goal of same-day open-source Linux support -- and in some cases achieving it -- but for open-source graphics drivers it's a particularly tall hurdle to jump...

    http://www.phoronix.com/vr.php?view=ODk3MQ

  2. #2
    Join Date
    May 2010
    Posts
    190

    Default

    Charlie seems technically incompetent.

  3. #3
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,532

    Default

    Quote Originally Posted by snuwoods View Post
    Charlie seems technically incompetent.
    Seems? Thought that was taken for granted.

  4. #4
    Join Date
    May 2010
    Posts
    190

    Default

    Does anyone take him seriously?

  5. #5
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    If the Linux desktop is to ever go truly mainstream, there will need to be "out of the box" support in modern Linux distributions or at least a sane way to update the driver stack without the risk of hosing your entire Linux installation or forcing the distribution vendors to make choices that could have negative consequences on other users.
    Out of the box support is highly unlikely. You have to lead the release of the hardware by an absolute minimum of 6 months. More likely closer to a year. And that's of course ignoring the LTS distros that don't update their kernels or drivers for years at a time.

    A stable API would be much more useful. A stable ABI isn't so necessary, at least for FOSS drivers. Something like DKMS is more than sufficient: recompile the driver if the kernel is updated. Not super ideal, but it works. If installation CDs have a compiler on them and the system has enough temporary memory, you can even get a driver installed via USB stick or somesuch when your SCSI/RAID/SATA controller is missing the driver it needs to install. A stable API is sufficient to ensure that the user can grab a driver and install it on recent Linux distributions.
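    For readers unfamiliar with DKMS: the idea is a small per-module config file that tells the framework how to rebuild the driver whenever a new kernel is installed. A minimal sketch of such a `dkms.conf` (the module name, version, and install path here are hypothetical, not any real driver's):

```shell
# /usr/src/examplegpu-1.0/dkms.conf -- hypothetical out-of-tree GPU driver
PACKAGE_NAME="examplegpu"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="examplegpu"
DEST_MODULE_LOCATION[0]="/kernel/drivers/gpu/drm"
AUTOINSTALL="yes"   # rebuild automatically whenever a new kernel is installed
```

    After placing the source under /usr/src/examplegpu-1.0, something like `dkms add -m examplegpu -v 1.0` followed by `dkms install -m examplegpu -v 1.0` registers and builds it against the running kernel; with AUTOINSTALL set, kernel upgrades trigger a rebuild without user intervention.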

    A stable ABI would be the most user-friendly, but is literally never ever going to happen. The Linux developers actively despise making users' lives easy because it requires them to spend more than 2 seconds thinking about their kernel interfaces.

  6. #6
    Join Date
    Aug 2010
    Posts
    24

    Default Download drivers from the internet

    Probably easier to download pre-built backported drivers from the internet that didn't come with the CD.

  7. #7
    Join Date
    May 2007
    Location
    Nurnberg.
    Posts
    302

    Default

    Quote Originally Posted by elanthis View Post
    Out of the box support is highly unlikely. You have to lead the release of the hardware by an absolute minimum of 6 months. More likely closer to a year. And that's of course ignoring the LTS distros that don't update their kernels or drivers for years at a time.

    A stable API would be much more useful. A stable ABI isn't so necessary, at least for FOSS drivers. Something like DKMS is more than sufficient: recompile the driver if the kernel is updated. Not super ideal, but it works. If installation CDs have a compiler on them and the system has enough temporary memory, you can even get a driver installed via USB stick or somesuch when your SCSI/RAID/SATA controller is missing the driver it needs to install. A stable API is sufficient to ensure that the user can grab a driver and install it on recent Linux distributions.

    A stable ABI would be the most user-friendly, but is literally never ever going to happen. The Linux developers actively despise making users' lives easy because it requires them to spend more than 2 seconds thinking about their kernel interfaces.
    First off, this is about the most sensible post in this whole thread. You put your finger on the issue correctly with the last sentence.

    That being said, the API does not need to be stable; it should evolve naturally. But it will automatically and naturally become more stable and future-proof if developers do spend more than 2 seconds thinking about their interfaces. Everything else follows from there, naturally and almost painlessly (it is certainly less painful than the current situation).

    Everybody could win if only the supposedly more clued-in people would spend some time using that bigger brain they are supposed to have. Or, put differently: by continuing to refuse to work like this, those refusing only prove that they do not deserve the status they like to claim for themselves.

  8. #8
    jbarnes — PCI Maintainer for Linux, X.Org developer, DRM developer, Mesa developer
    Join Date
    Feb 2010
    Posts
    21

    Default

    Quote Originally Posted by elanthis View Post
    Out of the box support is highly unlikely. You have to lead the release of the hardware by an absolute minimum of 6 months. More likely closer to a year. And that's of course ignoring the LTS distros that don't update their kernels or drivers for years at a time.
    No, this is our job, and we blew it for Sandy Bridge. We're supposed to do development well ahead of product release, and make sure distros include the necessary code to get things working (we have separate teams to do development and aid distros in backporting, though most of them can handle it by themselves these days).

    I could give you all sorts of explanations as to why this is (Sandy Bridge is a big architectural change, we made some mistakes in defining our development milestones, and we didn't work hard enough to get our changes backported), but really there's no excuse. Fortunately we've learned from this and are giving ourselves more time and planning better for Sandy Bridge's successor, Ivy Bridge.

    As for a stable ABI, yes it would definitely help situations like this and make lives easier for distros, device manufacturers, and probably users. But it would make life harder for developers, and as we all know, open source development is driven by developers, and we're a whiny and lazy bunch. (Yes, this is a flippant remark, don't take it too seriously since I omit much of the complexity behind the "life harder for developers" part that has big implications for users, distros, and device manufacturers; it's a complicated issue.)

  9. #9
    Join Date
    Jan 2007
    Posts
    459

    Default

    Quote Originally Posted by jbarnes View Post
    No, this is our job, and we blew it for Sandy Bridge. We're supposed to do development well ahead of product release, and make sure distros include the necessary code to get things working (we have separate teams to do development and aid distros in backporting, though most of them can handle it by themselves these days).

    I could give you all sorts of explanations as to why this is (Sandy Bridge is a big architectural change, we made some mistakes in defining our development milestones, and we didn't work hard enough to get our changes backported), but really there's no excuse. Fortunately we've learned from this and are giving ourselves more time and planning better for Sandy Bridge's successor, Ivy Bridge.

    As for a stable ABI, yes it would definitely help situations like this and make lives easier for distros, device manufacturers, and probably users. But it would make life harder for developers, and as we all know, open source development is driven by developers, and we're a whiny and lazy bunch. (Yes, this is a flippant remark, don't take it too seriously since I omit much of the complexity behind the "life harder for developers" part that has big implications for users, distros, and device manufacturers; it's a complicated issue.)
    Never mind, Jesse, these things happen I guess.

    If you or other Intel devs ever get bored one day or need to clear your Linux head for a while, you might like to take a look at the AROS OSS code http://aros.sourceforge.net/ and port some Intel-related gfx to their HIDD drivers etc. You did own and have some fun writing code for the Amiga OS once, right?

  10. #10
    Join Date
    May 2011
    Posts
    15

    Default Not just Sandy Bridge

    Quote Originally Posted by jbarnes View Post
    No, this is our job, and we blew it for Sandy Bridge.
    Not just Sandy Bridge. This just brought ongoing issues to a head.

    I've been frustrated enough to recommend to our procurement teams that Intel-based hardware be avoided entirely until things are sorted out.

    Your own Intel Desktop Boards have been mildly unstable for years under Linux. It's relatively easy to provoke them to segfault X sessions and in some cases completely lock up the system (especially the older 8xx and 9xx series boards such as the DQ965GM, but I can enumerate several other examples).

    I work at a major UK university and we can't afford this kind of problem. Linux gets the blame when it's clearly a hardware/firmware fault, and that plays right into the hands of MS's FUD generators.

    Having your devs state in IRC sessions that they don't care about an X stability problem triggered by Firefox, because the product is unsupported, is not encouraging for a board less than a year old. In all likelihood, if you go back and address the issues with your older hardware, you'll cover a lot of problems with the newer ones too.

    What's very clear is that a culture change is needed at Intel. It's intolerable that products are being certified as working with Linux (and with specific distros such as Red Hat) when they clearly don't work properly in the real world. Perhaps less reliance on software development vehicles and some more hands-on experience with real-world hardware is a good starting point.

    I'm trying hard not to flame, but Intel really needs a wake-up call.
