
Thread: Ryan Gordon Is Fed Up, FatELF Is Likely Dead


  1. #1
    Join Date
    Jan 2007
    Posts
    14,787

    Default Ryan Gordon Is Fed Up, FatELF Is Likely Dead

    Phoronix: Ryan Gordon Is Fed Up, FatELF Is Likely Dead

    The news just keeps rolling in today... Besides VIA trying again to submit their kernel DRM, learning about KDE 4.4 features, announcing AMD's UVD2-based XvBA finally does something on Linux, the release of the Linux 2.6.32-rc6 kernel, and GNOME 3.0 likely being delayed to next September, we also have news this evening from the well-known Linux game porter Ryan Gordon (a.k.a...

    http://www.phoronix.com/vr.php?view=NzY3Mg

  2. #2
    Join Date
    Aug 2008
    Posts
    233

    Default

    FatELF is a bad idea for everyone, and it will only become a worse idea in the future.

  3. #3
    Join Date
    Jan 2009
    Posts
    206

    Default

    IMHO, kernel devs are usually big boys working for good money; they don't really give a crap about you or anybody else.

  4. #4
    Join Date
    Jun 2006
    Posts
    3,046

    Default

    Quote Originally Posted by hax0r View Post
    IMHO, kernel devs are usually big boys working for good money; they don't really give a crap about you or anybody else.
    Not QUITE. However, they don't abide by what they deem inefficient or inelegant, or what provides a solution to a problem that's nonexistent or nearly so. The big stumbling block with FatELF is that it really didn't solve the problem we face the way Ryan believed it would.

    The main reason you don't have 64-bit binaries is not a packaging reason (though that doesn't help...)- it's that you have to build the binaries for the differing architectures, and FatELF doesn't fix that problem.

    It doesn't resolve issues within your code for endianness. It doesn't resolve issues within your code for byte alignment. It doesn't resolve the issues from poorly written code that presumes a void pointer is equivalent to int- and you have issues with that going to a 64-bit world.
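    To make the pointer-size point concrete, here is a minimal hypothetical sketch (not from any real project) of the kind of code that builds cleanly for 32-bit but silently truncates addresses once you rebuild it for 64-bit:

    Code:
    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical 32-bit-era code: stashes a pointer in an int.
     * On ILP32 this "works" because int and pointers are both 32 bits wide;
     * on LP64 (x86-64 and most 64-bit Linux ABIs) pointers are 64 bits,
     * so the cast silently drops the high half of the address. */
    static int broken_handle(void *p)
    {
        return (int)(intptr_t)p;                /* truncates on 64-bit targets */
    }

    int main(void)
    {
        int x = 42;
        void *p = &x;

        int handle = broken_handle(p);          /* may lose the high 32 bits */
        void *back = (void *)(intptr_t)handle;  /* not guaranteed to equal p */

        printf("sizeof(int)=%zu sizeof(void*)=%zu round-trip ok=%d\n",
               sizeof(int), sizeof(void *), back == p);
        return 0;
    }

    No container format helps with code like that; it has to be found and fixed before a 64-bit build is even worth shipping.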

    All FatELF did was allow you to make universal binaries...after you resolve all the other problems. The ones that actually stymie most commercial vendors from doing anything in something other than X86-32.

    And, knowing what I know about the kernel crowd and of Ulrich...heh...I saw this little turn of events coming from a mile away. Sorry to see him disillusioned, but it happens...Lord only knows, I've been there a time or two for similar reasons myself.

    This is not to say that it's not a nice idea, mind...it's just that the resistance is going to be high on it and that it doesn't resolve a few crucial issues that need to be sorted out "better" before solving the particular problem he tried to solve.
    Last edited by Svartalf; 11-03-2009 at 10:53 PM.

  5. #5
    Join Date
    Jun 2009
    Location
    Elsewhere
    Posts
    89

    Default

    Quote Originally Posted by Svartalf View Post
    The main reason you don't have 64-bit binaries is not a packaging reason (though that doesn't help...)- it's that you have to build the binaries for the differing architectures, and FatELF doesn't fix that problem.

    It doesn't resolve issues within your code for endianness. It doesn't resolve issues within your code for byte alignment. It doesn't resolve the issues from poorly written code that presumes a void pointer is equivalent to int- and you have issues with that going to a 64-bit world.

    All FatELF did was allow you to make universal binaries...after you resolve all the other problems. The ones that actually stymie most commercial vendors from doing anything in something other than X86-32.

    And, knowing what I know about the kernel crowd and of Ulrich...heh...I saw this little turn of events coming from a mile away. Sorry to see him disillusioned, but it happens...Lord only knows, I've been there a time or two for similar reasons myself.

    This is not to say that it's not a nice idea, mind...it's just that the resistance is going to be high on it and that it doesn't resolve a few crucial issues that need to be sorted out "better" before solving the particular problem he tried to solve.
    Agreed. I'm sorry as well that Mr Gordon ended up so disappointed, but to me FatELF was doomed to fail.

    Quote Originally Posted by deanjo View Post
    Not dodging the questions at all. Had old games such as the Loki games used such an approach, I would still be able to use those games in a modern distro, for example. Right now you try to run some of those old games and they *cough* *puke* and fart trying to find matching libs. In a "universal binary" approach this wouldn't be the case.

    [...]

    Ryan is just trying to make a solution that would allow commercial developers to develop for Linux without having to worry about each distro's "nuances" in order to get their product to run on each person's flavor of Linux.
    ... without playing by the GNU/Linux rules, which are that there is a place for everything and everything has its place. I see FatELF as a big waste of machine power (and storage space), as it would at best target only the lowest common denominator found on every machine without making the most of the CPU's power. There are too many differences between arches to care only about what they have in common -- it's like running 8088 code on a Core i7 (or renting a Boeing 747 to ship a single box of pills)... and I'm not even talking about non-Intel arches!

    The universal binary concept itself ignores the reason why applications *must* be compiled for their target CPU. That problem won't be solved this way.


    Quote Originally Posted by deanjo View Post
    This is a sore point that does hold Linux back from being mainstream (as is, like it or not, the lack of commercial apps).
    No. Either you haven't understood what GNU/Linux is or you're trolling.
    Last edited by VinzC; 11-05-2009 at 06:45 AM.

  6. #6
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,584

    Default

    Quote Originally Posted by VinzC View Post

    No. Either you haven't understood what GNU/Linux is or you're trolling.
    Then count Linus in that crowd too, as he also admits that Linux's past, present and future success has always been in "shades of gray". If you don't think that the lack of commercial apps on Linux is a MAJOR handicap that prevents widespread adoption, then you're simply in denial or wearing horse blinders for a narrow, tunnel-vision view. Why do you think that, when other kernel devs tried to force blob solutions to be locked out, those attempts were promptly rejected?

    On Wed, 13 Dec 2006, Greg KH wrote:
    >
    > Numerous kernel developers feel that loading non-GPL drivers into the
    > kernel violates the license of the kernel and their copyright. Because
    > of this, a one year notice for everyone to address any non-GPL
    > compatible modules has been set.

    Btw, I really think this is shortsighted.

    It will only result in _exactly_ the crap we were just trying to avoid,
    namely stupid "shell game" drivers that don't actually help anything at
    all, and move code into user space instead.

    What was the point again?

    Was the point to alienate people by showing how we're less about the
    technology than about licenses?

    Was the point to show that we think we can extend our reach past derived
    work boundaries by just saying so?

    The silly thing is, the people who tend to push most for this are the
    exact SAME people who say that the RIAA etc should not be able to tell
    people what to do with the music copyrights that they own, and that the
    DMCA is bad because it puts technical limits over the rights expressly
    granted by copyright law.

    Doesn't anybody else see that as being hypocritical?

    So it's ok when we do it, but bad when other people do it? Somehow I'm not
    surprised, but I still think it's sad how you guys are showing a marked
    two-facedness about this.

    The fact is, the reason I don't think we should force the issue is very
    simple: copyright law is simply _better_off_ when you honor the admittedly
    gray issue of "derived work". It's gray. It's not black-and-white. But
    being gray is _good_. Putting artificial black-and-white technical
    counter-measures is actually bad. It's bad when the RIAA does it, it's bad
    when anybody else does it.

    If a module arguably isn't a derived work, we simply shouldn't try to say
    that its authors have to conform to our worldview.

    We should make decisions on TECHNICAL MERIT. And this one is clearly being
    pushed on anything but.

    I happen to believe that there shouldn't be technical measures that keep
    me from watching my DVD or listening to my music on whatever device I damn
    well please. Fair use, man. But it should go the other way too: we should
    not try to assert _our_ copyright rules on other people's code that wasn't
    derived from ours, or assert _our_ technical measures that keep people
    from combining things their way.

    If people take our code, they'd better behave according to our rules. But
    we shouldn't have to behave according to the RIAA rules just because we
    _listen_ to their music. Similarly, nobody should be forced to behave
    according to our rules just because they _use_ our system.

    There's a big difference between "copy" and "use". It's exactly the same
    issue whether it's music or code. You can't re-distribute other people's
    music (because it's _their_ copyright), but they shouldn't put limits on
    how you personally _use_ it (because it's _your_ life).

    Same goes for code. Copyright is about _distribution_, not about use. We
    shouldn't limit how people use the code.

    Oh, well. I realize nobody is likely going to listen to me, and everybody
    has their opinion set in stone.

    That said, I'm going to suggest that you people talk to your COMPANY
    LAWYERS on this, and I'm personally not going to merge that particular
    code unless you can convince the people you work for to merge it first.

    In other words, you guys know my stance. I'll not fight the combined
    opinion of other kernel developers, but I sure as hell won't be the first
    to merge this, and I sure as hell won't have _my_ tree be the one that
    causes this to happen.

    So go get it merged in the Ubuntu, (Open)SuSE and RHEL and Fedora trees
    first. This is not something where we use my tree as a way to get it to
    other trees. This is something where the push had better come from the
    other direction.

    Because I think it's stupid. So use somebody else than me to push your
    political agendas, please.

    Linus

  7. #7
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,584

    Default

    Quote Originally Posted by VinzC View Post
    ... without playing by the GNU/Linux rules, which are that there is a place for everything and everything has its place. I see FatELF as a big waste of machine power (and storage space), as it would at best target only the lowest common denominator found on every machine without making the most of the CPU's power. There are too many differences between arches to care only about what they have in common -- it's like running 8088 code on a Core i7 (or renting a Boeing 747 to ship a single box of pills)... and I'm not even talking about non-Intel arches!

    The universal binary concept itself ignores the reason why applications *must* be compiled for their target CPU. That problem won't be solved this way.

    What? You are kidding, right? Do you think every commercial app out there (or even open-source one) is compiled specifically for one processor? Hell no. You have completely lost your grasp of the situation. Optimizing for a processor, I am sorry, goes a lot further than a simple recompile of the code. Hell, if you wanted a compiler to take full advantage and produce the tightest code for your i7, you wouldn't be using GCC, period, but Intel's own compiler suite.

    Distros compile apps to support the lowest common denominator for an arch. Some apps have runtime detection of CPU capabilities at launch and will take advantage of extra instruction sets, and improvements might be seen depending on whether the code can actually exploit those instruction sets. For the vast majority of executables no performance gain is seen.

    Here is a newsflash for you: most Linux users don't compile their OS from scratch for their system, and nowhere were we talking about compiling from scratch; we are, however, talking about pre-compiled solutions. Your argument on this is grasping at straws at best, and I can only assume you have little or no coding, packaging and compiling experience to make such a misinformed argument.
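    For what it's worth, a minimal sketch of what that runtime CPU-capability detection can look like using GCC's x86 builtins; the two work_* functions are hypothetical stand-ins for a generic build and a tuned build of the same routine, not code from any real project:

    Code:
    #include <stdio.h>

    /* Placeholder code paths; a real app would ship a baseline build and a
     * hand-tuned or auto-vectorized SSE4.2/AVX build of the same routine. */
    static void work_generic(void) { puts("generic x86 code path"); }
    static void work_sse42(void)   { puts("SSE4.2 code path"); }

    int main(void)
    {
        /* GCC >= 4.8: query the running CPU once, then dispatch. */
        __builtin_cpu_init();
        if (__builtin_cpu_supports("sse4.2"))
            work_sse42();
        else
            work_generic();
        return 0;
    }

    Note that dispatching like this only changes which code path runs; it doesn't make the baseline path any faster, which is exactly why most executables see no gain from per-CPU tuning in the first place.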
    Last edited by deanjo; 11-05-2009 at 10:13 AM.

  8. #8
    Join Date
    Jan 2009
    Location
    UK
    Posts
    331

    Default

    Who else here is sick of the constant UT3 remarks in every single icculus article?

  9. #9
    Join Date
    Sep 2008
    Location
    Netherlands
    Posts
    510

    Default

    I'm really sorry to see him get burnt like that, but that was to be expected. This is a major undertaking. He created something that changes the fundamentals of Linux, spanning many projects. It's very difficult to get consensus for that. It really has to be absolutely clear that this is a necessity before it can take off.

    And I don't think it's that good an idea. It sounds very cool, a binary that works everywhere. So I like it in a gut-feeling way. But when you look at it objectively, when are you going to need this? Why not just put two binaries in the same package and install the correct one? Yes, there is a lack of a unified package manager, but that is being worked on.
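    For illustration only, a toy sketch of the "two binaries in one package" idea: a tiny launcher (hypothetical paths and names) that execs whichever build matches the running machine, which gets you most of the FatELF effect with no kernel or toolchain changes:

    Code:
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/utsname.h>

    /* Hypothetical layout: the package ships ./bin/game.x86_64 and
     * ./bin/game.i686 side by side, plus this launcher as "game". */
    int main(int argc, char **argv)
    {
        (void)argc;  /* unused */

        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }

        const char *path = (strcmp(u.machine, "x86_64") == 0)
                         ? "./bin/game.x86_64"
                         : "./bin/game.i686";

        execv(path, argv);   /* only returns on failure */
        perror(path);
        return 1;
    }

    A package manager could just as easily pick the right file at install time instead; either way the hard part is still producing and testing both builds.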

    I think Ryan underestimated the difficulty of getting other people to buy into an idea like this. Look at how many people have thought about solutions to the package management problem. You've got RPM in the LSB, you've got Autopackage, you've got PackageKit, you've got PackageAPI... will one of those be the final solution? Maybe we'll know in 10 years.

  10. #10
    Join Date
    Jun 2007
    Location
    The intarwebs
    Posts
    385

    Default hmm

    .deb already supports multiple archs in a single file, doesn't it?
