
Thread: Third-party software installation for any Linux distribution

  1. #1
    Join Date
    Nov 2007
    Posts
    1,024

    Default Third-party software installation for any Linux distribution

    Quote Originally Posted by Thetargos
    @elanthis

    As a Linux user for the past 15 years, I'm very interested in seeing more game projects come to fruition on the platform. I do agree, though, that there should be a better, much better, way to get third-party software installed on Linux.

    We should probably start a new thread on the topic alone. Even though PackageKit and many different distributions are working to give this project "a common face to installing software" (which is a good idea), there is still one pesky detail: how to get third-party software installed... There should be an API/GUI for PackageKit to actually allow software distributed in .run/.bin encapsulated shell scripts to register with the package manager of the host OS, much in the same way the Loki installer worked (with varying degrees of success), only this time getting it right: binding the binary executable packages to PolicyKit and PackageKit so that these two would allow administrative privileges and registration with the main software database... So you could also remove the software with the standard tools (uninstalling has never been much of a problem, though). Of the examples you gave, the prime example was precisely NWN, with a massive tarball and cryptic instructions which, sure enough, seasoned users could easily follow, but which caused endless frustrating encounters with Linux for many less experienced users. Food for thought.
    I'm opposed to .bin packages, because they alone add all kinds of dependencies on the installer before the installer even gets involved. Shipping a .exe on Windows works only because Windows is such a fiercely static target, and those executables usually have just about every dependency they could need statically compiled into them to make sure the installer can run.

    In fact, even on Windows I'm irritated that people keep shipping custom .exe installers instead of using MSI. The primary reason for that is really just a set of limitations in the MSI format, which I think can (and should) be fixed in MSI and any kind of Linux format.

    So basically, here's what I'm thinking. We have several core concepts used for an application installer.

    (1) The required platform(s). These are essentially the dependencies of the application. However, instead of specifying dependencies on each individual library, it's just way way way easier for third-party developers to work with platforms. In some cases, these are already well defined. We have the LSB, we have GNOME, we have KDE, and some others. In other cases, we may need to define some new ones. For instance, an "SDL platform" might make sense to include all of the common SDL libraries, including SDL_net, SDL_image, SDL_mixer, and SDL_ttf. Yes, a big bundle is less flexible, but flexibility doesn't always help, especially when it just adds a huge pain in the ass that nobody wants to deal with. In terms of implementation, these can literally just be native packages with dependencies and a "Provides: installer-platform-foo-MAJOR.MINOR" (or the equivalent for non-RPM systems).
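    In implementation terms, the platform-to-native-package mapping could be as small as a lookup table. Here is a minimal Python sketch of the idea; the platform IDs and package names are purely illustrative, not a real specification:

```python
# Hypothetical sketch: a "platform" is just a named bundle of native
# dependencies. The platform IDs and package names below are invented
# for illustration.

PLATFORMS = {
    # platform-id -> {distro family -> native packages that satisfy it}
    "installer-platform-sdl-1.2": {
        "rpm": ["SDL", "SDL_net", "SDL_image", "SDL_mixer", "SDL_ttf"],
        "deb": ["libsdl1.2debian", "libsdl-net1.2", "libsdl-image1.2",
                "libsdl-mixer1.2", "libsdl-ttf2.0-0"],
    },
}

def native_requirements(platform_id, distro_family):
    """Translate one abstract platform dependency into the native
    package names the distro's metapackage would Require."""
    try:
        return PLATFORMS[platform_id][distro_family]
    except KeyError:
        raise ValueError("unknown platform %r for family %r"
                         % (platform_id, distro_family))
```

    Each distro would ship these metapackages once; third-party applications then only ever name the platform ID.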

    (2) The applications to be installed. These would be things like OpenOffice.org, Firefox, or Doom 3. An application is composed of components, which are explained further below. The actual application, in terms of the installer framework, is just a set of metadata, including the name and description, license (which may be a regular license or a contract-like EULA), an update URL, icons, any images or other data to display while the installer is running, the list of components, and the preferred location of those components. In terms of what we have today, these are kind of a merger of a repository, Fedora's "comps", and meta-packages / package groups. There is no file specifically for an application; they are always delivered as part of a bundle. Application metadata may be signed.

    (3) Components of applications. These can be both mandatory components and optional components. For example, OpenOffice.org might have one component for each application in the office suite, plus a "common" component that holds all of the shared libraries and such that the other components use. Components may be hidden, which means they aren't even shown to the user and are just used for organizational purposes. Components are roughly equivalent to what an RPM or DPKG is usually used for today. They contain the actual payload data to be installed. The installer may register components with the native package management system, or it may use its own separate installed-application database, depending on how amenable the native system is to the first approach. Components can either be their own distinct files or they can be part of a bundle. Components may be signed.

    (4) Bundles of components and an application. The application metadata is stored in a bundle. A bundle would be the main file type that a user would click/download to install an application. The bundle can optionally contain components. The application can also contain URLs or media references for components. When a bundle is opened, the installer will first look for any referenced component in the bundle and, if not found, it will then look for it using the URL/media-ref specified in the application metadata. The purpose of a bundle is to make it easy for distributors to build online installers, DVD-based installers, or stand-alone installers from the source installer data. Stand-alone tools should also allow users to "collect" any missing components from their source URLs/media-refs into a single bundle, allowing a sysadmin, for example, to turn a web installer into an offline installer for distribution to a network of machines he maintains. Bundles are not signed, as there is nothing security-sensitive about them; they are essentially just a tarball/zip of application metadata and components, which are individually signed.
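    The bundle-first, URL/media-fallback lookup described above can be sketched in a few lines. All type and field names here are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    url: str = ""        # fallback download source from app metadata
    media_ref: str = ""  # fallback media source, e.g. "FooBar DVD #2"

@dataclass
class Bundle:
    contained: set = field(default_factory=set)  # components shipped inside

def resolve_source(bundle, comp):
    """First look inside the bundle itself; fall back to the URL or
    media reference given in the application metadata."""
    if comp.name in bundle.contained:
        return "bundle:" + comp.name
    if comp.url:
        return comp.url
    if comp.media_ref:
        return "media:" + comp.media_ref
    raise LookupError("no source for component %r" % comp.name)
```

    A "collect" tool would simply run this resolver over every referenced component, fetch the ones not already in the bundle, and repack.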

    (5) The actual installer application and related tools. It opens bundles, finds the application metadata, checks for the components, and allows the user to install the application. It will display the license and, if a EULA/contract is present, require acceptance of the text. It will allow the user to select the desired components. It should show whether any component requires internet access to download (not in the bundle, has a URL to access), is on a media device and which media device (e.g., FooBar DVD #2), and how much space it will take upon install (which must be stored in the application metadata). Once the user selects the components, it begins installing them. It will either register the components with the native package system or register them in a custom package system if necessary. It will also register a meta-package for the core application, which will include an update repository for the application; this will almost certainly not be the same as the native package system's repository format (because even distros using RPM may be using any number of different repository/updating tools). There should be a CLI version and a silent GUI mode, with tools to inspect the application's components and metadata, view the license, and manage bundles. The GUI installer should allow passing a flag to just show the progress, auto-accept the license, and use the default component set or a user-specified set via command-line flags (this is useful for systems administrators who want to deploy applications across a network). Applications and components will have their signatures checked. We could support GPG, but for third-party software, the non-distributed SSL-style signing is just way more convenient and practical. It will also handle updating an application from a bundle (in case the user downloads a new version), or allowing the user to reconfigure an application by adding or removing optional components.
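    The command-line surface for the silent/scripted mode might look like the following sketch; every flag name here is an assumption for illustration, not a defined interface:

```python
import argparse

def build_parser():
    """Hypothetical CLI for the installer's unattended mode."""
    p = argparse.ArgumentParser(prog="installer")
    p.add_argument("bundle", help="path to the bundle file to install")
    p.add_argument("--silent", action="store_true",
                   help="show only progress, no interaction")
    p.add_argument("--accept-license", action="store_true",
                   help="auto-accept the license (unattended deployment)")
    p.add_argument("--components", default=None,
                   help="comma-separated component set "
                        "(omit to use the default set)")
    return p
```

    A sysadmin deploying across a network could then script something like `installer game.install --silent --accept-license --components base,maps`.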

    (6) The updater application. This is ideally a PackageKit backend; I'm not sure if PackageKit supports multiple backends at once, but if not, that would need to be added. This is another set of metadata that basically gives a URL that a specific application is updated from, which is checked periodically for updates. If an update is available for an application, only that single application should be shown to the user in PackageKit; it does not make sense to update only some components (at least, we won't ask software developers to guarantee that mixing and matching component versions will actually work). The updater will download any component updates or any new mandatory components in the application. This should absolutely support delta updates, because it's not at all uncommon for games to have many GB of data and for a patch to affect only a few tens of megabytes of that data.
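    One simple way to get delta updates is fixed-size block hashing: the patch only ships the blocks whose digests changed between versions. A rough sketch (the block size and function names are arbitrary choices, not part of any spec):

```python
import hashlib

CHUNK = 1 << 20  # 1 MiB blocks; size chosen arbitrarily for illustration

def chunk_digests(data):
    """Hash fixed-size blocks of a file's contents."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_blocks(old, new):
    """Indices of blocks in `new` that differ from `old` and therefore
    must be shipped in the patch. Blocks past the end of the old file
    always count as changed."""
    old_d, new_d = chunk_digests(old), chunk_digests(new)
    return [i for i, d in enumerate(new_d)
            if i >= len(old_d) or old_d[i] != d]
```

    For a multi-GB game where a patch touches a few tens of megabytes, only those blocks cross the wire; real tools would add compression and smarter diffing on top.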

    (7) The uninstaller. This may even just be another mode to the installer application. It's basically just a GUI to remove components from an installed app or remove all components and the application metadata entirely.

    (8) Packaging tools for compiling, signing, and publishing the installers and related files, including the updates repository.

    So, given all of that functionality, there are plenty of things I'm NOT interested in supporting, at least not in a version 1.0:

    - no support for dependencies between applications. when you start doing that, you're not really talking about applications anymore, you're talking about frameworks.

    - no product keys. enter those when the application is first run. there are a billion formats for product keys, and actually validating them requires a lot of custom logic in the installer. better to just make it part of the app. that does leave one question though: on a multi-user system, what's the preferred way for applications to only require entering the product key once? world-writable files are best avoided, after all. is that our problem?

    - no format/interface optimization for system utilities or core system components. although it'll be strictly possible to install something like Bash or whatnot with this, it's not quite the right user experience. use the native package format for that.

    - no fine-grained dependencies. already explained why.

    - no DRM. aside from the fact that I'd love for DRM to kiss my ass and die, actually implementing DRM is a Herculean effort even on a closed-source OS, and is probably damn near impossible on an open one. if an application wants to saddle its users with that crap, it'll have to figure it out on its own.

    Using a very rough and only marginally educated guess, I'd put all of the above at about three man-months of effort, from initial specification through an initial complete, usable system. Someone with a lot of experience with Linux GUI apps and with PackageKit may be able to do it in less; it really shouldn't take much more, though, even for a less experienced developer.

    The part I haven't figured out yet is how to get such a system packaged into the various distributions' repositories. It doesn't even need to be part of the default install, because modern distributions use PackageKit to look up a package to handle unknown file types; so if a user clicks on a ".install" file (that extension is purely for exposition), PackageKit will install the installer to install the application. (Let's just hope the user isn't trying to install some other installer, because then he'd need to install the installer to install the installer, and that's just crazy.) We have to illustrate the usefulness of the system to the community at large -- especially the Free Software folks.

    That largely means getting some large and popular Free/Open applications on board to publish and maintain files in our new format, and then showing people how much easier it is to centralize the packaging and let users update to the latest version on their own, without needing a small army of packagers duplicating the same packaging effort over and over for every distribution while being doomed to be constantly out of date with upstream.

    If the only use case we can present is installing proprietary games, we'll never get distributions like Fedora or Debian on board. Even if it is highly useful for Free/Open software, it's still going to be a huge uphill battle simply because of how it enables proprietary software, which a number of people in the Linux distribution communities are vehemently opposed to.

  2. #2
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Oh, and before I head out:

    We also need to figure out some of the lower-level policies, such as where in the filesystem these applications will be installed (/usr/local? /opt? /app?), how components will be restricted during installation (say, their files are always placed within their application's installation directory, except for the .desktop files, which are special-cased), and whether to support per-user installation.

    For per-user installation, I'm kind of okay with punting that, not allowing it, and worrying about it in a subsequent version. In theory it's easy, but it puts a small burden on the application developers to make sure their apps are neutral to installation location. That's a good thing in general, but Linux natively has absolutely no support for that the way Windows does. A whole separate (and much smaller) project might be to create a nice very-permissively-licensed library for applications and other libraries to determine their installation location and to generate loader scripts that set up LD_LIBRARY_PATH and such appropriately when starting an application. I believe the AutoPackage guys actually already developed something like this, which may be usable. It should be packaged up for distros and made a dependency of the installer itself when packaged up for the distros.
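    Generating a relocatable loader script of the kind described above is straightforward. A hypothetical sketch that writes a wrapper pointing LD_LIBRARY_PATH at the application's private lib/ directory (all paths and names are illustrative):

```python
import os
import stat

# Template for a wrapper script; the app's real binary is assumed to
# live in bin/ next to a private lib/ directory.
LAUNCHER = """#!/bin/sh
# Resolve the install location at run time so the app is relocatable.
HERE="$(dirname "$(readlink -f "$0")")"
LD_LIBRARY_PATH="$HERE/lib${{LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}}"
export LD_LIBRARY_PATH
exec "$HERE/bin/{binary}" "$@"
"""

def write_launcher(install_dir, binary):
    """Emit an executable wrapper script into the install directory."""
    path = os.path.join(install_dir, binary)
    with open(path, "w") as f:
        f.write(LAUNCHER.format(binary=binary))
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path
```

    Because the script computes its own location, the whole tree can be moved anywhere without touching the binaries; linking with an `$ORIGIN` rpath achieves much the same thing without a wrapper.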

    In the case that a distro will not natively package the installer framework, my gut instinct is to just say that distribution is user-hostile and make sure our downstream users (the third-party developers) know that they can't support that distribution unless they go out of their way. So long as we get Ubuntu, Fedora, SUSE, Mandrake, and Gentoo on board, I think things will be good. Most of the smaller distributions will probably come along as soon as there's demand, and the few holdouts will hopefully just be niche distributions that we can safely ignore.

  3. #3
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,034

    Default

    Quote Originally Posted by elanthis View Post
    That's a good thing in general, but Linux natively has absolutely no support for that the way Windows does. A whole separate (and much smaller) project might be to create a nice very-permissively-licensed library for applications and other libraries to determine their installation location and to generate loader scripts that set up LD_LIBRARY_PATH and such appropriately when starting an application.
    Check out physfs, and man ld -> origin.

  6. #6
    Join Date
    Jun 2006
    Posts
    3,046

    Default

    Quote Originally Posted by elanthis View Post
    Oh, and before I head out:

    We also need to figure out some of the lower-level policies, such as where in the filesystem these applications will be installed (/usr/local? /opt? /app?), how components will be restricted during installation (say, their files are always placed within their application's installation directory, except for the .desktop files, which are special-cased), and whether to support per-user installation.

    For per-user installation, I'm kind of okay with punting that, not allowing it, and worrying about it in a subsequent version. In theory it's easy, but it puts a small burden on the application developers to make sure their apps are neutral to installation location. That's a good thing in general, but Linux natively has absolutely no support for that the way Windows does. A whole separate (and much smaller) project might be to create a nice very-permissively-licensed library for applications and other libraries to determine their installation location and to generate loader scripts that set up LD_LIBRARY_PATH and such appropriately when starting an application. I believe the AutoPackage guys actually already developed something like this, which may be usable. It should be packaged up for distros and made a dependency of the installer itself when packaged up for the distros.

    In the case that a distro will not natively package the installer framework, my gut instinct is to just say that distribution is user-hostile and make sure our downstream users (the third-party developers) know that they can't support that distribution unless they go out of their way. So long as we get Ubuntu, Fedora, SUSE, Mandrake, and Gentoo on board, I think things will be good. Most of the smaller distributions will probably come along as soon as there's demand, and the few holdouts will hopefully just be niche distributions that we can safely ignore.
    In response to your rant...

    Packaging doesn't fix that. I think the Puppygames "oops" with .debs should show that NONE of this is handled unless the vendor handles it for you -- and .debs and .rpms do NOT change any of what you describe as a problem. (Do keep in mind that even if it DID fix it, you'd have to get each distribution's rules, etc. right, which means you'd have to make a package for each distribution. Not going to happen, just so you know...)

  7. #7
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,583

    Default

    A lot of the packaging issues could be handled fairly easily if they would set up a SUSE build server. It may not cover all distros, but it certainly covers the most commonly used ones: Fedora, openSUSE, *buntu, Debian, and Mandriva.

  8. #8
    Join Date
    Apr 2007
    Location
    Mexico City, Mexico
    Posts
    900

    Default

    It took me a while to go through all of your post... It does seem you have it well thought through; however, a few comments:

    Why create a separate entity (the installer) to handle the installation, with all that it would require (a special file format, fetching the app, etc.)?

    The way I see it, as a start it could be made much simpler, and maybe easier, if it were possible to:

    Have the actual installer be part of PackageKit; the .bin/.run/whatever would only call it with the "right" parameters and "credentials" to start the install process, and the host OS's PackageKit would take care of the GUI (GTK+, Qt, Tcl/Tk, ncurses, etc.) using whatever the host OS has installed as part of PackageKit and the environment (GNOME, KDE, LXDE, Xfce, etc.). For Internet-distributed packages, this would make a lot of sense.

    For media-distributed programs, it could also be made possible to have a single install.sh file with only the required arguments and the paths to fetch the files from the local media, and pass that to PackageKit to handle.

    There are a few tricky parts, as you have noted, like product validation and DRM, but I definitely think that spawning a different entity other than PackageKit would not be such a great idea in the long run -- like the case you made of the installer to install the installer for app XYZ. Also, by relying on PackageKit, developers could rely on host OS libraries more effectively and only supply a few mandatory ones, or even install multiple instances and have the application use one specifically (though this wouldn't be much different from Windows, where many apps install their own supplied VC runtime libraries, for example). It could be made to, for example, look first for the native packages and, if a required version is not found for the distro, supply the remote repository/local media where it may be found.

  9. #9
    Join Date
    Apr 2007
    Location
    Mexico City, Mexico
    Posts
    900

    Default

    Ohh... And just to round out my idea...

    The necessary metadata for "package registration" with the host OS package management system (dpkg/rpm) could simply be found in the archive, with all the requirements, description, components, manifest, etc., leaving PackageKit to figure out how to register the app in the host OS. Also, maybe have an option for local vs. system-wide installation: if local, a single user could install into their $HOME directory without superuser rights; if system-wide, bind to PolicyKit to get the necessary credentials.

    I know (and Svartalf would probably comment a bit further on this) that it would not always be possible to use "native" packages for some of the application's dependencies, as the app may require some very specific version and symbols, hence providing the necessary dependency themselves (see the case of libstdc++ in many games)... However, it would be nice to use native libs whenever possible and let the native tools handle dependency resolution (with the necessary metadata provided by the app installation framework, of course).
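    That native-first, bundled-fallback lookup could be as simple as the following sketch, which asks the system for its own copy of a library before falling back to one shipped with the app (the naming scheme and directory layout are invented for illustration):

```python
from ctypes.util import find_library

def use_native_or_bundled(libname, bundled_dir="lib/"):
    """Prefer the distro's copy of a library; fall back to the copy
    shipped alongside the application if the system has none."""
    if find_library(libname):          # e.g. resolves "z" -> libz.so.1
        return "native:" + libname
    return "%slib%s.so" % (bundled_dir, libname)
```

    A real installer would also need to check version and symbol compatibility, not just presence, which is exactly the libstdc++ problem mentioned above.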

  10. #10
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by Svartalf View Post
    Packaging doesn't fix that. I think the Puppygames "oops" with .debs should show that NONE of this is handled unless the vendor handles it for you -- and .debs and .rpms do NOT change any of what you describe as a problem. (Do keep in mind that even if it DID fix it, you'd have to get each distribution's rules, etc. right, which means you'd have to make a package for each distribution. Not going to happen, just so you know...)
    I'm sorry, I'm not really sure what points you're responding to. I'm explicitly saying to use something other than .rpms and .debs to be cross-distro, to use a standardized set of platform definitions shipped with the installer framework that ensure that a well-defined set of libraries/ABIs are available (these are not shipped with the individual applications, because that leads to breakages like old apps shipping old SDL that don't mesh with newer Linux audio frameworks) to make dependency resolution cross-distro, and to maintain strict policies in the installers about file locations and the like (except for the .desktop files which are handled by the installer framework) to be cross-distro.

    Quote Originally Posted by Thetargos
    Why create a separate entity (the installer) to handle the installation, with all that it would require (a special file format, fetching the app, etc.)?
    It's possible to do that, sure. PackageKit's GUI would need a large amount of additions and updates for various features that it does not support now (as it's really just a veneer over apt/yum/etc. currently). If the PackageKit folks are game, that's probably a fine way to go.

    I have a strong suspicion though that PackageKit would rather have that installer be a separate binary that just uses the PackageKit libraries, though. In fact, PackageKit itself has no GUI at all. The GNOME and KDE GUIs you see are separate sub-projects of PackageKit itself.

    Quote Originally Posted by deanjo
    A lot of the packaging issues could be handled fairly easily if they would setup a suse build server
    So instead of a single DVD, they have to ship 20 DVDs, each containing a variant RPM/DPKG of the same 2 GB of game data? And that 20-DVD set will be outdated and may no longer work within six months, when all the distros refresh and swap out the default library install sets? Or, if this is pure Internet distribution, the distributor has to find hosting and maintain space for all those versions of the exact same binaries and data files?

    The per-distro packages are completely unmaintainable. Many of the actual distributions today are already having growing pains maintaining their repositories. Building the packages is just one tiny part of the massive problem with the status quo of Linux installation. Distributing, maintaining, debugging, and supporting the myriad of files necessary is very much a big part of the problem, and tools that just generate multiple packages don't solve any of that.

    The space issue is also a huge problem with the distro silos. Let's pretend some big AAA game went fully Open Source. Would Fedora really be cool with having all of their mirrors add a 4GB game data file? And not just once; each version of Fedora would end up having that file duplicated to all the mirrors. Twice for each version: once for the base repository and again for the updates repository, assuming any patches for the game data are released. If the game were added to Fedora in version 15 then within two years that game alone would be consuming between 20GB and 40GB of space on every Fedora mirror.

    And then duplicate that for each version of Ubuntu, Debian, SUSE, Mandrake, Arch, etc. etc. etc. Most of the common mirrors hosting Linux distros now will throw a fit and probably start refusing to host the distros, or they'll force the distros to split up their repositories and add yet more maintenance burden. Instead, let the game project site deal with the data. Let them mirror it. Let them figure out the logistics, and only really need a small handful of mirrors for just that data that anyone of any distro can use.

    Going back to the application publishers hosting their own packages, this is still a problem if they need to repackage their data 20 times over. You have to pay for hosting space as a commercial entity, and Open/Free projects just need to find a willing set of mirrors. If you need to host multiple gigabytes of data because of artificial barriers to compatibility imposed by the packaging systems when you natively only have a few hundred megabytes of data, that's going to be a huge pain in the ass.
