Errr... because the rest is employed by or supported by Red Hat which happens to be competition.
Utter BS; the way that organization is set up and run is completely different from that of Mir (or at least from how Mir's administration has been described so far)...
You know that, it's been explained countless times in threads relating to this topic, but by all means continue to ignore it simply because of your own bias.
Either back up your argument or I call you a fanboy retard.
Correct me if I'm wrong, but, top contributors of Wayland:
Kristian Hoegsberg (Intel)
David Herrmann (student)
Jonas Ådahl (Opera)
Pekka Paalanen (Collabora)
Ander Conselvan de Oliveira (Intel)
Tiago Vignatti (Intel)
Daniel Stone (Collabora)
Red Hat does not even have any employees actively working on Wayland ATM. This anti-Red Hat FUD is completely ridiculous - the only reason they are being attacked is that they make certain other vendors look bad.
openSUSE is simply too buggy for the desktop. I regretted the day I almost lost Linux adoption in the family when I installed openSUSE instead of Ubuntu :P
Arch can break any time you do pacman -Su
And Debian is way too rusty.
Well, Ubuntu has its down times too, right? And right now I think they're getting worse with each release, IMO. I've tried openSUSE from 12.1 to 12.3 and the experience has been good.
With Debian, when you want more up-to-date packages, use the unstable (sid) repo; Ubuntu, too, is built from Debian unstable (sid). If you want a more stable version, use testing (yes, stable is too old. But stable).
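The testing-plus-sid setup described above is usually done with APT pinning, so sid packages are only pulled in when explicitly requested. A minimal sketch (the repository URLs and priority numbers here are illustrative assumptions, not from this thread; the files are written locally rather than to /etc so you can inspect them):

```shell
# Sketch: track Debian testing but keep sid (unstable) available for
# cherry-picking single packages. Illustrative only.

# /etc/apt/sources.list would list both suites:
cat > sources.list <<'EOF'
deb http://deb.debian.org/debian testing main
deb http://deb.debian.org/debian unstable main
EOF

# /etc/apt/preferences pins testing above unstable, so apt prefers
# testing by default:
cat > preferences <<'EOF'
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 200
EOF

# A single newer package can then be pulled from sid on demand with:
#   apt-get install -t unstable <package>
cat preferences
```

With this in place, a routine upgrade stays on testing, while `-t unstable` lets you grab the occasional bleeding-edge package without switching the whole system to sid.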
As for Arch, yes, you have to be careful when doing pacman -Su. But if you read their news, announcements and such, I think it's quite safe.
There are many arguments against Mir that come to my mind, but THAT above is... well, it is not an argument against it.
Since when is competition bad? It is exactly what many projects actually lack, especially display servers for Linux. I can't imagine a more absurd and false argument than the one I quoted.
Errr... Do you see competition inside Apple against Quartz or their WindowServer? Do you see the same happening in the Microsoft camp against their window manager (is it still called that? I lost track after Vista)? Diversity and options are good for APPLICATIONS, not so much for INFRASTRUCTURE, which, if not THE reason, ranks pretty damn high among the reasons why Linux lacked support (a somewhat historical resemblance):
One kernel (Linux per se) + one set of system tools (GNU) = Functional distribution, command-line based.
Distribution + (Graphics environment (X11) + Desktop (GNOME, KDE, XFCE, LXDE, [Unity], et al)) = "Desktop distribution"
"Desktop distribution" + multimedia (drivers (OSS Vs ALSA) + sound server (ESD Vs aRts Vs [OSS] Vs Jack Vs [PA]) + media framework (GStreamer Vs Xine Vs MPlayer Vs [VLC Vs FFMPEG])) = "Multimedia desktop"
"Multimedia Desktop" + Full blown 3D hardware support (None Vs Proprietary (nVidia, AMD, Intel (Poulsbo)) Vs OpenSource) = "Modern media desktop"
Bracketed elements denote the most recent ones; [OSS] as of version 4 also offers a sound server for software mixing.
You see, there are a LOT of elements in what we call a modern day desktop computer, and for Linux this has come with LOTS of pain.
In the infrastructure department outside of GNU and the kernel, there has been conflict ever since the introduction of OSS as the sound system (drivers) back in the old 2.4 kernel days. Then came a switch to ALSA (initially developed by SuSE), which (THANK THE *NIX GODS) has remained uniform ever since, and all hell broke loose sound-wise... However, in regards to desktop support, the drivers can only take you so far. Regardless of what the hardware actually supports, most hardware is driven with a single duplex stream (i.e., one playback, one record), with notable exceptions (such as Creative Labs' original Live! and Audigy 1 & 2 lines of cards, and a few others). That is why a sound server is required: ALSA/OSS alone would let only ONE source produce sound, i.e., one program = one stream. What happened was the development (very early on) of the so-called sound servers, which act as an intermediary between the program layer and the hardware (driver) layer: they catch all the streams, mix them, and send the drivers a single stream.

Just for clarification, Windows and the Apple Mac have the very same functionality (and again, we do not see multiple implementations of such a basic need there) in the form of Windows' kernel audio mixer, KMixer (its replacement's name ever since Vista eludes me), and Apple's CoreAudio, whose function is/was exactly the same... What happened on Linux? We got one for every desktop! (yeah!) GNOME borrowed theirs from the Enlightenment project (yes, it's been around THAT long!!) in the form of ESD; the kids from KDE developed their own in the form of aRts up until KDE 3.x, then Phonon in KDE 4.x; others had the good sense to avoid reinventing the wheel and used either or both of them... But then the sound enthusiasts complained about the high latency of these older sound servers, and here comes Jack!
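The core job of a sound server described above (catch several streams, mix them, hand the driver exactly one) can be sketched in a few lines. This is a purely hypothetical Python illustration of software mixing, not code from ESD, aRts, Jack or PA:

```python
# Minimal sketch of what a software mixer does: sum several 16-bit PCM
# streams sample by sample, clipping to the valid range, so the driver
# only ever sees ONE stream. Purely illustrative.

def mix_streams(streams):
    """Mix lists of 16-bit signed samples into one stream."""
    length = max(len(s) for s in streams)
    mixed = []
    for i in range(length):
        # Sum whatever each application is playing at this instant.
        total = sum(s[i] for s in streams if i < len(s))
        # Clip to the 16-bit signed range instead of wrapping around.
        mixed.append(max(-32768, min(32767, total)))
    return mixed

app_a = [1000, 2000, 30000]   # e.g. a music player
app_b = [500, -2000, 10000]   # e.g. a notification sound
print(mix_streams([app_a, app_b]))  # → [1500, 0, 32767]
```

Without this intermediary, the second application to open the (single-stream) ALSA/OSS device would simply be refused, which is exactly the one-program-one-stream situation the post describes.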
It was all somewhat stable for a while (most of the time ESD did not work all that well in GNOME, and aRts caused all sorts of problems in mixed environments, so many of us got used to having only one sound source for quite some time...). Then Red Hat's Fedora debuted PA in Fedora 8 (IIRC), and the next distribution to adopt it was Ubuntu. It was like a bucket of cold water down your back at the time: it did not work for most people and caused all kinds of problems, high latency and sound corruption; it was utter chaos, nightmarish.
On the video front, Linux has fortunately been much more stable: while there have been changes, they have not been as dramatic or wrecking as in other subsystems... With the introduction of XFree86 4.0, video acceleration was moved into the X11 server through the special DRM/DRI interface in the late 1990s / early 2000s. But then suddenly the project decided to change its license, and in a freak moment all distributions jumped ship and fled to Xorg (where we've been ever since), which back then acted as a kind of steering committee/entity behind the development of the X11 protocol.

Back then there were many discussions about the need to change the whole X infrastructure, that X11 was too old anyway and whatnot, and that the move to Xorg was only going to be for the "mean time"... History has proven that it did not happen, until the "recent" announcement of Wayland, which I, like many others I'm sure, thought would be the legitimate successor of Xorg and X11 when I first learnt of its existence. Now it's been more than four years since the original announcement of Wayland, there has not been a single working implementation in any of the many Linux distributions, and there seem to be a lot of things still to implement and many more bugs to crush.

It's all good and peachy to me, but I think the deceleration of Wayland's development was due to success on other fronts: the advancement of the OSS drivers for AMD, nVidia and Intel hardware, the advancement of the Gallium3D drivers, improved DRM/DRI hardware support, and the general development of Xorg proper, which worked out many of the shortcomings of the older releases that prompted the spawning of an alternative in the first place (not that all of these have been worked out... yet, anyway).
At the most basic level, though there have been some problems in the past as you can see, these have not been "disastrous" compared with the applications arena... There, fragmentation is much more noticeable than "under the hood", and that is where the actual lack of support from many vendors may stem from.

Targeting a specific toolkit (and hence sometimes a specific desktop environment) is the most obvious suspect, but that is not the case, at least in my experience: one can code an application in whatever toolkit one wants without making it integral to any specific desktop environment, as shown by applications coded in Qt for Windows, Linux and Mac, or in GTK+, all working well and in harmony in their respective environments. No... the fragmentation comes from a totally different beast, one in which distributions differ even when they share package formats: the package manager and, worse, dependency hell.

Why? Well, obviously if you let the distribution's package manager satisfy all your application's dependencies, there wouldn't be a problem; the problem is that in order to do that you'd have to target one specific distribution (sounds familiar, people?). If you want to deploy as widely as possible, you stumble into dependency hell, and you'd better start shippin' all your stuff self-contained! The problem is that sometimes even self-contained applications need to link against system libraries every now and then... And that's the belly of the beast.
Pretty much every Linux distribution and its cousin has a different naming scheme for its packages, and sometimes even for files, not to mention the heterogeneous locations that can be used (what do you think the LSB tries to accomplish?), or, in the case of some distributions, the lame multi-lib support (32- & 64-bit libs all living under /usr/lib, instead of /usr/lib and /usr/lib64). That, more often than not, is why many vendors call the Linux ecosystem "fragmented", and they'd be totally correct! Not that shipping proprietary software on Linux is impossible - there has been plenty in the past - it's providing support that scares them the most.
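The point about even "self-contained" applications still depending on system libraries is easy to see with ldd: a bundled app can ship most of its own libraries, but the dynamic linker still resolves at least the C library (and usually more) from whatever paths and names the distribution uses. A quick illustrative check on any Linux box (/bin/sh is just a convenient example binary):

```shell
# List the shared libraries a binary is dynamically linked against.
# Even apps that bundle everything else typically still resolve libc
# (and the dynamic loader itself) from the host system - which is where
# cross-distribution naming and path differences start to bite.
ldd /bin/sh
```

Each line of the output is one library the host must provide at a name and location the binary expects, which is precisely what differs between distributions.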
More often than not, proprietary software vendors offer support only on certain distributions; for instance, some professional software programs are made available only for Red Hat Enterprise Linux, Oracle Linux and other compatible Linux distros due to these very issues, and not for the general Linux desktop distributions. When others have made their products available to the Linux masses, they've done so with a disclaimer of no support (id, BioWare, etc.), and everything worked just fine all the same. But those are games, and gamers know where to get support: fellow gamers.

Productivity and other more specialized kinds of software would require more support... And we are looking at just that every day now that Steam is available for Linux, either in distribution-specific fora or on the SteamPowered ones: the bulk of the posts are cries for help. Nothing wrong with that; there's always a good samaritan willing to aid... But if a single application (one used by "savvy people" at that) can wreak havoc among users of a SINGLE distribution, imagine what a more generic one, aimed at regular users, would do... Hint: you may not hear the screams, as they might not even know WHERE to scream for help online, but rather pick up the phone and call... technical support, the vendor's nightmare. We know that's not the actual scenario; most likely they'd call the friend who originally helped them get their Linux distro onto their computer in the first place. Now imagine that on a much broader scale, with more than only ONE distribution and the myriad of distros out in the wild...
Back to the topic at hand: to add insult to injury, in the present circumstances, where Linux (with Ubuntu being the most recognized of its faces) has raised awareness of its viability, many vendors still refuse to officially support it for the above reasons, creating a chicken-and-egg condition: support will be difficult, but if you do not release, the issues will never get fixed... and that's just what Valve did. And it seems to be turning out quite alright, as there is a mixture of distributions (even if the majority use the most famous one) where Steam has been successfully deployed.
What would happen if Mir, this new infrastructure component, got forced down the throats of all these users? First, I truly don't think Canonical could risk shipping it as the default any sooner than the next two years, and that is giving them the benefit of trusting they'd advance it twice as fast as Wayland just to get it to the point where Wayland is now (i.e., no real implementation yet). So while this adds fragmentation, there is too much at stake for them to force it like that. And if they are successful, and come up with a new piece of infrastructure that heals the shortcomings of X11, implements what Wayland could not, and tackles all that while maintaining compatibility with the current driver infrastructure, so much the better... But until they have something to actually show (other than a few technical videos - more like a live image or something), I'll remain skeptical.
Red Hat got involved because RH workers had a bad impression of Canonical workers.
It was more about:
You did stuff, then abandoned it, and now we have to maintain it...
Nothing specific about Wayland/Mir.
Though I have no idea why Canonical would leave any maintenance of Mir-related code to a 3rd party. Imagine Qt reverting Mir support because Canonical no longer wanted to support their own code... That would kill Canonical's Mir efforts. It ain't gonna happen. So this dilemma is (almost) moot. But RH/Canonical could exchange opinions on what their employees think about each other. :P
Red Hat does not even have any employees actively working on Wayland ATM. This anti Red Hat FUD is completely ridiculous -
Thanks for sparing me the time wasted on these poorly informed users who see things only through their fan goggles.
Never thought I'd see fanboys pop up in the area of display servers; quite sad, really...
Originally Posted by Togga
Yes, after trying Mir I will get a clue and you do not have to curse and badmouth me anymore? What is wrong with this picture?
Apparently you still don't seem to get what's wrong with it....
The fact that you still think you must "pick a team" is the problem here, thereby validating any previous labels attached.