I see he commits some stuff to XServer, but who knows if these commits were part of Canonical's business strategy.
I'm sure they have a policy which allows developers to fix bugs when they encounter them. Judging from the size and type of these commits, there is no indication of a real direction.
If you value those petty bugfixes as real contributions, then you're right.
But look at other distributions: Gentoo developers for example work on their own devfs in userspace (eudev), Debian has its own Linux kernel team, and Red Hat is the largest contributor to Xorg.
It is an insult to every one of them when somebody like you tries to bring Ubuntu on par with them.
The Ubuntu-developers may not "never" contribute upstream, but compared to other distributions, it's a bloody joke.
And you know that.
BTW: Stop being so cocky. Take this as friendly advice.
Sometimes, forking is a good way to contribute to upstream, especially when the forked project goes in a bad direction (Xonotic, Mage+, LibreOffice, ...).
The problem with udev is that they are heading toward being systemd-specific. The lack of interest from most distributions stems from the fact that they are using systemd anyway.
As I don't consider init-system-specific solutions to be ideal, forking is a valuable contribution to the project itself.
Gentoo is specifically interested in eudev, because it is one of the few distributions which allow you to use multiple init-systems.
Please read into the topic before giving unqualified statements.
As pointed out, if other distros don't care about it, then it's ultimately not an upstream contribution, and it's no different from Mir or Upstart.

"But look at other distributions: Gentoo developers for example work on their own devfs in userspace (eudev)"
Likewise...

"BTW: Stop being so cocky."
...uh... wait... How come, then, that the current best-selling hardware platform (ARM) isn't even x86-compatible?
If we follow your logic, any computing device should only be an x86_64 variant running some custom Windows build.
Well, the situation has changed a lot since the 80s (and early 90s). Back then every single computer platform had a completely different set of innards, *BUT ALSO* each one ran a completely different set of operating systems and software, almost all of it hand-written assembly for the peculiar type of CPU in that machine and optimized for its hardware quirks.
Today, thanks to opensource (with source available everywhere and most of the software being written cross-platform using common languages), getting Linux running on anything is usually just a compile away.
Linux runs on x86 & x86_64 (most popular on desktops & laptops), but also on ARM (most popular on smartphones/tablets/ultra-light netbooks), but also on MIPS (very popular in routers/modems), but also on PowerPC (Playstation, some servers) and several other platforms (other server CPUs like Sparc, etc.).
Not only that, but more and more software doesn't even care what the CPU is:
- software compiled into bytecode (Android is built around Dalvik, a Java-like VM).
- Even Windows 8 (though not opensource): since they started offering ARM platforms too, they strongly recommend and support cross-platform applications, either in HTML5 or compiled into .NET bytecode.
In short, in modern days, the architecture doesn't matter that much. You'll still get your Linux flavour for it. (You already have x86_64, ARM and MIPS, which are *very* widespread.)
Whereas in the old days, it wasn't only Z80 vs 6502 vs 8088/x86 vs 68000, but also MS-DOS vs. CP/M vs. AMOS vs. STOS vs. the C64 BIOS, etc., all with a bunch of different hardware to talk to directly.
There's "Leon", a SPARC-based CPU whose VHDL has an LGPL license. Sun themselves have also released a few cores under an opensource license as OpenSPARC. There's also OpenRISC.
There are opencores out there. What is needed is a whole market for them. (Not much interest beyond academia, for now).
You can play stuff one-shot:
You can load a small audio file and simply tell the sound chip to play it. (That's what happens when a desktop application plays a small sound effect.)
You can also do something which looks like double buffering:
You fill a buffer with audio, send that buffer to the sound card, then while it is playing, fill another buffer; when the card has finished the previous one, send the current one to play, then proceed to the next buffer, etc. (That's what happens when you play a long audio sequence: a big MP3 file, a web radio stream, or several sound sources mixed by a software mixer.)
Note that this kind of buffering requires making a compromise between uninterrupted audio playback and latency. Either you use BIG buffers (each with 1s worth of sound in them), so the chance is very low that playback will get interrupted (you always have a few 1-second buffers of headroom before reaching the point where you have nothing left to play), but you get a huge latency (if an app wants to play a sound effect, it will only be added to the next buffer being filled and thus will only be heard a few seconds later, once the buffers already in line have finished playing). Or you have the opposite of this (CPU usage is high as it is constantly filling small buffers, and sound glitches have a high risk of happening in case of bad thread scheduling, but at least, since the buffers are small, latency isn't that bad).
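A back-of-the-envelope sketch of that tradeoff (toy Python, not real ALSA code; the function names and numbers are just illustrative):

```python
def worst_case_latency(buffer_seconds, queued_buffers=2):
    # Double buffering: a new sound can only go into the buffer currently
    # being filled, so in the worst case it waits behind every buffer
    # already queued to the card before it is heard.
    return buffer_seconds * queued_buffers

def wakeups_per_second(buffer_seconds):
    # The CPU has to wake up once per buffer to refill it.
    return 1.0 / buffer_seconds

# Big 1-second buffers: underruns are unlikely, but a sound effect queued
# now may only be heard ~2 seconds later.
assert worst_case_latency(1.0) == 2.0
assert wakeups_per_second(1.0) == 1.0

# Small 10 ms buffers: latency drops to ~20 ms, but the CPU wakes up
# 100 times a second, and one badly scheduled refill causes a glitch.
assert worst_case_latency(0.010) == 0.020
assert wakeups_per_second(0.010) == 100.0
```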
You can also do something which looks like a ring buffer:
Audio is continuously looping over the same buffer, and PulseAudio fills this buffer continuously, with a varying amount of "ahead" time. If you're listening to a radio, pulse will completely fill the buffer ahead, put the CPU to sleep, then wake up half a second later, append half a second worth of audio, and go back to sleep. (At any point in time there's a lot of headroom between the part of the buffer being played and the part getting filled.) If suddenly an immediate sound is needed, with low latency, pulse will start re-writing the buffer just a few samples ahead of the "pointer" where sound is read. Pulse stays only slightly ahead and finishes writing audio almost right before it gets played; it is almost feeding the audio in real time as it is played. The latency is minimal (though CPU usage gets higher, but only during this time).
Thus unlike the previous solution, you don't need to make compromises. Pulse is constantly tuning it self by varying how much ahead of the currently played sample it is in the circularly playing buffer.
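Here's a toy model of that circular-buffer scheduling (hypothetical Python, not Pulse's actual code; the `rewind` method mimics the write pointer being pulled back close to the read pointer for an urgent low-latency sound):

```python
class RingBuffer:
    """Toy model of timer-based scheduling on a circular hardware buffer.
    Positions are monotonic sample counters: the hardware 'read' pointer
    advances as audio plays, the software 'write' pointer as it fills
    samples ahead of it."""

    def __init__(self, size):
        self.buf = [0] * size
        self.size = size
        self.read_pos = 0   # hardware playback pointer
        self.write_pos = 0  # software fill pointer

    def headroom(self):
        # Samples already written but not yet played.
        return self.write_pos - self.read_pos

    def write(self, samples):
        assert self.headroom() + len(samples) <= self.size, "would overwrite unplayed audio"
        for s in samples:
            self.buf[self.write_pos % self.size] = s
            self.write_pos += 1

    def rewind(self, keep_ahead):
        # Urgent low-latency sound: pull the write pointer back to just a
        # few samples ahead of the read pointer, so new audio is heard
        # almost immediately (the old queued-up samples get overwritten).
        self.write_pos = self.read_pos + keep_ahead

    def play(self, n):
        # The hardware consumes n samples, looping over the buffer.
        out = [self.buf[(self.read_pos + i) % self.size] for i in range(n)]
        self.read_pos += n
        return out

rb = RingBuffer(size=8)

# Power-saving mode: fill far ahead, then the CPU can sleep.
rb.write([1, 2, 3, 4, 5, 6])
assert rb.headroom() == 6          # plenty of headroom, no hurry to wake up
assert rb.play(4) == [1, 2, 3, 4]

# Low-latency mode: an urgent sound is written just 1 sample ahead of the
# read pointer, replacing the queued samples 5 and 6.
rb.rewind(keep_ahead=1)
rb.write([9, 9, 9])
assert rb.play(4) == [5, 9, 9, 9]  # heard almost immediately
```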
This last one is a perfectly "normal" mode of operation. The problem is: Pulse is the only piece of software that works this way under Linux. Every other sound system uses exclusively one of the first 2 methods.
So even if this mode is "supposed to work", you might find bugs in the audio driver that pulse is the only one to hit, because it's the only software functioning that way. You thought that ALSA was functioning perfectly, whereas actually it is not: ALSA is buggy, but it happens that only pulse finds the bug. Or maybe the driver is technically correct, but your piece of hardware is half broken. It works under Windows because its driver is accordingly twisted to adapt to the quirks of the hardware, but it doesn't under Linux because those workarounds aren't known/aren't there. Except that, again, the weirdness only happens with Pulse. Probably either the other modes of playing were fixed earlier because people noticed the problem, or the problem only arises when the circular buffer is used and nobody noticed until pulse.
In the end, there are needed fixes that should go into ALSA, but aren't there. Pulse can't do much (no matter what the programmers of pulse do, they are stuck; there's nothing you can do if the underlying ALSA stack can't correctly return the "currently playing" pointer).
Now the thing is, this feature (low latency when playing realtime sound, or conversely the ability to put the CPU to sleep and save power when it's just predictable audio playing) is actually important. Low latency really matters in several end-user scenarios (mostly VoIP calls and games), and keeping CPU usage low is important too (while playing music, especially if the device in question is portable and runs on a battery).
What Pulse tries to do is emulate the functioning of a hardware mixer (mixing sound with very low latency and low power consumption). The problem is that such mixers are getting rarer: the current tendency is to just put in a chip that is basically a multi-channel duplex DAC/ADC and do everything in software. (How many people in this thread have an Audigy sound card with hardware mixing vs. how many are just using their on-board "Intel HDA" chip?) So you pretty much can't run away from pulse; it's the only viable way to have sound in games, Skype and webradios.
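For illustration, software mixing on such a "dumb DAC" chip boils down to adding samples and clamping (a toy sketch, not Pulse's actual mixing code; `mix16` is a made-up name):

```python
def mix16(streams):
    # Software mixing for signed 16-bit samples: sum the samples of every
    # active stream at each position and clamp to the 16-bit range.
    length = max(len(s) for s in streams)
    out = []
    for i in range(length):
        total = sum(s[i] for s in streams if i < len(s))
        out.append(max(-32768, min(32767, total)))
    return out

# Two overlapping streams simply add up...
assert mix16([[1000, 2000], [500]]) == [1500, 2000]
# ...and loud ones clip to the 16-bit ceiling instead of wrapping around.
assert mix16([[30000], [10000]]) == [32767]
```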
But... as with any newer technology, it will require testing, fixing broken drivers, circumventing broken hardware, etc.
There are good distros doing a decent job of packaging pulse (my openSUSE seems to be one). There are also very bad distros which tend to think along the lines of "hey, pulse version 0.0.1-prealpha is out! Let's make it a hard requirement!".
That's the behaviour which is bringing problems to pulse (which would otherwise be a useful piece of technology). The same thing happened with KDE4 (with several distros switching to the "technological preview" without much thinking). And the same will very likely happen in the future with Wayland: on one side, distros taking great pains to provide a well-integrated preview experience along with a decent fallback for users preferring to wait; on the other, a bunch of distros just throwing in whatever current version is deemed releasable (you'll find a KDE 5 preview running on a Wayland beta and the whole thing crashing like it's Windows 9x).
is literally incomplete and caused crackling and all sorts of other problems. (Obviously incomplete, even!) The patch has since been reverted and this will presumably be included in the next release - along with a bunch of new regressions and bugs because they never actually stop changing things for long enough to stabilise their code. More annoyingly, I'm not sure it was even the source of the crashes I was seeing in the resampler.