Quote Originally Posted by 89c51
Paul, could you also answer why Mac OS X does a better job with a single "stack", why we can't have the same on Linux, and also why there is no effort to merge JACK and PA (lack of manpower was the reason the last time I heard about it)?
It's complicated. OS X does a better job with a single stack because it has a very clever design that we unfortunately missed (read: didn't know about) back in the early days of ALSA. I don't feel so bad about this - Windows didn't get anything remotely resembling this idea until years later either. The crux of the matter is that CoreAudio is NOT dependent on the flow of interrupts from the device to know, with fairly high precision, where the current audio interface read and write locations are in memory. This means that applications with entirely different latency requirements can write into a shared memory buffer without requiring that they all do so synchronously (the way that JACK requires). An application that wants to write 8 samples at a time can do so, and at the same time another application that wants to deliver 4096 or even 64k samples at a time can also do so - CoreAudio can ensure that both of them end up putting audio in the "right" place in the mix buffer to match their latency requirements.

It does this using an engineering design called a delay-locked loop (DLL), which actually makes the latency slightly worse than on Linux - CoreAudio drivers have a "safety buffer" to account for possible errors in the DLL's estimates of the current read/write pointer locations, and this adds to the overall latency. It's fairly small, though - typically 8-32 samples. Now, there is nothing fundamental stopping ALSA from adopting this kind of design. But "nothing stopping it" isn't the same as making it happen - that would require a significant engineering effort.
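To make the DLL idea a bit more concrete, here's a rough sketch of the technique - this is NOT CoreAudio's (or ALSA's, or anyone's) actual code, just an illustration, and every name, coefficient and constant in it is an assumption I made up for the example. You feed it the system time at each period interrupt; between interrupts it lets you estimate where the hardware read/write pointer is from the system clock alone:

/* Second-order delay-locked loop that tracks the audio device's frame
 * clock.  Feed it the system time at each period interrupt; it keeps a
 * smoothed estimate of the period length and of when the current period
 * started, so the hardware position can be estimated at any moment. */
typedef struct {
    double t0, t1;            /* estimated start time of current / next period (s) */
    double period;            /* estimated period length (s)                       */
    double b, c;              /* loop coefficients derived from the bandwidth      */
    long   frame0;            /* frame count at t0                                 */
    int    frames_per_period;
    double sample_rate;
} dll_t;

static void dll_init(dll_t *d, double now, double rate,
                     int frames_per_period, double bandwidth_hz)
{
    double omega = 2.0 * 3.141592653589793 * bandwidth_hz * frames_per_period / rate;
    d->b = 1.4142135623730951 * omega;   /* sqrt(2) * omega */
    d->c = omega * omega;
    d->sample_rate = rate;
    d->frames_per_period = frames_per_period;
    d->period = frames_per_period / rate;
    d->t0 = now;
    d->t1 = now + d->period;
    d->frame0 = 0;
}

/* Call once per period interrupt with the system time of the interrupt. */
static void dll_update(dll_t *d, double now)
{
    double err = now - d->t1;            /* how far off the prediction was */
    d->t0 = d->t1;
    d->t1 += d->b * err + d->period;     /* correct the next prediction    */
    d->period += d->c * err;             /* ... and the period estimate    */
    d->frame0 += d->frames_per_period;
}

/* Estimate the hardware frame position at an arbitrary system time,
 * without waiting for the next interrupt. */
static double dll_frame_at(const dll_t *d, double now)
{
    return d->frame0 + (now - d->t0) / (d->t1 - d->t0) * d->frames_per_period;
}

Once you can estimate the hardware position like that, clients with completely different block sizes don't have to be woken up in lockstep: each one just asks where the hardware is "now", adds its own latency, and mixes its audio at that offset in the shared buffer. Continuing the same made-up file:

#include <stdio.h>

#define RING_FRAMES 65536                 /* power of two for cheap wrapping */

static float mix_ring[RING_FRAMES];       /* the shared mix buffer           */

/* A client mixes nframes of audio so that it starts playing
 * client_latency_frames after the estimated current hardware position.
 * An 8-frame client and a 4096-frame client can both do this
 * asynchronously and still land in the right place. */
static void client_write(const dll_t *d, double now,
                         const float *samples, int nframes,
                         long client_latency_frames)
{
    long hw    = (long) dll_frame_at(d, now);   /* where the hardware is "now"     */
    long start = hw + client_latency_frames;    /* where this client's audio goes  */

    for (int i = 0; i < nframes; i++)
        mix_ring[(start + i) & (RING_FRAMES - 1)] += samples[i];
}

int main(void)
{
    dll_t d;
    dll_init(&d, 0.0, 48000.0, 256, 1.0);       /* 256-frame periods, 1 Hz loop bandwidth */

    /* Pretend we observed four (slightly jittered) period interrupts. */
    for (int i = 1; i <= 4; i++)
        dll_update(&d, i * (256.0 / 48000.0) + 0.0001);

    /* Two clients with very different block sizes and latencies both
     * mix into the shared ring at their own latency-appropriate offsets. */
    float small_block[8]  = {0};
    float big_block[4096] = {0};
    client_write(&d, 0.022, small_block, 8,    64);    /* low-latency app  */
    client_write(&d, 0.022, big_block,  4096, 8192);   /* high-latency app */

    printf("estimated hardware frame at t=22ms: %.1f\n", dll_frame_at(&d, 0.022));
    return 0;
}

The "safety buffer" I mentioned above is exactly the margin you add because dll_frame_at() is only an estimate.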

In addition, even without this, you still have to decide where sample rate conversion and sample format conversion will take place - in user space or in the kernel. OS X does not have a prohibition on floating point in kernel space (which contributes a teeny bit to why their kernel is slower than Linux for many things), so historically they have done some of this in the kernel. They won't talk much about the details, but it appears that in recent OS X releases (Lion & Mountain Lion) they have moved away from this and now have a user-space daemon, conceptually similar to Pulse and/or JACK, through which all audio flows. This is slight speculation - you can see the server running with ps(1), but Apple has never said anything about it. It's also not clear whether the shared memory buffer into which applications ultimately write their audio data is in user space or kernel space, and this may also have changed recently. The key point is that even with the DLL-driven design in the kernel, there are still tricky, fundamental aspects of API design that you have to tackle, and even on OS X the answers are not set in stone.

Interestingly, PulseAudio has (or was going to have) this DLL-driven design too - Lennart calls it "glitch free" - but adding it to PulseAudio (or JACK) doesn't do anything about what goes on at the ALSA layer.
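For what it's worth, the basic idea behind "glitch free" is that instead of waking up on every period interrupt, the daemon configures a large hardware buffer and uses an ordinary system timer, plus its estimate of the current read position, to wake up just before the buffer would run dry. A back-of-the-envelope sketch of that arithmetic (made-up numbers, and not PulseAudio's actual code):

#include <stdio.h>

/* How long can the daemon sleep before it must refill the hardware
 * buffer?  frames_queued is the estimated amount of audio still queued
 * ahead of the read pointer; safety_frames is a margin for timer and
 * estimation error.  Illustrative only. */
static double seconds_until_refill(long frames_queued, long safety_frames,
                                   double sample_rate)
{
    long usable = frames_queued - safety_frames;
    return usable > 0 ? usable / sample_rate : 0.0;
}

int main(void)
{
    const double rate          = 48000.0;  /* sample rate (Hz)             */
    const long   buffer_frames = 65536;    /* large hardware ring buffer   */
    const long   safety_frames = 256;      /* margin for wakeup jitter     */
    const long   frames_queued = 60000;    /* estimated audio still queued */

    printf("sleep %.3f s; at wakeup roughly %ld frames remain, so write ~%ld frames\n",
           seconds_until_refill(frames_queued, safety_frames, rate),
           safety_frames, buffer_frames - safety_frames);
    return 0;
}

The safety margin here plays the same role as CoreAudio's safety buffer - it covers the error in the position estimate.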

As for merging JACK and PulseAudio: manpower remains an issue, but more importantly, the goals of the two projects are not really that similar, even though to the many idiot-savants who post to Reddit and Slashdot they sound as if they should be. There are ways it could happen, but it would require a huge level of desire on the part of everyone involved, and given the difficulties we have internally with two different JACK implementations, it just seems unlikely.