There, my post says it. But let's educate everyone here a little.
Firstly, Wayland is only a protocol, nothing more really. It specifies, essentially, the following:
1) How a program (a Wayland client) requests offscreen buffers from the compositor
2) Various messages to say "present this buffer"
3) A message protocol for events
There is the reference compositor, Weston. Now, how those buffers are created and how they are drawn to is a function of the system, the integration, and the compositor. Nothing more. Over on the Raspberry Pi, there is a compositor that uses the display controller to present the window contents (i.e. not even using OpenGL or OpenGL ES). What is key for a Wayland-based system to get performance is that the buffers written to are the buffers being presented. A dumb, slow-as-molasses implementation will make the buffers generic shared memory, and the compositor would use glTexSubImage2D to upload the image data to GL. That is terrible, one of the worst ways to get the job done, and yet it is done in some places. Shudders.
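To make the slow path concrete, here is a minimal sketch (my own illustration, not code from any particular compositor; the function name is hypothetical, the GL calls are standard) of what the bad implementation does every frame:

```c
/* Sketch of the slow compositor path described above: the client's
 * buffer lives in generic shared memory, so every frame the compositor
 * must copy all the pixels into a GL texture before it can draw the
 * window. */
#include <GLES2/gl2.h>

void composite_window_slow(GLuint tex, int width, int height,
                           const void *shm_pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Full-surface copy from CPU memory to the GPU, every single frame:
     * this is what burns memory bandwidth. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, shm_pixels);
    /* ... then draw a textured quad with the window contents ... */
}
```

The copy itself is the killer: the pixels already exist in memory the GPU could have scanned out or sampled directly, and this path duplicates them anyway.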
Over on freedesktop.org there is an EGL extension written up for this buffer-image sharing; note that the extension is not in the OpenGL ES registry, though. This is not really a big deal, but what is important is this: any system using Wayland needs a means to allocate cross-process image buffers, where a client (program) writes to them and the server (the Wayland compositor) draws them. That is it. Nothing more.
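The fast path that extension (EGL_WL_bind_wayland_display) enables looks roughly like this on the compositor side. This is a hedged sketch: error handling is omitted, and in real code the KHR/OES entry points are function pointers fetched via eglGetProcAddress.

```c
/* Compositor-side sketch of zero-copy buffer sharing: instead of
 * uploading pixels, wrap the client's wl_buffer in an EGLImage and
 * bind it directly as a texture. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

void composite_window_fast(EGLDisplay dpy, GLuint tex,
                           struct wl_resource *client_buffer)
{
    /* No copy: the EGLImage refers to the client's buffer in place. */
    EGLImageKHR img = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                        EGL_WAYLAND_BUFFER_WL,
                                        client_buffer, NULL);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* The texture now samples the client's pixels directly. */
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, img);
}
```

Compare this with the glTexSubImage2D path: the buffer written to really is the buffer being presented, which is the whole performance story.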
On the issue of Mir: the main stink for Canonical with Wayland was event handling. In a nutshell, Wayland did not have a rich enough system for user events. If you take a look at http://wayland.freedesktop.org/docs/html/, you will see that Wayland's event system is not particularly rich. There was a long write-up on the Mir wiki about the why; it was not an NIH issue, there were serious concerns with respect to input.
Going on, I noticed a comment about OpenGL being too linked to/reliant on X to use with Wayland. Epically wrong. The main issue is creating a GL context. For X, one uses GLX. However, EGL has a provision to create OpenGL (not just OpenGL ES) contexts as well.
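A minimal sketch of that EGL provision, with all error checking omitted for brevity (the function name is mine; the EGL calls are standard):

```c
/* Create a desktop OpenGL (not OpenGL ES) context through EGL,
 * with no X or GLX involved. */
#include <EGL/egl.h>

EGLContext create_gl_context(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    /* The key line: select desktop OpenGL instead of OpenGL ES. */
    eglBindAPI(EGL_OPENGL_API);

    static const EGLint cfg_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
}
```

Whether this works in practice depends entirely on the driver exposing EGL with desktop-GL support, which is exactly the IHV problem discussed further down.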
The post about memory consumption was a waste of bytes. Indeed, those numbers are so much tinier than the amount of RAM on the system as to not be worth noting; the real issue to look at is how much bandwidth each system consumes. That is where the real action with respect to performance is. On embedded systems, X has worked: both the N900 and N9 from Nokia have X. I do not think it was a good idea, but it did work, and memory consumption was kept reasonable. The main stinks for X are the following:
1) Making good X.org drivers is a royal pain. Getting accelerated compositing working is another royal pain.
2) X does lots of things, and in truth using X to get anything done is painful. Its input APIs suck, its window APIs suck; it has lots of APIs that all suck to use.
3) And yet, most applications draw all their content themselves (be it via Cairo, Qt, whatever) and do not use Xlib to draw anything. All they want is a surface to draw to and a way to grab events.
Looking at the above, one sees that all one needs is something to allocate image buffers, a way to notify on which buffer to present, and a way to receive events. Wayland did the first fine, but the events just are not rich enough.
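Those three needs map almost one-to-one onto the libwayland client API. A sketch, assuming the surface and buffer setup has already been done (the function name is hypothetical; the wl_* calls are from wayland-client):

```c
/* The three things a client needs: a buffer it can draw into,
 * a way to say "present it", and an event loop. */
#include <wayland-client.h>

void frame_loop(struct wl_display *display, struct wl_surface *surface,
                struct wl_buffer *buffer, int width, int height)
{
    for (;;) {
        /* ... draw into the buffer's mapped memory here ... */

        /* "Present this buffer." */
        wl_surface_attach(surface, buffer, 0, 0);
        wl_surface_damage(surface, 0, 0, width, height);
        wl_surface_commit(surface);

        /* Receive events (input, frame callbacks, ...). */
        if (wl_display_dispatch(display) < 0)
            break;
    }
}
```

The buffer and presentation parts of this are fine; it is the event side, dispatched in that last call, that was the sticking point.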
At any rate, on the desktop, the only player in Linux land that can get us out of X is Canonical. They are the only ones with enough clout to convince IHVs (namely NVIDIA and AMD/ATI) to make it possible to create GL contexts and such without X. Wayland has been around for several years now, and still nothing widely used has been made for it. My thinking on the why is essentially that those folks are not in a position to push it to the correct people at the correct IHVs. There is a place for Wayland on embedded devices, but in truth, for mobile, Android and iOS rule it... Maemo/MeeGo/Tizen look pathetic... there are some Tizen devices out there, but I am not impressed with Tizen at all.
As I understood the question, they could at some point stop developing it under the GPL, say, make mir2 closed source (which would not change anything for previous code). It may be a misunderstanding on my part, but that is what I thought the question was. What I meant with PR speak, then: why is the answer not just "we have no plans to go closed source with Mir; it will always be GPL, just like pretty much everything we do"? What is the problem with properly addressing the question (which was "do you have plans to go closed source at some point")? It is a general beef I have with communication in politics, with corporate statements, etc. It makes me suspicious. If I misunderstood the license things and saying "it is GPL" means "we will never go closed source at some point", fair enough.
Originally Posted by dh04000
(Actually, I believe the "fear" of Canonical taking Mir closed source is stupid - why would they do that? I can think of no benefit to such a move.)
But that means that proprietary drivers would not run on compositors which require KMS. So either there will be other compositors which won't depend on KMS, or AMD/NVIDIA/etc. will need to add support for KMS (is that actually possible? GPL symbols, etc.). And yes, I know that the blobs all do kernel modesetting, but here KMS refers to the internal Linux KMS API, which is not supported by the blobs.
http://wayland.freedesktop.org/docs/html/ is not source code... and look at the date on it: "the Wayland open-source community, November 2012".