Once in a while a hacker or a group of hackers pops up somewhere with really stupid utopian ideas, like "let's create a better MS-DOS and kill Win7 with it", "let's work on Hurd", "let's optimize the telegraph and try to outcompete e-mail", "let's supercharge the AVI container and kill MKV".
Originally Posted by Kayden
In this case another hacker (read: idiot) decided to supercharge ancient technology (read: crap) instead of being realistic. It's nothing new; there are lots of stupid people with lots of time. I should know, I did stupid shit myself until I started to value my time.
It's no surprise they can do this. X.org performance is absolutely horrendous in many cases, and it's markedly worse the older the hardware you run it on. I would love to use something like this on my Transmeta Crusoe-based laptop, since it just doesn't have the oomph for X.org.
That said, this is completely worthless to me... I run custom kernels on my old machines so the kernels can be a tad slimmer/more optimal. I like the idea of what they are doing, but the execution is pretty poor imo. They've been sitting on this tech for years with little to show for it because they think they have to remain proprietary. Well, look at all the other failed proprietary X11 solutions; there are still a few around, but only because they are backed by hardware companies.
Low-end kiosk hardware has plenty of memory these days. Even a Raspberry Pi Model B has 512MB, and an Ouya ($99) has 1GB. In fact, most media players these days have at least 512MB of RAM.
Originally Posted by siride
And there are already plenty of RAM-efficient X implementations, such as TinyX/kdrive by Keith Packard: http://www.pps.univ-paris-diderot.fr...re/kdrive.html
And in any application where speed would be beneficial, stability would also be a factor (in a kiosk, kernel panics would be embarrassing). Sorry, but I honestly can't think of many scenarios where both performance and RAM are important. It would also likely introduce significant security issues long term.
It's a cool project & idea (I'm particularly interested simply because it tells us what a near-perfect/efficient X server implementation could achieve), but in practice the disadvantages outweigh the benefits, especially since GOOD hardware is cheap.
Watch the video carefully. Performance with standard X is worse, check. CPU utilization is higher, check. But have you noticed the complete absence of tearing in X versus the tearfest with MicroXwin?
I would actually love to see it in action.
Originally Posted by dh04000
Congrats are in order... I didn't know someone could make a display server even more horribly stupid and misguided than Mir.
I'm afraid this testing is not representative, because we haven't seen the X.org config and we cannot tell how the X.org server renders things. Is it rendering on the CPU, or using VMware's XA state tracker passed to the host via Gallium3D (where it may very well end up rendering in software anyway)?
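One quick way to tell whether an X server is falling back to software rendering is to look at the OpenGL renderer string that `glxinfo` reports. Here is a rough sketch (my own helper names; the renderer substrings are common Mesa conventions, not an exhaustive list):

```python
import subprocess

# Renderer substrings that typically indicate Mesa software rasterization
# (common conventions, not an exhaustive list).
SOFTWARE_RENDERERS = ("llvmpipe", "softpipe", "software rasterizer", "swrast")

def is_software_rendering(glxinfo_output: str) -> bool:
    """Return True if the OpenGL renderer line names a software rasterizer."""
    for line in glxinfo_output.splitlines():
        if "OpenGL renderer string" in line:
            renderer = line.split(":", 1)[1].strip().lower()
            return any(name in renderer for name in SOFTWARE_RENDERERS)
    return False  # no renderer line found; assume hardware

def current_renderer_is_software() -> bool:
    """Run glxinfo against the current display (requires a running X server)."""
    out = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout
    return is_software_rendering(out)
```

On a VMware guest rendering through llvmpipe, the renderer line would contain "llvmpipe" and the check returns True; on a real GPU driver it names the hardware instead.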
How will it behave on real hardware? I believe software rendering is faster than the actual GPU driver on everything except Intel GPUs, and maybe Nvidia with the proprietary driver.
Also, vsync is an issue. All those lightweight compositors are cool and stuff, but they have no real use cases because of tearing. MicroXwin seems to be riding the bandwagon of lightweight-but-useless things. Vsync may also account for the X.org overhead.
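To make the tearing point concrete, here is a toy simulation (all names are mine, nothing from MicroXwin or X.org): a "display" scans out a framebuffer while the renderer is mid-frame. With a single buffer the scanout can see a half-old, half-new frame (a tear); with double buffering and a vsync-timed swap, the scanout only ever sees complete frames.

```python
def render(buf, frame, progress):
    """Write `frame` into the first `progress` rows of buf (renderer mid-frame)."""
    for y in range(progress):
        buf[y] = frame

def scanout_is_torn(buf):
    """A scanned-out frame is torn if it mixes rows from different frames."""
    return len(set(buf)) > 1

ROWS = 4

# Single buffering: scanout reads the same buffer the renderer writes to.
front = [0] * ROWS                   # last complete frame: all rows from frame 0
render(front, frame=1, progress=2)   # frame 1 only half drawn when scanout happens
assert scanout_is_torn(front)        # rows [1, 1, 0, 0] -> visible tear

# Double buffering with vsync: render into a back buffer, swap between frames.
front = [0] * ROWS
back = list(front)
render(back, frame=1, progress=2)    # partial frame stays in the back buffer
assert not scanout_is_torn(front)    # display still shows coherent frame 0
render(back, frame=1, progress=ROWS) # finish the frame...
front, back = back, front            # ...then swap on vblank
assert not scanout_is_torn(front)    # now a coherent frame 1
```

The vsync wait before the swap is exactly the "overhead" being traded away here: skip it and you get MicroXwin's tearfest.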
It is proprietary software, so it is totally irrelevant and useless.
Also, we're getting Wayland which is superior.
They should contribute back upstream.
Technically, Wayland mandates compositing, which this thing does not even support. So I guess if compositing is costly on the given hardware, Wayland is not really applicable.
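As a back-of-the-envelope figure (my arithmetic, not from the thread): one extra full-screen composite pass at 1080p, 32bpp, 60Hz has to read and write every pixel once more per frame, which alone is roughly 1 GB/s of memory bandwidth. That is nothing on a desktop GPU, but it matters on the low-end kiosk-class hardware discussed above.

```python
# Back-of-the-envelope cost of one extra full-screen composite pass.
width, height = 1920, 1080      # 1080p
bytes_per_pixel = 4             # 32-bit XRGB
refresh_hz = 60

frame_bytes = width * height * bytes_per_pixel      # 8,294,400 bytes per frame
composite_traffic = 2 * frame_bytes                 # read source + write target
bandwidth_per_sec = composite_traffic * refresh_hz  # bytes per second

print(f"{bandwidth_per_sec / 1e9:.2f} GB/s")        # -> 1.00 GB/s
```

Real compositors may blend multiple surfaces per frame, so this is a lower bound for a single full-screen pass.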
With the current Xorg + open-source Radeon driver, playing a networked GL game on a composited desktop (Xorg) and hitting a (since-fixed) GPU bug that causes a GPU hang, the driver successfully recovers in such a way that the game continues to run flawlessly, even when multiple crashes follow.
Originally Posted by Auzy