It is not that difficult to provide two backends for compositing. Enlightenment uses the same backends as any EFL application for its compositing, which means both software and OpenGL (also GL ES) are provided. The software backend is faster than OpenGL in many scenarios and also more stable (as it puts less pressure on the driver). It is possible to use a 600 MHz Pentium to do software compositing on a 1024x800 screen without any speed issue. In fact, OpenGL is not a 2D API and is really not as efficient as a software implementation could be.
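For anyone curious what that choice looks like from the application side, here is a minimal sketch of selecting the engine with Elementary's acceleration-preference API. It is an assumption that this is the relevant knob (elm_config_accel_preference_set, or the ELM_ACCEL environment variable at runtime); it is not anything specific to Enlightenment's compositor itself.

Code:
/* Hedged sketch: asking EFL/Elementary for a GL or software canvas.
 * Build (assuming a standard Elementary install):
 *   cc engine-demo.c -o engine-demo $(pkg-config --cflags --libs elementary)
 */
#include <Elementary.h>

EAPI_MAIN int
elm_main(int argc, char **argv)
{
   /* "gl" requests an OpenGL/GLES engine, "none" the software engine.
    * Users can also override this with the ELM_ACCEL environment
    * variable when launching the app. */
   elm_config_accel_preference_set("gl");

   Evas_Object *win = elm_win_util_standard_add("engine-demo", "Engine demo");
   elm_win_autodel_set(win, EINA_TRUE);
   evas_object_resize(win, 320, 240);
   evas_object_show(win);

   elm_run();
   return 0;
}
ELM_MAIN()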
All to support a tiny minority of users who haven't upgraded their machines in many years, or who are running oddball setups that aren't standard fare for a desktop stack anyway.
** On the note of WARP, it's a very interesting design. It provides a very high-performance, fully D3D11-compliant software rasterizer. It was designed to mimic the performance of real hardware in such a way that you can test your app's performance characteristics on WARP and have it translate well to how it'll scale on real hardware (that is, find the bottlenecks of your app; obviously you'd expect the app to run faster overall on real hardware, but you shouldn't expect that a call that took 2% of the render time on WARP will suddenly take 10% of the render time on hardware). It's also being extended/reused in the upcoming enhanced PIX tools from Microsoft, giving you a way to step through and inspect shader code in your unmodified app, without needing specialized hardware setups (like NVIDIA's tools or the tools for the Xbox 360's ATI graphics chip) to do it. Those are the kinds of developer-oriented tools that the FOSS graphics stack should be focusing on ASAP; Linux's biggest draw on the desktop is as a developer machine, yet it seems its developer tools are lagging further and further behind. Vim and GDB aren't exactly the cutting edge of developer tools anymore.
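To make that concrete, here is a minimal, hedged sketch of how an app opts into WARP instead of a hardware adapter when creating its D3D11 device; everything else in the app's rendering code stays the same, which is what makes it useful for profiling.

Code:
/* Minimal sketch: creating a D3D11 device on the WARP rasterizer.
 * Windows-only; compile as C and link against d3d11.lib.
 * Error handling kept minimal for brevity. */
#define COBJMACROS
#include <d3d11.h>
#include <stdio.h>

int main(void)
{
    ID3D11Device *device = NULL;
    ID3D11DeviceContext *context = NULL;
    D3D_FEATURE_LEVEL level;
    HRESULT hr;

    /* D3D_DRIVER_TYPE_WARP selects the software rasterizer; everything
     * else in the D3D11 code path is identical to the hardware case. */
    hr = D3D11CreateDevice(
        NULL,                  /* default adapter (unused for WARP)   */
        D3D_DRIVER_TYPE_WARP,  /* the WARP software rasterizer        */
        NULL,                  /* no external software rasterizer DLL */
        0,                     /* no creation flags                   */
        NULL, 0,               /* default feature-level list          */
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (FAILED(hr)) {
        printf("D3D11CreateDevice(WARP) failed: 0x%08lx\n", (unsigned long)hr);
        return 1;
    }
    printf("WARP device created, feature level 0x%x\n", level);
    ID3D11DeviceContext_Release(context);
    ID3D11Device_Release(device);
    return 0;
}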
That is not at all true. OpenGL has a lot of problems, but there's no reason it can't do standard 2D rendering faster than software. 2D rendering on the CPU is just (in simplified terms) a big loop over a 2D grid doing a lot of SIMD operations. That's _exactly_ what the GPU is designed to excel at, and OpenGL is just an API that exposes that hardware to software.

Quote:
In fact, OpenGL is not a 2D API and is really not as efficient as a software implementation could be.
That most of the FOSS drivers have major problems, and that precious few FOSS desktop developers have the first clue how to actually use OpenGL properly to do fast 2D, is possibly quite true. The vast majority of GUI toolkits and higher-level 2D rendering APIs are written like it's still the 1990s and don't actually have a render backend that's capable of working the way a GPU needs them to work. It's entirely, 100% possible to make one, though, and the speed benefits can be _massive_, not to mention the battery-life benefits for mobile devices (letting your GPU do a quick burst of work is more efficient than having your CPU chug through it for a longer period of time).
The primary problem is a complete lack of batching (that is, many GUI toolkits do one or more draw calls per widget/control, whereas it is possible to do a handful, or even just one, for the entire window) and a lack of understanding of how to use shaders to maximum effect. Even a lot of game-oriented GUI frameworks (for editors and such, or even ones designed for in-game HUDs and menu systems like ScaleForm) get this terribly wrong.
You have to design your render backend for a 2D system around how modern hardware actually works. You can't just slap an OpenGL backend into a toolkit that assumes it's using a software renderer.
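To illustrate the batching point, here is a rough sketch (not from any particular toolkit) of what such a backend does differently: every widget's rectangle is appended to one CPU-side buffer during the frame, and the whole window is then submitted with a single glDrawElements call. It assumes a current OpenGL context, Mesa-style prototypes (GL_GLEXT_PROTOTYPES) or an extension loader, and shader/buffer-object creation that is omitted here.

Code:
/* Hedged sketch: batching every widget quad of a frame into one buffer
 * and submitting it with a single draw call, instead of one draw call
 * per widget. In a real backend the arrays would live on the heap and
 * grow as needed. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

#define MAX_QUADS 1024

typedef struct { float x, y, u, v; unsigned int rgba; } Vertex;

typedef struct {
    Vertex         verts[MAX_QUADS * 4];
    unsigned short index[MAX_QUADS * 6];
    int            quads;
} QuadBatch;

/* Append one widget rectangle; note that no GL call happens here. */
void batch_add_quad(QuadBatch *b, float x, float y, float w, float h,
                    unsigned int rgba)
{
    unsigned short base = (unsigned short)(b->quads * 4);
    Vertex v[4] = {
        { x,     y,     0.f, 0.f, rgba },
        { x + w, y,     1.f, 0.f, rgba },
        { x + w, y + h, 1.f, 1.f, rgba },
        { x,     y + h, 0.f, 1.f, rgba },
    };
    unsigned short idx[6] = { base, base + 1, base + 2,
                              base, base + 2, base + 3 };
    memcpy(&b->verts[b->quads * 4], v, sizeof(v));
    memcpy(&b->index[b->quads * 6], idx, sizeof(idx));
    b->quads++;
}

/* One upload and ONE draw call for everything accumulated this frame. */
void batch_flush(QuadBatch *b, GLuint vbo, GLuint ibo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    (GLsizeiptr)(b->quads * 4 * sizeof(Vertex)), b->verts);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0,
                    (GLsizeiptr)(b->quads * 6 * sizeof(unsigned short)),
                    b->index);
    glDrawElements(GL_TRIANGLES, b->quads * 6, GL_UNSIGNED_SHORT, 0);
    b->quads = 0;
}

The CPU-side cost of a frame then stops growing with the number of widgets' draw calls, which is exactly the property the GPU needs to show its brute force.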
That covers the biggest cost. There are other cases where OpenGL isn't as fast as a software implementation, but those just impact the size and complexity of the area you can update before the brute force of the GPU catches up.
which is in an HP ProLiant DL360 G7 (release date: 2011-06-21)
But who runs a Linux Desktop on that? :confused:
If the WM does not understand those hints, they simply won't be displayed. If a Plasma theme has not been developed with that in mind, it can look awkward at times, but it'll still work!
KWin could remove all non-GL code right now and KPD would still work with OpenBox, TWM, etc.
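For what it's worth, that is exactly how such hints work at the X11 level: the client just sets a property on its window, and a WM that doesn't recognize the atom ignores it. A hedged sketch follows; the atom name is an assumption (KWin's blur-behind hint) and only stands in for "any compositor-specific hint".

Code:
/* Hedged sketch: setting a WM-specific hint as an X11 window property.
 * A window manager that does not recognize the atom simply ignores it,
 * so the window still works, just without the effect.
 * Build: cc hint.c -lX11 */
#include <X11/Xlib.h>
#include <X11/Xatom.h>

/* 'win' is an already-created window on 'dpy'. */
void request_blur_behind(Display *dpy, Window win)
{
    /* Assumption: KWin's blur-behind hint atom, used here as an example. */
    Atom blur = XInternAtom(dpy, "_KDE_NET_WM_BLUR_BEHIND_REGION", False);
    unsigned char none = 0; /* zero-length value: no region data attached */

    XChangeProperty(dpy, win, blur, XA_CARDINAL, 32, PropModeReplace,
                    &none, 0);
    XFlush(dpy);
}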
The most clueless party in that area is Canonical. They already had a Unity version (Unity2D) that was not a WM plugin. They could simply have said: "Look, we develop Unity[2D] with Compiz in mind. If Compiz doesn't work, it'll fall back to another WM. We don't support this, we won't write special fixes if Unity under Metacity looks strange at times, etc. It is just meant as a stopgap until you can install proper drivers."
But no, they decided to throw away the superior technology to concentrate on the plugin-based version.
1- ARM needs a decent open-source graphics driver no matter what. This is a requirement, and not just for DEs but for Linux to succeed in ARM territory... I fail to see how Chrome OS (Ubuntu 10) and Android can be hardware accelerated on ARM while there is no driver, open or binary, to use on Linux distros... this is pure stupidity.
2- The Linux desktop environments are A JOKE. Compiz is a joke, 3D effects on a DE are a JOKE. Wobbly windows and cubes: lube them and stick them up your ass.
The Linux desktop should be 1. clean, 2. functional, 3. professional-looking. What you have now is a bunch of DEs that either look like they were drawn with crayons (KDE) or that are having an existential crisis and don't know what they are supposed to be (GNOME).
I was running LXDE when I had that piece-of-shit HP with unsupported ATI graphics, AND guess what: now I have Intel HD graphics that are really nicely supported, AND I'M STILL running LXDE. Look at Lubuntu 12.10, now improve it, and THAT'S WHAT A LINUX DESKTOP SHOULD LOOK LIKE.
3- This llvmpipe stuff is a joke... trust me, I have a bunch of old laptops here with old-ass mobile Radeons. YOU THINK UNITY 2D OR GNOME FALLBACK WERE A GOOD EXPERIENCE?? Trust me when I say this: if you happen to have old-ass Radeons or any unsupported graphics card, llvmpipe or no llvmpipe, YOU AIN'T GONNA BE RUNNING ANY MODERN LINUX DISTRO
unless you waste hours upon hours editing and messing with xorg.conf and shadowfb and noaccel and all that shit.
This llvmpipe thing is sand in the eyes, a bone thrown to see if people shut up, but the reality remains:
old hardware + modern Linux distros = forget about it
and I know more about this than all of yous
Most of the server boards I go for have things like the Aspeed AST2050. These are very weak chips when it comes to graphics, but they have iKVM. Think of it as hardware VNC. Basically, the machines can be unbootably fucked, and yet I can manipulate them remotely as if I were actually sitting right at them, including, but not limited to, accessing the BIOS setup program.
There are no 3D drivers for the AST2050, nor would I want any kind of complex composited graphics environment running on servers I have to access over iKVM. The graphics load over the network would be a very, VERY bad thing. Older non-composited UIs are fine, because they do the blink-on-off thing, which keeps the screen updates light. Fading in and out, and all the various animations, are far too heavy to run over the network.
Actually, a lot of server boards still use very basic IGPs and won't use one of those 'Fusion-bridge' cores just so they can show a BIOS screen that is maybe seen once in its lifetime. I bet Google and its million servers won't care about the integrated GPUs.
Don't even say GPGPU; someone on this forum even said that the IGPs are simply not powerful enough to do anything meaningful.