So, a question for those stating the obvious, that GC'd languages and platforms tend to use more memory on average: what is the alternative? Abandon GC and make everyone manage memory manually?
There are trade-offs when choosing a language; no language is good for every use case. Manage memory manually where deterministic memory management is required, and take the productivity improvements of a GC'd language where it is not.
Now, Ubuntu Phone starts with QML/C++ in the first place, so the memory footprint and speed are already better. This new API comes in handy when you want to boost the speed of the application you are currently running: every other application is forced to clear its caches, so there is no more need to swap (a real problem on ARM). This makes the Linux pattern of "use as much memory as possible" even more prominent. It matters all the more on a mobile device, because you never really close applications; you just add more to the stack of currently open ones.
It is not all black and white.
You mentioned QML/C++ should make better use of memory. Remember that QML is also garbage collected, and it currently introduces quite a bit of overhead (this should be reduced when they switch to the V4 engine).
I would argue that C#/Mono would also fit well on mobile. Just as you can use C++ for performance-critical code in the C++/QML world, you can do the same in the C#/Mono world (https://github.com/mono/cppsharp; note this is not like traditional P/Invoke). You can also optimize Mono code with SIMD and inline assembly, and you can disable the GC for critical code segments (AFAIK this is not possible (yet) in QML).
You could then compile the C# code to a native binary [ngen on Windows, full AOT in Mono], bypassing the virtual machine and any JITting. The only overhead in that case should come from garbage collection and whatever horrible code one might have written, which puts you in pretty much the same situation as C++/QML.
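For what it's worth, the Mono full-AOT path described above looks roughly like this (a sketch of the standard Mono toolchain invocations; `app.cs`/`app.exe` are placeholder names):

```shell
# Compile C# to IL, then ahead-of-time compile the IL to native code.
mcs -out:app.exe app.cs      # C# source -> app.exe (IL)
mono --aot=full app.exe      # IL -> native image (app.exe.so on Linux)

# Run with the JIT disabled; only the GC remains as managed-runtime overhead.
mono --full-aot app.exe
```

(On Windows the rough equivalent would be `ngen install app.exe` after a normal build.)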
I am not saying that C#/Mono is better than C++/QML [I actually prefer QML to anything else out there], but dismissing them as bad for devices based on generalizations made about GC'd languages is incredibly shortsighted.
I'm not sure if this is a solved problem, so feel free to point it out if it is. Using Qt/QML would introduce a lot of overhead for developers when targeting a platform the scale of Android. The overhead would come from requiring a lot more testing (native code does not behave identically on all ARM processors), an increase in development time, and having to distribute multiple binaries for different ARM chips.
C#/Mono would have the same problem when compiled to native binaries.
I think the main difference is that the programmer has to poll constantly, for example by running a dedicated thread that makes a madvise call every 20 seconds or so and sleeps in between.
Here, by contrast, it would be abstracted behind a (common) signal system, which I find more elegant.
I am not sure about the testing bit, but I don't think it should really be a problem. Native code runs just fine across different x86 and x64 architectures from Intel, AMD, and whoever else. And ARM has matured enough by now that it can handle most of this (reference needed). I think what varies in an ARM SoC is not the CPU core itself but all the stuff you would normally find on a PC motherboard: connectivity, bridges, decoders, GPU, etc.
"Using Qt/QML would introduce a lot of overhead for developers if developing for a platform the scale of android"

What do you mean by that, and why?
"I use C# over .NET at work and I find it a pain to do even the most basic thing, especially because of all the boilerplate you have to write. Sure, it has a brilliant object model, but being strongly typed is a big turn-off for me."

What boilerplate do you mean? There is the "var" type for those who don't like strong typing. And for those who don't, are there many languages that don't GC either?