OK, so according to http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport, we have:
Originally Posted by elanthis
GCC -- 100% (46/46 features implemented)
Clang -- 96% (44/46 features implemented)
Intel C++ -- 74% (34/46 features implemented)
MSVC -- 65% (30/46 features implemented)
IBM XLC++ -- 50% (23/46 features implemented)
EDG eccp -- 39% (18/46 features implemented)
Embarcadero C++ Builder -- 35% (16/46 features implemented)
Sun/Oracle C++ -- 22% (10/46 features implemented)
HP aCC -- 20% (9/46 features implemented)
Digital Mars C++ -- 17% (8/46 features implemented)
[*] This does not count the features in Clang SVN/3.3, as it is not yet released; once it is, Clang will be at 100%. The figures also exclude Concepts, which appears on that list but is not part of C++11.
So according to this, the Intel and Microsoft compilers are ahead of the other proprietary compilers in terms of C++11 support, and the open-source compilers are (or soon will be) feature-complete.
There is Go ( http://en.wikipedia.org/wiki/Go_(programming_language) ), Google's systems language. It encourages you to use thousands of concurrent goroutines, if that is meaningful for your application.
Originally Posted by Ericg
LOL, right... And the user will simply have to wait until the video is decoded before watching it? Assuming 1080p (roughly 6 MB per decoded frame), or even 2160p/4320p, possibly at 60 fps or more?
Originally Posted by pingufunkybeat
I am surprised this BS is still being spread around. Compression is inherently serial. It cannot be done efficiently on a GPU; that is the reason dedicated hardware exists! All you can do is run image transforms and motion estimation in parallel, but that doesn't buy you much, as the heavy part is the decompression.
Originally Posted by erendorn
No way. Unless you have TBs of RAM to store the decompressed video in a cache, there is no point in decoding all the video in advance.
Originally Posted by log0
P.S.: This was meant as a response to pingufunkybeat.
Guys, the idea about parallelizing video mentioned "render", not encode/decode.
Maybe he meant generating the original video source data, e.g. in Blender, where the frames first need to be created and only then encoded with a codec. In the rendering stage, the frames can be independent.
I thought we were talking about ENCODING? Or even rendering, where each frame (each pixel, even) is calculated separately.
Originally Posted by log0
In the case of decoding you're right, but for encoding it's perfectly possible to parallelise down to block or pixel level. It won't be a linear speedup, but you can gain a lot if you parallelise properly.
Last edited by pingufunkybeat; 04-20-2013 at 09:49 AM.
^ (10 characters)
Originally Posted by aceman
I'm sure database-backed apps can make good use of many threads. Parallelized code in general also uses threads to handle the various concurrent operations going on.
Originally Posted by wargames
By default it's all single-threaded. You have to manually set GOMAXPROCS to make it use more than one thread.
Originally Posted by Drago
It may encourage goroutines, but they are not threads.