09-27-2012, 11:05 PM
Perhaps more users should ask their respective distro's package maintainers to have them included, like I did -- that resulted in the state trackers being made available for openSUSE 12.2, just not installed by default ... they are also available from the factory and xorg repos.
Originally Posted by blackiwid
As for MPEG2 not being used any more, or its decode being a cakewalk, I explained in that original message: "While MPEG2 decode support might seem rather pedestrian to some, it is beneficial to many others (example: the ATSC digital TV system uses MPEG2 for channel streams). In addition, users of hardware supported by the r300g and nouveau drivers would also be able to make use of the state trackers." There is also the obvious benefit to low-power or embedded devices with an anaemic CPU but a decent GPU (which may be supported by a respective OSS driver).
09-28-2012, 09:49 AM
The technical review states that HSA compatible GPUs will support context switching and preemption. I suppose there will be some sort of software scheduler. Or will it be handled completely by hardware?
Given the former case, are there already plans how to implement this in the linux kernel?
09-28-2012, 11:31 AM
Thanks! I won't understand everything, and due to my workload I won't read all of it, but I expect it will be an interesting read.
Originally Posted by entropy
09-28-2012, 01:14 PM
And more generally: what are the plans on the open-source side?
Originally Posted by olesalscheider
Will the HSA capable hardware feature full documentation (apart from UVD)?
10-03-2012, 07:39 PM
To do the basics, not long. To fully optimize the decoder, I'd budget a couple of months full time. I worked 40+ hours/week during my master's thesis' programming phase (~4 months) and 3 months full-time on the WebM OpenCL decoder, and I managed to finish/optimize about half of it.
Originally Posted by blackiwid
Some of that time was spent learning OpenCL and video decode algorithms, but I'd still budget a bit of time.
10-04-2012, 01:14 PM
As Christian said, if you can run it in OpenCL, you can probably do it with SSE. The real question is whether you can break the workload up into enough parallel threads to overcome the transfer latency over PCIe and the kernel/shader start-up overhead.
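To make the trade-off concrete, here is a back-of-envelope model of when offloading a decode stage pays off: the GPU path wins only when its parallel speedup outweighs the fixed PCIe transfer and kernel launch costs. This is a minimal sketch; all timing numbers are illustrative placeholders, not measurements from any real hardware.

```python
def offload_wins(cpu_time_ms, gpu_compute_ms, transfer_ms, launch_ms):
    """Return True if the GPU path beats the pure-CPU (e.g. SSE) path.

    The GPU total includes the fixed per-dispatch costs that a CPU
    implementation never pays: PCIe transfer and kernel start-up.
    """
    gpu_total = transfer_ms + launch_ms + gpu_compute_ms
    return gpu_total < cpu_time_ms

# One macroblock at a time: tiny compute, fixed overhead dominates.
print(offload_wins(cpu_time_ms=0.05, gpu_compute_ms=0.01,
                   transfer_ms=0.2, launch_ms=0.05))   # False

# A whole frame of macroblocks batched into one dispatch: overhead amortized.
print(offload_wins(cpu_time_ms=40.0, gpu_compute_ms=8.0,
                   transfer_ms=2.0, launch_ms=0.05))   # True
```

This is why batching work at frame granularity (rather than per-macroblock) matters so much for any PCIe-attached GPU decode path.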
Originally Posted by 89c51
WebM is similar to H.264 in many ways, and the detokenizing/decompressing of the input stream is a very serial process. Currently, the WebM decoder detokenizes/decompresses each macroblock immediately before running the subpixel filtering/idct/dequant stages. There has been some work recently on frame-level multithreading, but nothing has landed yet.
The best bet for OpenCL+WebM might be to rewrite the input decompression to run for an entire frame at a time, and then split the rest of the processing stages into a separate piece. Someone has been working on this, and has been talking about it on the WebM Devel mailing list. If you can split the serial portion of detokenizing/decompressing off into a separate thread that is only responsible for decompressing the input stream a frame at a time, that could open up some possibilities for task-level parallelism (1 thread to decompress, 1+ to decode), as well as making the CL portion of frame reconstruction simpler.
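The task-level split described above can be sketched with a simple producer/consumer pipeline: one thread handles the inherently serial entropy decode (detokenize/decompress) a frame at a time, while a worker runs the parallel-friendly reconstruction stages (dequant/idct/filtering). The frame contents and stage bodies below are stand-ins for illustration only, not the real libvpx code.

```python
import queue
import threading

def detokenize_frames(bitstream, out_q):
    # Serial stage: the entropy decoder must consume the stream in order,
    # so this stays on a single thread.
    for frame in bitstream:
        coeffs = [tok * 2 for tok in frame]   # stand-in for entropy decode
        out_q.put(coeffs)
    out_q.put(None)                           # end-of-stream sentinel

def reconstruct_frames(in_q, results):
    # Parallel-friendly stage: once a frame's coefficients are available,
    # its macroblocks are independent (an OpenCL kernel could go here).
    while (coeffs := in_q.get()) is not None:
        results.append(sum(coeffs))           # stand-in for idct/filtering

bitstream = [[1, 2, 3], [4, 5], [6]]          # hypothetical 3-frame stream
q, results = queue.Queue(maxsize=4), []
t1 = threading.Thread(target=detokenize_frames, args=(bitstream, q))
t2 = threading.Thread(target=reconstruct_frames, args=(q, results))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)    # [12, 18, 12]
```

The bounded queue keeps the decompressor from racing too far ahead of reconstruction, and the same hand-off point is where a whole frame's worth of coefficients could be uploaded to the GPU in one shot.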
10-15-2012, 12:40 PM
John Bridgman is no longer managing these efforts.
Last edited by Will00ard10; 10-15-2012 at 12:45 PM.
10-20-2012, 01:35 PM
Very well, thank you for sharing.
Last edited by Mi7ch3a2el; 10-20-2012 at 01:38 PM.