The best thing about this is that if the underlying technology ever takes off, there will be a significant base of prior art to invalidate the inevitable patents people will try to claim around it. You can't get them all, but at least there should be a basis for building something decent without them.
Originally Posted by scionicspectre
I hope this ends better than Theora did.
Xiph actually has a pretty good track record of industry acceptance. Vorbis audio was quickly adopted by chip manufacturers and was on just about any cheap (generic Chinese) MP3 player back in the day. Opus is expected to be the next standard for audio, and it's seeing immediate adoption.
Originally Posted by silix
This new thing coming from Xiph + Mozilla + independent developers (on the level of Jason Garrett-Glaser, aka Dark Shikari of x264 fame) has to happen. I think it has all the odds in its favor and a great team. I have a lot of respect for Monty. If he says it's gonna happen and already has a proof-of-concept implementation, then it's gonna happen.
If Jason (and hopefully other x264 devs) gets on board, then I have even more faith in this project.
Originally Posted by jntesteves
There has been a lot of talk about this idea over the years, but the problem is that it has NEVER been pushed past PARTIAL and/or THEORETICAL. There was some partial GPU assistance on some older video cards, but all in all, video decoding has always been done either in software or on dedicated hardware.
Originally Posted by plonoma
Now, that being said, this new side-transition may be more suitable for general-purpose OpenCL acceleration. Of course, that's at the expense of the massive power consumption typical of GPUs.
Newer graphics cards can do all the heavy lifting.
OpenCL on the GPU is something that is somewhere in between.
GPUs are made for doing graphical work and are also more efficient when used to decode video.
Not as efficient as an ASIC but much more efficient than using the CPU.
The implementation of GPU encoders and decoders is advancing.
There is a big effort to do more with the GPU nowadays.
Seen the release notes for recent Adobe products? Lots of stuff has moved to the GPU.
Even more nicely, you can use OpenGL 4.3 compute shaders to do all the decoding without any of the painful memcopies that plague OpenCL.
Originally Posted by plonoma
Look at the various DXVA2 levels (notice AnandTech tests QuickSync separately, so DXVA2 isn't using the Intel-provided hardware decoding) and madVR (the original madVR release seems like it was mostly like XVideo, but it seems to offer far more now). Not for Linux, but apparently tremendously efficient.
Originally Posted by droidhacker
With OpenCL you should be able to do similar things on Linux, I'd imagine, but it just hasn't been done because there hasn't been sufficient interest from the right people.
DXVA is the MS equivalent of VDPAU or VA-API. It's not shader-based decoding, beyond the standard post-processing effects.
Originally Posted by liam
GPU hardware is not friendly to H.264 decoding, no matter what API (OpenCL or otherwise) you use.
Something doesn't make sense. According to the link, they were using DXVA2 (with two different rendering options) on Haswell. Three test variations were run: two with DXVA2 and one using QuickSync. Since QuickSync is how you accelerate video on Intel, what was DXVA2 using when it wasn't using QuickSync?
Originally Posted by smitty3268
That link says DXVA can use off-host acceleration of certain parts of a codec, implying that it will accelerate whatever it can. So it has various entry points, similar to VDPAU/VA-API, as you say, and you can use DXVA without targeting dedicated decode hardware. Moreover, from what Bridgman has said, and from processing pipelines I've seen, it seems like the only part of decoding that can't be handled well on the GPU is the entropy coding (which can be a big chunk of the work, admittedly). That seems to be what's being done in the AT article.
I'd never heard of dxva prior to that article so bear with me if I misunderstand.
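To see why entropy decoding in particular resists GPU offload, here's a toy sketch (not any real codec's coder, just an illustrative arithmetic coder with a hypothetical fixed symbol probability `p0`): every iteration of the decode loop updates shared interval state that the next symbol depends on, so symbols can't be split across parallel GPU threads the way independent macroblocks can.

```python
# Toy arithmetic coder illustrating the serial dependency in entropy
# decoding. Assumed/simplified: a fixed probability p0 for bit 0,
# float arithmetic instead of a real codec's integer range coder.

def encode(bits, p0=0.6):
    low, rng = 0.0, 1.0
    for b in bits:
        split = rng * p0
        if b == 0:
            rng = split          # keep the lower sub-interval
        else:
            low += split         # move to the upper sub-interval
            rng -= split
    return low + rng / 2         # any value inside the final interval

def decode(code, n, p0=0.6):
    low, rng = 0.0, 1.0
    out = []
    for _ in range(n):           # strictly sequential: iteration i needs
        split = rng * p0         # the (low, rng) left by iteration i-1
        if code < low + split:
            out.append(0)
            rng = split
        else:
            out.append(1)
            low += split
            rng -= split
    return out

bits = [1, 0, 0, 1, 1, 0, 1, 0]
assert decode(encode(bits), len(bits)) == bits
```

The stages after this (inverse transform, motion compensation) operate per block with no such chained state, which is why they map well onto shaders while the bitstream parsing stays on the CPU.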
Last edited by liam; 06-26-2013 at 02:59 AM.