
Thread: Daala: A Next-Generation Video Codec From Xiph

  1. #11
    Join Date
    Oct 2008
    Posts
    3,246

    Default

    Quote Originally Posted by scionicspectre View Post
    Doesn't hurt to have an option. I don't think Xiph has ever expected devices to quickly adopt any of their new technology. It'll come around eventually, so long as it's truly an improvement. There's more open software in consumers' homes today than ever before, so there's a chance it will make inroads someday. Aside from that, it could be interesting to experiment with for the potential applications, so I'd say it's worth the effort.
    The best thing about this is that if the underlying technology ever takes off, there will be a significant base of prior art to invalidate the inevitable patents people will try to claim around it. You can't get them all, but at least there should be a basis for something decent without them.

  2. #12
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,666

    Default

    I hope this ends better than Theora did.

  3. #13
    Join Date
    Jul 2009
    Location
    São Paulo, Brazil
    Posts
    74

    Default

    Quote Originally Posted by silix View Post
    the only thing I'm worried about is what kind of acceptance this can hope for, given that:
    - it's not an industry-wide standard (as in, embraced by software AND appliance vendors)
    - it's based on different techniques, and thus doesn't rely on the same processing "blocks" (e.g. the DCT) that chips with video decoding capabilities can usually handle
    - desktop computing is on the decline, increasingly replaced by portable, small-device computing - but hardware-based video decoding (offloading) matters on those devices...
    Xiph actually has a pretty good track record of industry acceptance. Vorbis audio was quickly adopted by chip manufacturers and was on every cheap (generic Chinese) MP3 player back in the day. Opus is expected to be the next standard for audio, and it's seeing immediate adoption.
    This new thing coming from Xiph + Mozilla + independent developers (of the caliber of Jason Garrett-Glaser, aka Dark Shikari of x264 fame) has to happen. I think it has all the odds in its favor and a great team. I have a lot of respect for Monty. If he says it's gonna happen and already has a proof-of-concept implementation, then it's gonna happen.

  4. #14
    Join Date
    Sep 2008
    Location
    Seattle, WA, US
    Posts
    122

    Default

    Quote Originally Posted by jntesteves View Post
    Xiph actually has a pretty good track record of industry acceptance. Vorbis audio was quickly adopted by chip manufacturers and was on every cheap (generic Chinese) MP3 player back in the day. Opus is expected to be the next standard for audio, and it's seeing immediate adoption.
    This new thing coming from Xiph + Mozilla + independent developers (of the caliber of Jason Garrett-Glaser, aka Dark Shikari of x264 fame) has to happen. I think it has all the odds in its favor and a great team. I have a lot of respect for Monty. If he says it's gonna happen and already has a proof-of-concept implementation, then it's gonna happen.
    If Jason (and hopefully other x264 devs) gets involved, then I have even more faith in this project.

  5. #15
    Join Date
    Oct 2009
    Posts
    2,145

    Default

    Quote Originally Posted by plonoma View Post
    @silix
    About hardware acceleration mattering on mobile:
    most high-end and mid-range mobile GPUs are starting to support OpenCL,
    which could provide a good basis for video decoding,
    allowing things like Daala to run well enough (video decoding fast enough for fluent playback) without having to add extra hardware.
    There has been a lot of talk about this idea over the years, but the problem is that it has NEVER been pushed past PARTIAL and/or THEORETICAL implementations. There was some partial GPU assistance on some older video cards, but all in all, video decoding has always been done either in software or on dedicated hardware.

    Now, that being said, this new transform approach may be more suitable for general-purpose OpenCL acceleration. Of course, that's at the expense of the massive power consumption typical of GPUs.
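    To make the parallelism argument concrete, here is a hedged sketch (my own naive Python, not Daala's or any real decoder's code): the blockwise inverse DCT that conventional codecs spend decode time on is a pure map over independent 8x8 blocks, which is exactly the kind of work GPU-offload proposals hope to win on.

    ```python
    import math

    N = 8  # conventional 8x8 transform block

    def idct_1d(coeffs):
        """Naive 1-D inverse DCT (DCT-III with orthonormal scaling)."""
        out = []
        for n in range(N):
            s = 0.0
            for k in range(N):
                ck = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
                s += ck * coeffs[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
            out.append(s)
        return out

    def idct_2d(block):
        """Separable 2-D inverse DCT: transform rows, then columns."""
        rows = [idct_1d(row) for row in block]
        cols = [idct_1d(list(col)) for col in zip(*rows)]
        return [list(row) for row in zip(*cols)]

    def decode_blocks(blocks):
        # Each block transform is independent of every other block, so this
        # map is trivially parallelizable -- one GPU work-group per block.
        return [idct_2d(b) for b in blocks]
    ```

    A DC-only coefficient block (only `block[0][0]` nonzero) decodes to a flat block of pixels, which is a handy sanity check. The serial bottleneck sits *before* this stage, in the entropy decoder that produces the coefficients.
    
    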

  6. #16
    Join Date
    Sep 2010
    Posts
    491

    Default

    @droidhacker
    Newer graphics cards can do all the heavy lifting.
    https://www.google.be/#q=video+decod...pletely+on+GPU

    OpenCL on the GPU is somewhere in between.
    GPUs are made for graphical work and are also more efficient when used to decode video:
    not as efficient as an ASIC, but much more efficient than using the CPU.

    The implementation of GPU encoders and decoders is advancing.
    There is a big effort to do more with the GPU nowadays.
    Have you seen the release notes of recent Adobe products? Lots of stuff has moved to the GPU.

  7. #17
    Join Date
    Dec 2012
    Posts
    586

    Default

    Quote Originally Posted by plonoma View Post
    @droidhacker
    Newer graphics cards can do all the heavy lifting.
    https://www.google.be/#q=video+decod...pletely+on+GPU

    OpenCL on the GPU is somewhere in between.
    GPUs are made for graphical work and are also more efficient when used to decode video:
    not as efficient as an ASIC, but much more efficient than using the CPU.

    The implementation of GPU encoders and decoders is advancing.
    There is a big effort to do more with the GPU nowadays.
    Have you seen the release notes of recent Adobe products? Lots of stuff has moved to the GPU.
    Even more nicely, you can use OpenGL 4.3 compute shaders to do all the decoding without any of the painful memory copies that plague OpenCL.

  8. #18
    Join Date
    Jan 2009
    Posts
    1,496

    Default

    Quote Originally Posted by droidhacker View Post
    There has been a lot of talk about this idea over the years, but the problem is that it has NEVER been pushed past PARTIAL and/or THEORETICAL implementations. There was some partial GPU assistance on some older video cards, but all in all, video decoding has always been done either in software or on dedicated hardware.

    Now, that being said, this new transform approach may be more suitable for general-purpose OpenCL acceleration. Of course, that's at the expense of the massive power consumption typical of GPUs.
    Look at the various DXVA2 levels (notice AT tests Quick Sync separately, so DXVA2 isn't using the Intel-provided hardware decoding) and madVR (the original madVR release seems like it was mostly like XVideo, but it seems to offer far more now). Not for Linux, but apparently tremendously efficient.
    http://www.anandtech.com/show/7007/i...-perspective/5

    With OpenCL you should be able to do similar things on Linux, I'd imagine, but it just hasn't been done because there hasn't been sufficient interest from the right people.

  9. #19
    Join Date
    Oct 2008
    Posts
    3,246

    Default

    Quote Originally Posted by liam View Post
    Look at the various DXVA2 levels (notice AT tests Quick Sync separately, so DXVA2 isn't using the Intel-provided hardware decoding) and madVR (the original madVR release seems like it was mostly like XVideo, but it seems to offer far more now). Not for Linux, but apparently tremendously efficient.
    http://www.anandtech.com/show/7007/i...-perspective/5

    With OpenCL you should be able to do similar things on Linux, I'd imagine, but it just hasn't been done because there hasn't been sufficient interest from the right people.
    DXVA is the MS equivalent of VDPAU or VAAPI. It's not shader-based decoding, beyond the standard post-processing effects.

    GPU hardware is not H.264-decoding friendly, no matter what kind of API (like OpenCL) you use.

  10. #20
    Join Date
    Jan 2009
    Posts
    1,496

    Default

    Quote Originally Posted by smitty3268 View Post
    DXVA is the MS equivalent of VDPAU or VAAPI. It's not shader-based decoding, beyond the standard post-processing effects.

    GPU hardware is not H.264-decoding friendly, no matter what kind of API (like OpenCL) you use.
    Something doesn't make sense. According to the link, they were using DXVA2 (with two different rendering options) on Haswell. Three test variations were run: two with DXVA2 and one using Quick Sync. Since Quick Sync is how you accelerate video on Intel, what was DXVA2 using when it wasn't using Quick Sync?
    http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx
    That link says DXVA can use off-host acceleration of certain parts of a codec, implying that it will accelerate what it can. So it has various entry points, similar to VDPAU/VAAPI, as you say. So you can use DXVA without targeting dedicated decode hardware. Moreover, from what Bridgman has said, and from processing pipelines I've seen, it seems like the only part of decoding that can't be handled well on the GPU is the entropy coding (which can be expensive, admittedly). That seems to be what's being done in the AT article.
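    Why entropy decoding resists GPU parallelism is easy to see in a toy model. The following is a hedged sketch of an arithmetic coder with a fixed, assumed probability P(bit=0) = 7/10, using exact fractions to sidestep the renormalization and carry handling real range coders need; it is my own illustration, not Daala's or H.264's actual coder. The point is in `decode()`: the interval that decides bit i depends on every bit decoded before it, so the loop is inherently serial.

    ```python
    from fractions import Fraction

    P0 = Fraction(7, 10)  # assumed static probability of a 0 bit

    def encode(bits):
        """Narrow the interval [0, 1) once per bit; return a number inside it."""
        low, width = Fraction(0), Fraction(1)
        for b in bits:
            if b == 0:
                width *= P0
            else:
                low += width * P0
                width *= (1 - P0)
        return low + width / 2  # any value in [low, low+width) identifies the bits

    def decode(code, nbits):
        """Mirror of encode(): each decision depends on ALL previous ones."""
        low, width = Fraction(0), Fraction(1)
        out = []
        for _ in range(nbits):
            split = low + width * P0
            if code < split:
                out.append(0)
                width *= P0
            else:
                out.append(1)
                low = split
                width *= (1 - P0)
        return out
    ```

    Round-tripping any bit string (`decode(encode(bits), len(bits)) == bits`) shows the coder works; the serial state chain through `low` and `width` is what keeps this stage on the CPU or on dedicated hardware.
    
    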

    I'd never heard of DXVA prior to that article, so bear with me if I misunderstand.
    Last edited by liam; 06-26-2013 at 03:59 AM.
