NVIDIA Releases Standalone VDPAU Library
Phoronix: NVIDIA Releases Standalone VDPAU Library
While NVIDIA developed VDPAU (the Video Decode and Presentation API for Unix, one awesome way of accelerating HD video playback with great results) for use in their proprietary graphics driver, the API itself is open and has been well adopted by multimedia applications. VDPAU has worked out so well and reached such critical mass that there is a VDPAU back-end for Intel's VA-API, and work is underway on bringing VDPAU support directly to Intel's graphics driver. VDPAU may one day end up being used in other open-source drivers too...
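The back-end arrangement mentioned above can be pictured roughly like this: an application written against VA-API still ends up on NVIDIA's VDPAU implementation underneath. This is a toy Python model of that layering only, not the real libva/libvdpau C APIs, and every class and method name here is invented for illustration:

```python
# Illustrative sketch of the VA-API-over-VDPAU layering (invented names,
# not the real C entry points).

class VdpauBackend:
    """Stand-in for the driver-provided VDPAU implementation."""
    def decode(self, codec, bitstream):
        return f"decoded {len(bitstream)} bytes of {codec} via VDPAU"

class VaApiFrontend:
    """Stand-in for a VA-API front end dispatching to a pluggable back-end."""
    def __init__(self, backend):
        self.backend = backend

    def decode_picture(self, codec, bitstream):
        # The front end translates VA-API calls into the back-end's entry
        # points; the application never talks to VDPAU directly.
        return self.backend.decode(codec, bitstream)

# An app that only knows VA-API transparently gets VDPAU acceleration:
player = VaApiFrontend(VdpauBackend())
print(player.decode_picture("h264", b"\x00\x01\x02"))
# prints: decoded 3 bytes of h264 via VDPAU
```

The point of the layering is that applications only need to target one decode API while distinct driver back-ends supply the actual acceleration.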
So what are the odds of having VDPAU support in xorg-driver-ati one day in the future?
Last time I checked, the documentation released by AMD lacked any info on the video decoder. Also, as bridgman pointed out earlier, the early HD Radeons had the acceleration implemented in such a way that documenting it would compromise HDCP (or something along those lines).
Originally Posted by DoDoENT
I'd say video acceleration for R600/R700 is simply not coming to opensource drivers (not from official sources), we might see it in fglrx if we're lucky.
Such is the sorry state of Linux graphics - my choices are Intel (drivers in a quantum state; getting all the features to work with adequate performance is next to impossible for a casual Linux user), AMD (no open-source 3D for cards less than 4 years old, and the proprietary driver has problems with some basic functionality (Xv) and with switching between power modes), and NVIDIA (no open-source driver worth using yet; the proprietary driver mostly works, but it lacks some common features (XRandR 1.2), and there's always the creeping shadow of NVIDIA's 'Bumpgate'). Other graphics vendors have no drivers/hardware worth using.
I don't believe that's an entirely accurate statement. There are different levels of video acceleration... the difference is in how much of the decode process is accelerated. Right now we DO have acceleration -- though only very basic Xv. Playing a full-HD video right now *does* peg any CPU that isn't at least a fairly recent dual-core or better. Offloading a -- let's call it a "significant chunk" -- of the work to the GPU (even without using the video decoder junk in the GPU) would take a significant load off the CPU and hopefully make HD playback stable even on older dual-core processors (maybe even single-cores).
Originally Posted by myxal
Now the question you need to ask yourself is this: how much acceleration do you really need? My "tv computer" is an older X2-3800 that I recently picked up for free + an RHD3650 ($40). HD video playback goes like this:
720p single-threaded: fairly OK with the occasional chop. Very watchable.
720p multi-threaded: perfect.
1080p single-threaded: unwatchable, drops about 50%.
1080p multi-threaded: fairly OK with the occasional chop. About the same as 720p single-threaded.
So how much acceleration do *I* need on this "$40" computer to make 1080p perfect? The answer is *not much*. And that's on old junk.
Here's what bridgman has to say about video decode acceleration:
You make a good point here. We shouldn't have to spend more than 50 bucks if all we want is to watch HD content.
Originally Posted by lbcoder
I think the problem is with people that spent 150 or more and want to get the most out of their hardware.
On the other paw, developers seem to think we don't need anything more (at least given some level of video decode acceleration), and they'd be doing it with shaders. Someone just has to write it.
Originally Posted by myxal
Edit: Never mind, didn't read until the end. Apparently bridgman did say this in the other thread.
Actually, MPEG-2 and H.264 video decode acceleration for Intel G45 (GMA 4500HD et al.) is now being developed as a VA-API driver.
Originally Posted by phoronix
Since, according to Wikipedia, you can use VDPAU as a VA-API backend, that shouldn't make a significant difference, I think...
Originally Posted by gbeauche
This means you need an application supporting VA-API to use it.
Originally Posted by nanonyme