
Thread: AMD's X-Video Bitstream Acceleration

  1. #1
    Join Date
    Jan 2007
    Posts
    15,186

    Default AMD's X-Video Bitstream Acceleration

    Phoronix: AMD's X-Video Bitstream Acceleration

    In early September we shared that UVD2 and XvMC support was coming to Linux and that two new library files had begun shipping with the ATI Catalyst driver: AMDXvBA and XvBAW. Earlier this month the Unified Video Decoding 2 (UVD2) support was enabled by default in the Catalyst 8.10 driver. These video acceleration improvements to the ATI Linux driver aren't exactly end-user friendly yet, but today we have information on how those interested can begin using the X-Video Motion Compensation extension with their ATI hardware, along with what the XvBA extension will provide users in terms of advanced video acceleration that is very similar to Microsoft's DirectX Video Acceleration.

    http://www.phoronix.com/vr.php?view=13034

  2. #2
    Join Date
    Jun 2007
    Posts
    260

    Default

    Well, it sounds interesting to say the least. By how much will it reduce CPU usage when playing an HD video (for example, a 1280x720 H.264 video)?
    It still wouldn't matter to me, though, as I don't have an HD 4000 series card.

  3. #3
    Join Date
    Jul 2007
    Posts
    405

    Default

    If it does bitstream acceleration, it will take virtually the entire video decode load off your CPU. In other words, the only work your CPU will be doing is formatting the bitstream to send to the GPU and decoding the audio.
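    To illustrate the division of labor described above, here is a purely conceptual Python sketch. All function names are hypothetical (XvBA's actual API has not been published); the point is only that with bitstream-level acceleration the CPU demuxes the container and hands compressed packets to the GPU, while the heavy video decode work and only the audio decode remain on their respective sides.

    ```python
    # Conceptual sketch of bitstream-level decode acceleration.
    # Every name here is hypothetical, not part of XvBA or any real API.

    def demux(container):
        """CPU side: split the container into compressed packets (cheap work)."""
        return [("video", pkt) for pkt in container["video"]] + \
               [("audio", pkt) for pkt in container["audio"]]

    def gpu_decode_video(bitstream_packet):
        """GPU side: entropy decoding, IDCT, and motion compensation all run
        on the video hardware; the CPU merely submits the packet."""
        return f"frame<{bitstream_packet}>"

    def cpu_decode_audio(audio_packet):
        """Audio decode stays on the CPU."""
        return f"samples<{audio_packet}>"

    def play(container):
        frames, samples = [], []
        for kind, pkt in demux(container):
            if kind == "video":
                frames.append(gpu_decode_video(pkt))   # offloaded to the GPU
            else:
                samples.append(cpu_decode_audio(pkt))  # remaining CPU work
        return frames, samples

    container = {"video": ["v0", "v1"], "audio": ["a0"]}
    frames, samples = play(container)
    print(frames, samples)  # ['frame<v0>', 'frame<v1>'] ['samples<a0>']
    ```

    Contrast this with XvMC-style motion-compensation acceleration, where the CPU still does the entropy decoding and only the later pipeline stages are offloaded.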

  4. #4
    Join Date
    Jun 2008
    Posts
    61

    Default

    Good news. At this pace, the official AMD driver for Linux will soon match its Windows performance ;-)

  5. #5
    Join Date
    Jun 2008
    Posts
    61

    Default

    I have a question about DirectX/OpenGL... As I understand it, both are high-level APIs, and their functions in turn make calls into the video driver.
    E.g.:
    [App] -> D3D call -> [D3D Lib] -> driver-level call -> [DRIVER] -> final bits sent to HW -> [Video card]

    Is that understanding correct?

    What I would like to know is whether that "driver-level call" has a common format across all video cards (I believe so, because nobody requires me to reinstall DirectX if I buy another card issued _after_ the installed version of Direct3D was released...).

    Another question: how much abstraction is there in that "driver-level call" -- is it close to the raw hardware commands?


    I'm also wondering why Wine emulates Direct3D via OpenGL. Wouldn't it be simpler to just send the driver the same calls that Microsoft's DirectX sends?
    Last edited by mityukov; 10-29-2008 at 11:06 AM.

  6. #6
    Join Date
    Feb 2008
    Location
    127.0.0.1
    Posts
    89

    Angry

    It is more than sad, if I understand it correctly.

    What I understand is that today there is no way to decode HD movies on the GPU.
    Recently I installed Ubuntu on an HP xw4200 powered by an Intel Pentium 4 551 @ 3.4GHz.
    This machine is incapable of smooth playback of a 720p movie.
    I hate the idea that I'll have to install XP Media Center.

  7. #7
    Join Date
    Sep 2008
    Posts
    39

    Default

    Don't know what to think of this. Generally I think it's good that there is finally a solution for video acceleration (for more than just MPEG-2). But the way it's done, I just can't like it. It's put in as a closed-source feature that will probably never be opened up or shared with anyone. So what then? Each graphics card manufacturer is going to code their own video acceleration API? That will be a nice mess. I would have preferred it if AMD had worked together with Intel and NVIDIA on a proper, standard implementation of video acceleration for Linux. With VA-API there is at least a tiny something to build on, so I don't understand why AMD didn't just finish that. Of course it's easier to just "reuse" the Windows bits, but that normally never works well if you take parts of one OS and put them 1:1 into another. I would say Linux is too different from Windows for this.

  8. #8
    Join Date
    Aug 2008
    Location
    Netherlands
    Posts
    285

    Default

    Quote Originally Posted by bash View Post
    Don't know what to think of this. Generally I think it's good that there is finally a solution for video acceleration (for more than just MPEG-2). But the way it's done, I just can't like it. It's put in as a closed-source feature that will probably never be opened up or shared with anyone. So what then? Each graphics card manufacturer is going to code their own video acceleration API? That will be a nice mess. I would have preferred it if AMD had worked together with Intel and NVIDIA on a proper, standard implementation of video acceleration for Linux. With VA-API there is at least a tiny something to build on, so I don't understand why AMD didn't just finish that. Of course it's easier to just "reuse" the Windows bits, but that normally never works well if you take parts of one OS and put them 1:1 into another. I would say Linux is too different from Windows for this.
    I'm afraid that road would take considerably more time than the current path. Hopefully a future Linux standard will bring benefits for AMD/ATI so that they consider building on it. For now it's the most pragmatic approach they could choose, from my point of view. In the long run there will be a standard for sure.

  9. #9
    Join Date
    Oct 2008
    Posts
    143

    Default

    Will using XvMC remove video tearing?

  10. #10
    Join Date
    Dec 2007
    Location
    /dev/hell
    Posts
    297

    Default

    Sad to know that, like CrossFire, features are introduced only for the newest cards...

    It would be even sadder if this feature is not backported!
