
Thread: A Video Decoding Interface For Gallium3D

  1. #1
    Join Date
    Jan 2007
    Posts
    15,113

    Default A Video Decoding Interface For Gallium3D

    Phoronix: A Video Decoding Interface For Gallium3D

    Yesterday we talked about the Nouveau Gallium3D video improvements that now allow 1080p video clips to play with this open-source driver stack. Today there's an ongoing discussion about a proper video decoding interface for Gallium3D. Younes Manton, who is responsible for some of the Nouveau work and for Generic GPU Video Decoding using shaders, has proposed a proper video decoding interface for this new driver infrastructure...

    http://www.phoronix.com/vr.php?view=NzAwOQ
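
    To give a rough feel for what such a decoding interface could look like, here is a small C sketch; every name in it (sketch_video_decoder, decode_picture, the codec and entry-point enums) is hypothetical and is not taken from the actual proposal -- it only shows the general shape of a decoder object a state tracker could drive.

    /* Hypothetical sketch only: every name below is made up for illustration
     * and does not reflect the actual interface proposed for Gallium3D. */

    /* Codecs a state tracker (e.g. an XvMC front end) might ask for. */
    enum sketch_video_codec {
       SKETCH_CODEC_MPEG12,
       SKETCH_CODEC_MPEG4_AVC
    };

    /* How far down the pipeline the CPU goes before handing off to the GPU. */
    enum sketch_entry_point {
       SKETCH_ENTRY_BITSTREAM, /* GPU/fixed function does everything           */
       SKETCH_ENTRY_IDCT,      /* CPU does bitstream decode, GPU does the rest */
       SKETCH_ENTRY_MC         /* CPU does bitstream + IDCT, GPU does MC onward */
    };

    struct sketch_video_surface; /* opaque decoded-picture buffer */

    /* Per-stream decoder object a state tracker would create and drive. */
    struct sketch_video_decoder {
       enum sketch_video_codec codec;
       enum sketch_entry_point entry;
       unsigned width, height;

       /* Decode one picture into dst, using refs for inter prediction. */
       void (*decode_picture)(struct sketch_video_decoder *dec,
                              struct sketch_video_surface *dst,
                              struct sketch_video_surface **refs,
                              unsigned num_refs,
                              const void *data, unsigned size);

       void (*destroy)(struct sketch_video_decoder *dec);
    };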

  2. #2
    Join Date
    Jan 2008
    Posts
    138

    Default

    I think this is the wrong approach, because video decoding should be done on the dedicated hardware that comes on the card, not on the general-purpose 3D engines. Decoding video on the 3D engines will degrade 3D performance (i.e. you wouldn't be able to spin the cube while a video is running).

  3. #3
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,514

    Default

    I think you'll see that in the closed drivers; it's not clear yet how practical it will be to use the dedicated hardware in the open drivers.

    Note that the 3D engine does all the back-end processing anyway (colour space conversion, scaling, deinterlacing, post-processing, etc.) even when running with dedicated hardware, so extending that a bit further up the pipe is not a big deal. The dedicated hardware is mostly useful for the very front-end work: performing bitstream decoding and managing spatial prediction.
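
    As a concrete example of that back-end work, here is the BT.601 YCbCr-to-RGB conversion written out in plain C; this is the per-pixel maths a fragment shader would run over the frame, shown as C purely for illustration rather than taken from any driver.

    /* Plain C stand-in for the per-pixel colour space conversion (BT.601
     * YCbCr -> RGB) that the 3D engine's shaders perform during video output. */
    #include <stdint.h>

    static uint8_t clamp8(float v)
    {
       return v < 0.0f ? 0 : v > 255.0f ? 255 : (uint8_t)(v + 0.5f);
    }

    /* The maths a shader would run for every pixel of the frame. */
    static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                             uint8_t *r, uint8_t *g, uint8_t *b)
    {
       float yf  = (float)y;
       float cbf = (float)cb - 128.0f;
       float crf = (float)cr - 128.0f;

       *r = clamp8(yf + 1.402f * crf);
       *g = clamp8(yf - 0.344136f * cbf - 0.714136f * crf);
       *b = clamp8(yf + 1.772f * cbf);
    }
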
    Last edited by bridgman; 01-21-2009 at 08:46 AM.

  4. #4
    Join Date
    Nov 2008
    Location
    California
    Posts
    61

    Question

    Bridgman, if Gallium3D video decoding pans out, could Linux users soon have *ALL* video formats accelerated through the GPU shaders? Assuming we disregard the software patents and whatnot.

  5. #5
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,185

    Default

    I'm not bridgman, but it should be possible to accelerate everything. Though parts of it could then be illegal in the US ($DEITY bless living in Europe).

  6. #6
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,514

    Default

    Yes, although some low-end and older mid-range cards would probably not have enough shading power to keep up with HD resolutions -- they have enough throughput for Xv but not a lot left over. It would depend on the format, but I would assume anything below an X1650 or X800 wouldn't have the shader power. For more recent cards, you'd probably want an HD 2600, 36xx, or 46xx to have some shader power left over for decoding.

    An ideal implementation would let you choose where to switch processing from the CPU to the GPU, so that even low-end hardware could at least accelerate motion compensation (MC) - the nice thing is that the last stages in the pipe (MC, colour space conversion, scaling, etc.) tend to be the hardest for the CPU and the easiest for the GPU, so the idea of "everything past this point is done on the GPU" works well.
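
    A rough C sketch of that "pick the handoff point" idea follows; the stage names and the decode_frame helper are made up purely for illustration and don't come from any real driver.

    /* Hypothetical sketch of the "everything past this point is done on the
     * GPU" idea; the stage names and dispatch are invented for the example. */
    #include <stdbool.h>

    enum decode_stage {
       STAGE_BITSTREAM, /* entropy/bitstream decode -- cheapest stage for a CPU */
       STAGE_IDCT,      /* inverse transform                                    */
       STAGE_MC,        /* motion compensation      -- heavy for a CPU          */
       STAGE_CSC_SCALE  /* colour conversion, scaling, etc. -- trivial on a GPU */
    };

    /* Run every stage before gpu_from on the CPU, the rest as GPU passes. */
    static void decode_frame(enum decode_stage gpu_from)
    {
       for (int s = STAGE_BITSTREAM; s <= STAGE_CSC_SCALE; s++) {
          bool on_gpu = (s >= (int)gpu_from);
          /* dispatch here to a CPU routine or a shader pass, e.g. a
           * hypothetical run_stage((enum decode_stage)s, on_gpu); */
          (void)on_gpu;
       }
    }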

    This is just a guess, so don't buy hardware based on it.
    Last edited by bridgman; 01-24-2009 at 07:22 AM.
