
Thread: AMD Releases Open-Source UVD Video Support

  1. #191
    Join Date
    Apr 2011
    Posts
    309

    Default

    Quote Originally Posted by jrch2k8 View Post
    1.) "Is it possible for any compiler on earth to be OS-specific?" Yes. If all the OSes run on the same architecture, any compiler can do the job, since the IR and the final ASM/mnemonics target the same hardware. But, for example, Linux and Solaris run on SPARC while Windows doesn't, so you can't build a compiler whose SPARC output runs on Windows: the final ASM/mnemonics are SPARC-specific and won't run on anything besides Linux/Solaris, and the compiler itself uses OS-specific kernel routines (POSIX) on those systems. Hence you can neither build that compiler on Windows nor run its compiled output there; hence it is OS-dependent.

    2.) You say I can't implement just the HLSL compiler in Wine?

    a.) You can, but the result has to be interpretable by OpenGL, since you don't have a DirectX driver: a compiler sits below Wine's DirectX layer, and Wine translates to OpenGL anyway.
    b.) You can use a specific IR like AMD IL / NV IR / TGSI / etc., but then you have to manually handle how the context interprets the shader and links it to a GL object inside the framebuffer, since again you don't have a DirectX driver.
    c.) Write a DirectX driver yourself, patch Wine, and add the compiler to the driver.


    1) What I don't want: an OpenGL-based D3D compiler, or translating D3D to OGL. When you try to map HLSL to GLSL, a word-for-word translation is often impossible. That's why we call it emulation: something that needs 100 FLOPs to give a result in HLSL needs 120+ FLOPs once translated or compiled to GLSL, so it's inefficient.

    2) What I want: vendors to provide an HLSL front-end for their IL. Then, if I write an HLSL compiler, it won't be OGL-based (inefficient, plus ten times the work). From the IL's side what's missing is a front-end, but for a compiler the same thing is actually a back-end. A simpler job is to write some Wine code to make MS's D3D work with Wine (WineD3D works on Windows), then point the back-end extracted from the Windows driver (using Winetricks) at creating IL code, all inside a Wine session. It's far less work than anything else.
    Last edited by artivision; 04-05-2013 at 07:58 AM.

  2. #192
    Join Date
    Apr 2011
    Posts
    309

    Default

    Anyway thanks for the UVD support, but GPUs are all about gaming.

  3. #193
    Join Date
    Jul 2010
    Posts
    78

    Default You sure?

    Quote Originally Posted by artivision View Post
    Anyway thanks for the UVD support, but GPUs are all about gaming.
    My HTPC begs to differ. I'd love to be able to hardware-accelerate H.264 in XBMC with UVD.

  4. #194
    Join Date
    Apr 2013
    Posts
    5

    Default

    Thanks for the effort guys. It's really appreciated.
    Cheers!

  5. #195
    Join Date
    Jan 2013
    Posts
    1,116

    Default

    Quote Originally Posted by artivision View Post
    Anyway thanks for the UVD support, but GPUs are all about gaming.
    Of course, the HD3200 in my laptop is all about gaming. So is the HD4200 on the many mainboards with the RS880 chipset, and so are low-spec video cards like the HD54XX or 64XX. You must be kidding.

    Anyway, thanks for that, AMD! If you now get power management (PM) working properly, I could switch my laptop to the radeon driver without it overheating, I wouldn't have to use the legacy drivers anymore, and I could keep up with Slackware -current on that machine again.

  6. #196
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Quote Originally Posted by Hugh View Post
    If you publish specs, you might be endangering some trade secret. You certainly cannot endanger a copyright or a patent. We agree that trademarks aren't on the table. Your comment suggests to me that you are not talking enough with your lawyers. That surprises me since I picture you (or your predecessors) talking to them for half a dozen years.
    Hal2k1 was mentioning copyright. I included it in the larger list for completeness. Re: patents, the concern there is IP which is currently protected by trade secret but where we have patent applications in flight internally.

    Quote Originally Posted by Hugh View Post
    ...

    The sad thing is that in that five years, Intel has reached sufficient performance for video that it becomes the obvious choice. I bought a few cute Brazos systems but even the closed source drivers have made the experience disappointing. Stupid little things like not keeping up with the kernel, something caused by not being "in-tree".
    Intel had been working on releasing materials to open source for years before we started. If you said "the sad thing is that in 10 years (or maybe 8, not sure) Intel has reached sufficient performance for video..." I would agree. I'm sure we will reach at least the same level in the same time.

  7. #197
    Join Date
    Feb 2008
    Posts
    39

    Default

    Quote Originally Posted by arekm View Post
    I've played with UVD on my E-350 APU and I'm quite happy with the results. Tested with mplayer (mplayer -vo vdpau -vc ffh264vdpau)
    and also with new vdpau xbmc code. 1080p movies are now playing fine.

    I also tested the Adobe Flash plugin with the
    OverrideGPUValidation = 1
    EnableLinuxHWVideoDecode=1
    options set, and had rendering *and* decoding hardware-accelerated! 1080p YouTube now plays nicely where before I could only watch 480p.

    vdpauinfo reports these capabilities:

    Decoder capabilities:

    name level macbs width height
    -------------------------------------------
    MPEG1 16 1048576 16384 16384
    MPEG2_SIMPLE 16 1048576 16384 16384
    MPEG2_MAIN 16 1048576 16384 16384
    H264_BASELINE 16 9216 2048 1152
    H264_MAIN 16 9216 2048 1152
    H264_HIGH 16 9216 2048 1152
    VC1_SIMPLE 16 9216 2048 1152
    VC1_MAIN 16 9216 2048 1152
    VC1_ADVANCED 16 9216 2048 1152
    MPEG4_PART2_SP 16 9216 2048 1152
    MPEG4_PART2_ASP 16 9216 2048 1152
    Thanks, arekm, for testing on your E-350, and thanks to the AMD team for making this happen. Like a lot of users, I thought UVD would be too encumbered ever to be released. This announcement was absolutely astonishing.
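    For anyone wanting to reproduce arekm's Flash result: on Linux, the Flash plugin reads its settings from /etc/adobe/mms.cfg. A minimal sketch, assuming that file location and using exactly the two options quoted above (writing to /etc requires root, and whether they help depends on your Flash and driver versions):

    ```shell
    # Sketch only: /etc/adobe/mms.cfg is the system-wide Flash config on Linux.
    sudo tee /etc/adobe/mms.cfg >/dev/null <<'EOF'
    OverrideGPUValidation = 1
    EnableLinuxHWVideoDecode=1
    EOF

    # Restart the browser, then confirm VDPAU decode support is visible:
    vdpauinfo | grep -A3 'Decoder capabilities'
    ```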

  8. #198
    Join Date
    Jun 2011
    Posts
    38

    Default

    Can anyone please, please point out how to test this? Any pointers? Too eager! I've been struggling with XvBA on Catalyst for far too long, and unfortunately I have no idea how to build the packages from the latest code. Do I need linux-next + mesa-devel + radeon-git? How do I test it?

  9. #199
    Join Date
    Nov 2011
    Posts
    267

    Default

    Quote Originally Posted by xgt001 View Post
    Can anyone please, please point out how to test this? Any pointers? Too eager! I've been struggling with XvBA on Catalyst for far too long, and unfortunately I have no idea how to build the packages from the latest code. Do I need linux-next + mesa-devel + radeon-git? How do I test it?
    I'm wondering myself... but here's what I can figure out.

    Linux-next won't cut it, AFAICT.
    In fact, you may need to go to
    http://cgit.freedesktop.org/~deathsimple/linux/
    git clone that repo, check out the uvd-v3.8.4 branch, then run make localmodconfig.
    You'll probably need the radeon microcode from http://people.freedesktop.org/~agd5f/radeon_ucode/
    And Mesa from
    http://cgit.freedesktop.org/~deathsimple/mesa/
    I'm not sure what to do about the Xorg driver, but it looks like any recent one will do (probably thanks to the video-over-shaders work).
    And you'll probably need a new libdrm, just so Mesa builds...
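    Pulling those steps together, a hedged sketch of the whole sequence; the clone URLs are guesses derived from the cgit pages, and the branch and firmware locations come straight from the post, so they may have moved since:

    ```shell
    # 1. UVD-enabled kernel (branch name from the post)
    git clone git://people.freedesktop.org/~deathsimple/linux
    cd linux
    git checkout uvd-v3.8.4
    make localmodconfig && make -j"$(nproc)"

    # 2. Radeon UVD microcode: download the *_uvd.bin files from
    #    http://people.freedesktop.org/~agd5f/radeon_ucode/
    #    and place them in /lib/firmware/radeon/

    # 3. Mesa with VDPAU support (build a recent libdrm first if Mesa
    #    refuses to configure against your system copy)
    cd ..
    git clone git://people.freedesktop.org/~deathsimple/mesa
    cd mesa
    ./autogen.sh --enable-vdpau
    make -j"$(nproc)"
    ```

    After installing and rebooting into the new kernel, `dmesg | grep -i uvd` should show the microcode loading if everything lined up.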

  10. #200
    Join Date
    Jun 2009
    Posts
    1,106

    Default

    Quote Originally Posted by artivision View Post
    1) What I don't want: an OpenGL-based D3D compiler, or translating D3D to OGL. When you try to map HLSL to GLSL, a word-for-word translation is often impossible. That's why we call it emulation: something that needs 100 FLOPs to give a result in HLSL needs 120+ FLOPs once translated or compiled to GLSL, so it's inefficient.

    2) What I want: vendors to provide an HLSL front-end for their IL. Then, if I write an HLSL compiler, it won't be OGL-based (inefficient, plus ten times the work). From the IL's side what's missing is a front-end, but for a compiler the same thing is actually a back-end. A simpler job is to write some Wine code to make MS's D3D work with Wine (WineD3D works on Windows), then point the back-end extracted from the Windows driver (using Winetricks) at creating IL code, all inside a Wine session. It's far less work than anything else.
    1.) Yeah, you always pay a penalty doing that.

    2.) WineD3D works on Windows because Windows supports DirectX. I think you're looking at the problem from a high-level perspective, and that's not where the problem lives. As I linked before, there are already many HLSL compilers and converters, and Wine can already handle most of DirectX's fixed functions. The problem is that you need the whole DirectX driver ported to Linux, not just the HLSL compiler or the IL: on their own, the IL and the compiler are useless on Linux and can't be interpreted without the full driver. And that driver doesn't live in the DirectX code at all; it sits entirely outside it, in the WDDM layer:

    The WDDM layer is equivalent to:
    Linux KMS / kernel DRM / Gallium winsys / Gallium state tracker / xf86-video-XXX / libva-intel / etc.

    The DirectX layer is equivalent to:
    Mesa's libGL / Xorg server / libVA / libVDPAU / libXv / SDL / Pixman / Cairo

    DirectX is the high-level layer: it defines the protocol, not the access to the GPU in any way. WDDM is the low-level layer that contains the driver and produces the actual IL/IR/ASM. So the problem is not "just dump some IL and voilà" (if it were that easy, someone would have done it ages ago); the problem is porting the WDDM layer to Linux and implementing a libD3D (the libGL equivalent for DirectX). Otherwise nothing can interpret that IL.

    Understand that the fact that OpenGL and DirectX can both support tessellation (for example) doesn't in any way mean they do it the same way at the internal ASM level, which is what you're wrongly assuming. You assume that because both PROTOCOLS can render almost the same things, they work exactly the same internally, and that isn't true. DirectX and OpenGL handle types, allocations, swizzling, texture formats, framebuffer addressing, command-stream checking and packing, object linking, swapping, vblank, offscreen rendering, etc. differently enough that it's impossible to just dump ASM from one to the other and expect it to render.

    GPUs don't have anything as convenient as "send 0x11111 to address 0x3332ffa to activate the tessellation render pass" or "pass 0xffff2 plus the TGA texture to address 0xfff12345 to get an FBO surface." It's more like PIC assembly, with very low-level opcodes where it takes hundreds or thousands of lines of code before anything works. OpenGL and DirectX are PROTOCOLS that define FUNCTIONALITY EXPECTATIONS as a high-level API, not a low-level hardware standard. For example, both support RGBA textures, but OpenGL might swizzle from the bottom-right and allocate the first N bytes as identifiers plus the texel data contiguously at a high address, while DirectX might swizzle from the top-left and allocate the identifier and the texel data non-contiguously in the framebuffer at a low address. Either way you see the pretty pixmap you uploaded, but the HLSL compiler will look for that texture the DirectX way, and the OpenGL driver will never find it, because OpenGL never allocates a texture that way. <--- See the problem now?

    So to use HLSL shaders directly on the GPU, you need the rest of the API directly on the GPU too (meaning full driver support) that does things internally the way DirectX does; otherwise it will never run.
