
Thread: Bridgman Is No Longer "The AMD Open-Source Guy"

  1. #81
    Join Date
    Mar 2009
    Location
    in front of my box :p
    Posts
    841


    On UVD

    I personally suspect the driver code for UVD acceleration is already about 95% written, but that they're waiting on the legal department for clearance to release it.
    Doing it on shaders might be more flexible for future codecs, but it won't be as efficient as the dedicated ASIC. Whichever way it's done, I still hope we'll get something usable here quickly, since it would be a great benefit, especially for HTPCs with an E-350 or similar.

  2. #82
    Join Date
    Aug 2012
    Posts
    315


    Quote Originally Posted by Adarion
    On UVD

    I personally suspect the driver code for UVD acceleration is already about 95% written, but that they're waiting on the legal department for clearance to release it.
    Doing it on shaders might be more flexible for future codecs, but it won't be as efficient as the dedicated ASIC. Whichever way it's done, I still hope we'll get something usable here quickly, since it would be a great benefit, especially for HTPCs with an E-350 or similar.
    No, they failed with this kind of approach in the past (the HDMI sound case), and bridgman learned from that mistake.
    They do have example code, but they won't release it; they only use it internally to make sure they've identified the right registers of the UVD unit's microcontroller.
    In the end they will release only a spec with some register information, like in the HDMI audio case.
    That way they have the best chance of getting an OK from the lawyers.
    They tried it the other way in the past and failed.
    Don't be naive: the HDMI audio release was the test run for getting this kind of sensitive stuff out the door.
    Hardware information and specs face lower legal hurdles than software.
    They can release sensitive information five times faster if they focus on the critical hardware information alone instead of hardware plus a complete software implementation.

  3. #83
    Join Date
    Jul 2008
    Posts
    914


    I saw the XDC2012 presentation about Wayland, where 1080p video was rendered with 3% CPU usage. I don't know how much of that was down to the Intel driver and its VA-API support, or whether the Wayland protocol itself is also better.

    But that's hard to watch when I'm seeing 120% CPU load (across 2 cores) for 720p on a Zacate.

    I hope they can get that done... I would even TRY to do it myself if someone pointed me in the right direction (maybe via shaders, if the UVD stuff is patented to death), but I fear you have to program it for each codec again and again, so you'd have to redo the work every two years for each new format, or even for different resolutions and so on...

    Or in six months or so I'll buy a few Intel machines, if their GPUs turn out better than AMD's thanks to the software... It's hard to say, but from a Linux perspective AMD builds not only slower CPUs but also slower GPUs. The only advantage they deliver is the price.
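
    For what it's worth, you can ask the VA-API driver directly what it claims to accelerate before blaming the player. A minimal sketch (my own illustration, not code from any real player; assumes libva with its X11 backend, linked with -lva -lva-x11 -lX11):

        /* Sketch: list the codec profiles the VA-API driver advertises.
         * An empty list means playback falls back to software decode. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <X11/Xlib.h>
        #include <va/va.h>
        #include <va/va_x11.h>

        int main(void)
        {
            int major, minor, num = 0, i;
            VAProfile *profiles;

            Display *x11 = XOpenDisplay(NULL);
            if (!x11) { fprintf(stderr, "no X display\n"); return 1; }

            VADisplay va = vaGetDisplay(x11);
            if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS) {
                fprintf(stderr, "vaInitialize failed (no usable driver?)\n");
                return 1;
            }
            printf("VA-API %d.%d, driver: %s\n", major, minor,
                   vaQueryVendorString(va));

            profiles = malloc(vaMaxNumProfiles(va) * sizeof(*profiles));
            vaQueryConfigProfiles(va, profiles, &num);
            for (i = 0; i < num; i++)
                printf("supported profile id: %d\n", profiles[i]);

            free(profiles);
            vaTerminate(va);
            XCloseDisplay(x11);
            return 0;
        }

    If H.264 profiles show up there but the player still burns 120% CPU, the player/pipeline isn't using VA-API; if nothing shows up, no amount of player tweaking will help.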

  4. #84
    Join Date
    Sep 2012
    Posts
    3


    Quote Originally Posted by blackiwid
    I saw the XDC2012 presentation about Wayland, where 1080p video was rendered with 3% CPU usage. I don't know how much of that was down to the Intel driver and its VA-API support, or whether the Wayland protocol itself is also better.

    But that's hard to watch when I'm seeing 120% CPU load (across 2 cores) for 720p on a Zacate.

    I hope they can get that done... I would even TRY to do it myself if someone pointed me in the right direction (maybe via shaders, if the UVD stuff is patented to death), but I fear you have to program it for each codec again and again, so you'd have to redo the work every two years for each new format, or even for different resolutions and so on...

    Or in six months or so I'll buy a few Intel machines, if their GPUs turn out better than AMD's thanks to the software... It's hard to say, but from a Linux perspective AMD builds not only slower CPUs but also slower GPUs. The only advantage they deliver is the price.
    Which video, which player, which distro? On my x120e, playing the 'Skyfall' preview, CPU usage averaged 35-55% at 720p and 70-90% at 1080p. I tried raising the resolution up to 1600x1200 and CPU usage barely rose, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the test, I downloaded the previews and played them in VLC.

    I haven't watched that video yet, but the proposals I've seen for the OpenGL implementation seem promising. If more specifications were made available to the developers, along with a properly working BIOS and full control over power management, the game wouldn't be the same.

  5. #85
    Join Date
    Jul 2008
    Posts
    914


    Quote Originally Posted by e.a.i.m.a.
    Which video, which player, which distro? On my x120e, playing the 'Skyfall' preview, CPU usage averaged 35-55% at 720p and 70-90% at 1080p. I tried raising the resolution up to 1600x1200 and CPU usage barely rose, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the test, I downloaded the previews and played them in VLC.

    I haven't watched that video yet, but the proposals I've seen for the OpenGL implementation seem promising. If more specifications were made available to the developers, along with a properly working BIOS and full control over power management, the game wouldn't be the same.
    The normal player... so what could that be? GStreamer... But yes, I gather that GStreamer is not as optimised as mplayer, which is why I recently developed this as a Minitube alternative:

    https://github.com/spiderbit/youtube-mplayer-controller

    But still, 3% vs. 50-80% is not good.

  6. #86
    Join Date
    Apr 2010
    Posts
    1,946


    I think video acceleration is becoming less and less important anyway. The reason is simple: you can't accelerate every format in fixed-function hardware. But if something like HSA takes off, it would be able to share the processing between GPU and CPU, and that would be "universal acceleration".
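
    The nearest thing you can already poke at today is OpenCL, where the same kernel source can be built for a CPU device and a GPU device alike. A minimal sketch that just lists both device types (illustrative only; assumes an OpenCL runtime with CPU and GPU drivers installed, linked with -lOpenCL):

        /* Sketch: enumerate CPU and GPU OpenCL devices side by side. */
        #include <stdio.h>
        #include <CL/cl.h>

        static void list_devices(cl_platform_id platform, cl_device_type type,
                                 const char *label)
        {
            cl_device_id devices[8];
            cl_uint count = 0, i;
            if (clGetDeviceIDs(platform, type, 8, devices, &count) != CL_SUCCESS)
                return; /* no device of this type on this platform */
            for (i = 0; i < count; i++) {
                char name[256];
                clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof name, name, NULL);
                printf("%s: %s\n", label, name);
            }
        }

        int main(void)
        {
            cl_platform_id platforms[8];
            cl_uint nplat = 0, p;
            clGetPlatformIDs(8, platforms, &nplat);
            for (p = 0; p < nplat; p++) {
                /* Same kernel language for both device types; HSA's pitch is
                 * cutting the copy/dispatch overhead between them. */
                list_devices(platforms[p], CL_DEVICE_TYPE_CPU, "CPU device");
                list_devices(platforms[p], CL_DEVICE_TYPE_GPU, "GPU device");
            }
            return 0;
        }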

  7. #87
    Join Date
    Aug 2012
    Posts
    315


    Quote Originally Posted by crazycheese
    I think video acceleration is becoming less and less important anyway. The reason is simple: you can't accelerate every format in fixed-function hardware. But if something like HSA takes off, it would be able to share the processing between GPU and CPU, and that would be "universal acceleration".
    HSA is marketing speak for shader-based computation. Now you can imagine how well that will work.

  8. #88
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,574


    Quote Originally Posted by necro-lover
    HSA is marketing speak for shader-based computation without the usual drawbacks and overheads. Now you can imagine how well that will work.
    Fixed that for you...

  9. #89
    Join Date
    Sep 2011
    Location
    Rio de Janeiro
    Posts
    203


    Quote Originally Posted by bridgman
    Fixed that for you...
    bridgman, I wonder whether HSA is limited to shader-based computation or whether its scope is wider, such as running the computational workload on the most appropriate kind of logic available in the system, general-purpose or fixed-function, in order to get the best possible performance and power consumption. Obviously I would expect shader computing to be just the first step in that direction.

    It seems to me, as a layman, that running everything on a general-purpose processing unit (be it CPU, GPU, or both) cannot be the most efficient way of doing it. In future SoCs we would have several specialized blocks, each doing what it does best. If my understanding is correct, should we expect to run into the same open-source/Linux problems with such specialized blocks that we have today (UVD, PM and so on), or are AMD and the HSA Foundation planning something to prevent that?

  10. #90
    Join Date
    Feb 2011
    Posts
    12


    Here is an introduction by Mike Houston:

    http://v.csdn.hudong.com/s/article.html?arcid=2809880

    The discussion of face recognition is especially interesting... While the slides aren't shown, he describes a 10x benefit from moving that algorithm to GPU compute, and a further multiple on top from also using CPU compute within the same software algorithm.

    Consequently, I think you can safely assume that the ultimate objective is the ability to use both the CPU and the GPU, concurrently and sequentially, as needed.
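
    As a rough sketch of what that could look like with today's tools: the same OpenCL kernel run over one half of an array on the CPU device and the other half on the GPU device, concurrently. This is illustrative only; it assumes one platform exposing both device types (as AMD's APP SDK does) and omits all error handling:

        /* Sketch: double every element, CPU device taking the first half
         * and GPU device the second half, running at the same time. */
        #include <stdio.h>
        #include <CL/cl.h>

        static const char *src =
            "__kernel void scale(__global float *x) {"
            "    size_t i = get_global_id(0);"
            "    x[i] = x[i] * 2.0f;"
            "}";

        #define N 1024

        int main(void)
        {
            cl_platform_id plat;
            cl_device_id dev[2];
            cl_command_queue q[2];
            cl_mem buf[2];
            float data[N];
            size_t half = (N / 2) * sizeof(float), count = N / 2;
            int i, d;

            clGetPlatformIDs(1, &plat, NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev[0], NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev[1], NULL);

            /* One context spanning both devices; the same kernel source
             * gets compiled once for each. */
            cl_context ctx = clCreateContext(NULL, 2, dev, NULL, NULL, NULL);
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 2, dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "scale", NULL);

            for (i = 0; i < N; i++) data[i] = (float)i;

            for (d = 0; d < 2; d++) {
                /* Separate buffer per device: each works on its own half. */
                buf[d] = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                        half, data + d * (N / 2), NULL);
                q[d] = clCreateCommandQueue(ctx, dev[d], 0, NULL);
                clSetKernelArg(k, 0, sizeof(cl_mem), &buf[d]);
                clEnqueueNDRangeKernel(q[d], k, 1, NULL, &count, NULL, 0, NULL, NULL);
                clFlush(q[d]); /* submit, then go set up the other device */
            }
            for (d = 0; d < 2; d++) {
                clFinish(q[d]);
                clEnqueueReadBuffer(q[d], buf[d], CL_TRUE, 0, half,
                                    data + d * (N / 2), 0, NULL, NULL);
            }
            printf("data[1]=%.1f data[%d]=%.1f\n", data[1], N - 1, data[N - 1]);
            return 0;
        }

    The "sequentially as needed" part is just the degenerate case: enqueue on one queue, clFinish, then enqueue on the other.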
