Higher Quality AV1 Video Encoding Now Available For Radeon Graphics On Linux


  • Phoronix: Higher Quality AV1 Video Encoding Now Available For Radeon Graphics On Linux

    For those making use of GPU-accelerated AV1 video encoding with the latest AMD Radeon graphics hardware on Linux, the upcoming Mesa 23.3 release will support the high-quality AV1 preset for offering higher quality encodes...
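
    For anyone wanting to try the new preset once Mesa 23.3 lands: with VA-API, preset selection is typically driven by the encoder quality level, which ffmpeg exposes as -compression_level on its VAAPI encoders. A minimal sketch, assuming an ffmpeg build with av1_vaapi, a Radeon render node at /dev/dri/renderD128, and a hypothetical input.mp4; whether a given level maps to the new high-quality preset depends on Mesa's mapping, so treat the value as a starting point:

```python
import subprocess

# Hedged sketch: hardware AV1 encode through Mesa's VA-API driver.
# The device path, input file, bitrate, and quality level are assumptions.
cmd = [
    "ffmpeg",
    "-vaapi_device", "/dev/dri/renderD128",  # assumed Radeon render node
    "-i", "input.mp4",                       # hypothetical source clip
    "-vf", "format=nv12,hwupload",           # upload frames to GPU surfaces
    "-c:v", "av1_vaapi",                     # VA-API AV1 hardware encoder
    "-compression_level", "1",               # VA quality level; lower = slower,
                                             #   higher quality; Mesa maps these
                                             #   levels onto its encoder presets
    "-b:v", "15M",                           # target bitrate
    "output.mkv",
]
subprocess.run(cmd, check=True)
```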


  • #2
    Hardware-accelerated video encoding is only good for streaming and video conferencing; otherwise, software encoding should be preferred.



  • #3
    Originally posted by avis View Post
    Hardware-accelerated video encoding is only good for streaming and video conferencing; otherwise, software encoding should be preferred.
    I'm sorry, but BULLSHIT. The strain AV1 puts on CPUs is massive, and totally pointless when high-quality AV1 encodes from GPU hardware already look very good. I've recorded and streamed quite a few things with AV1, and the general quality is excellent, far above what H.264 could do at the same bitrate. You're thinking in terms of H.264 encoding, which any modern CPU handles well and which generally looks worse when done in GPU hardware.

    Even at 4K with a 15 Mbps bitrate, I get broadly excellent recording quality with AMD's encoder, which isn't even on the level of Nvidia's or Intel's. And since AOM AV1 can swallow all of my CPU's processing power, I'm very glad to be able to do it live on the GPU (record the monitor live and write to disk without any delay or overhead).

    Also, I'm hoping these changes on the Linux side carry over to Windows too. I've tried OBS streaming and recording with the Quality and High Quality presets: my 7900 XT, which really shouldn't struggle with that, basically fell apart. The framerate on my screen dropped to a cool 0.33 fps (yes, roughly one image every 3 seconds), and OBS streamed at about the same rate.
    I've been sticking with Balanced at 15 Mbps HQCBR for streaming on YouTube, and CQP 20 Balanced for recordings. I've re-encoded videos while watching them on YouTube: for 1080p/1440p material, the quality was indistinguishable from the original video.
    Hopefully the quality keeps improving nonetheless. I'm hoping for near-pristine recording quality even at 4K, something I've so far only approached on older videos with meh visual quality.
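
    As a rough illustration of the kind of live capture described above (not the poster's exact OBS setup), here is a hedged sketch that grabs an X11 screen and encodes it on the GPU at roughly 15 Mbps CBR, assuming ffmpeg with x11grab and av1_vaapi, a 2560x1440 display at :0.0, and the usual render node:

```python
import subprocess

# Hedged sketch: capture the desktop and encode live on the GPU so the CPU
# stays free. Geometry, device paths, and rates are all assumptions.
cmd = [
    "ffmpeg",
    "-vaapi_device", "/dev/dri/renderD128",  # assumed Radeon render node
    "-f", "x11grab",                         # X11 screen capture (X11 only)
    "-framerate", "60",
    "-video_size", "2560x1440",              # assumed monitor resolution
    "-i", ":0.0",                            # assumed display/screen
    "-vf", "format=nv12,hwupload",           # move frames onto GPU surfaces
    "-c:v", "av1_vaapi",                     # hardware AV1 encoder
    "-b:v", "15M", "-maxrate", "15M",        # roughly CBR at 15 Mbps
    "recording.mkv",
]
subprocess.run(cmd, check=True)
```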



  • #4
    It would never have crossed my mind that these presets needed to be implemented individually. I thought they were just parameters for a given algorithm (heuristics included).



  • #5
    Originally posted by Mahboi View Post

          "Trust me bro". Not a single screenshot, not a single proof, obviously talking about bitrates low enough that no archival quality (visually lossless) can even be theoretically achieved.

          We have very different definitions of the words, sir. You prefer to throw words like "BS" without putting your money where your mouth is. You prefer superlatives which don't exist in my world unless I can vouch for them.

          When I say "excellent" encoding quality I mean the tiniest of details and imperfections are preserved. I've seen what software SVT-AV1 produces at highest quality mode and it was a horrible blurfest. Now you're claiming that hardware video encoding is "the quality was indistinguishable from the original video"? I'm sorry, but that's a grand deception.

          x264 at preset veryslow/placebo requires 30-50Mbit bitrate for achieving visually lossless encoding for 1080p/24fps video. AV1 would need at the very least 20-30Mbit to do the same. 15Mbps hardware (i.e realtime) AV1 encoding achieving the same results? That's magic. I mean it's just a blatant lie. I'm not sure we'll ever have such a codec. For instance H.264 and H.265 lossless video codecs have approximately the same compression ratio despite all the advanced in the second.

    (Attached comparison screenshot: 222.webp)

    Here's another, more recent comparison: 123.webp
    Last edited by avis; 12 October 2023, 11:51 AM.
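
    This disagreement is measurable rather than rhetorical. A hedged sketch of how one might test it, assuming an ffmpeg build with libx264, av1_vaapi, and libvmaf, plus a hypothetical reference clip source.mp4; the VMAF score is parsed from ffmpeg's log output:

```python
import re
import subprocess

REF = "source.mp4"  # hypothetical reference clip

def encode(pre_input, post_input, out):
    """Encode REF with the given pre-/post-input ffmpeg arguments."""
    subprocess.run(["ffmpeg", "-y", *pre_input, "-i", REF, *post_input, out],
                   check=True)

def vmaf(distorted):
    """Score a distorted encode against REF with the libvmaf filter."""
    proc = subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", REF,
         "-lavfi", "libvmaf", "-f", "null", "-"],
        capture_output=True, text=True)
    m = re.search(r"VMAF score: ([0-9.]+)", proc.stderr)
    return float(m.group(1)) if m else None

# Software x264 at a slow preset vs. hardware AV1, both at 15 Mbps.
encode([], ["-c:v", "libx264", "-preset", "veryslow", "-b:v", "15M"],
       "x264.mkv")
encode(["-vaapi_device", "/dev/dri/renderD128"],   # assumed render node
       ["-vf", "format=nv12,hwupload",
        "-c:v", "av1_vaapi", "-b:v", "15M"],
       "av1_hw.mkv")

print("x264 VMAF:  ", vmaf("x264.mkv"))
print("AV1 HW VMAF:", vmaf("av1_hw.mkv"))
```

    Metrics like VMAF won't settle "visually lossless" on their own, but they at least move the argument from adjectives to numbers.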



  • #6
    Honestly, I'm more interested in having good, efficient hardware decoding for AV1 and HEVC, and in being able to properly watch HDR movies on non-HDR displays.
    Right now all HDR movies look like crap, since tonemapping either doesn't work or works very badly, at least in VLC.
    It makes me miss MPC-HC + madVR, which are only available on Windows.
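
    As a stopgap, ffmpeg can tonemap HDR to SDR during a re-encode (mpv also does a decent job at playback time). A hedged sketch of the commonly cited zscale+tonemap filter chain, assuming an ffmpeg build with the zimg library and a hypothetical hdr_movie.mkv; the hable operator and npl value are common starting points, not universal answers:

```python
import subprocess

# Hedged sketch: convert an HDR (PQ/HLG) source to SDR BT.709 with ffmpeg's
# tonemap filter. File names and tuning values are assumptions.
chain = (
    "zscale=t=linear:npl=100,"        # linearize, assuming a ~100-nit target
    "format=gbrpf32le,"               # float RGB for the tonemap operator
    "zscale=p=bt709,"                 # convert primaries to BT.709
    "tonemap=tonemap=hable:desat=0,"  # compress highlights into SDR range
    "zscale=t=bt709:m=bt709:r=tv,"    # back to BT.709 transfer/matrix
    "format=yuv420p"                  # SDR-friendly pixel format
)
subprocess.run([
    "ffmpeg", "-i", "hdr_movie.mkv",  # hypothetical HDR source
    "-vf", chain,
    "-c:v", "libx264", "-crf", "18",  # assumed SDR target encode
    "-c:a", "copy",
    "sdr_movie.mkv",
], check=True)
```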



  • #7
    Originally posted by avis View Post

              "Trust me bro". Not a single screenshot, not a single proof, obviously talking about bitrates low enough that no archival quality (visually lossless) can even be theoretically achieved.

              We have very different definitions of the words, sir. You prefer to throw words like "BS" without putting your money where your mouth is. You prefer superlatives which don't exist in my world unless I can vouch for them.

              When I say "excellent" encoding quality I mean the tiniest of details and imperfections are preserved. I've seen what software SVT-AV1 produces at highest quality mode and it was a horrible blurfest. Now you're claiming that hardware video encoding is "the quality was indistinguishable from the original video"? I'm sorry, but that's a grand deception.

              x264 at preset veryslow/placebo requires 30-50Mbit bitrate for achieving visually lossless encoding for 1080p/24fps video. AV1 would need at the very least 20-30Mbit to do the same. 15Mbps hardware (i.e realtime) AV1 encoding achieving the same results? That's magic. I mean it's just a blatant lie. I'm not sure we'll ever have such a codec. For instance H.264 and H.265 lossless video encoding have approximately the same compression ratio despite all the advanced in the second.
    You can disagree with the language of the response, but you're attacking it for being "trust me bro" when your initial post was just as unsupported. You said hardware encoding isn't good enough, but I've seen several videos and metrics showing hardware AV1 on par with good-quality x264.

    Your follow-up post provided much more detail, and it's now very clear what your expectations for a codec are. Those expectations are very high and, in my experience, not everyone shares them. So your initial statement is very much your opinion, since hardware encoding quality (especially the new AV1 stuff) is good enough for a lot of people.

    And in my experience with x264, x265, and AV1, depending on your viewing device and distance, a lot of the detail people chase isn't perceivable outside of close pixel peeping. I say this having done my own subjective tests on a 75-inch 4K TV from a regular viewing distance.



  • #8
    Originally posted by lakerssuperman View Post

    I'm sorry, but I'm not a pixel peeper; however, I instantly spot missing details in human faces and bodies, and those details are often crucial because they convey emotion. If you've ever encoded videos with close-ups of people, you know what I'm talking about.

    Again, the person I replied to screamed "GOOD ENOUGH FOR ME," but good enough for him means absolutely nothing. If he came from the modem era of 320x240 videos compressed with MPEG-1 or the like, at bitrates that would sound laughable nowadays, then yes, everything will look good.

    My first point-and-shoot camera recorded in MJPEG, and not a single codec I've tried so far, not even the super-advanced VVC, can re-encode those videos without losing a ton of detail. Yes, they're only 640x480 at 30fps or 1024x768 at 15fps, but they're packed with details and imperfections. And it's not noise, so please don't offer me various denoisers - I've tried a dozen.
    Last edited by avis; 12 October 2023, 12:03 PM.
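
    For what it's worth, when no lossy codec preserves a source to your satisfaction, a mathematically lossless re-encode sidesteps the question entirely at the cost of file size. A hedged sketch using x264's lossless mode (FFV1 would also work), with a hypothetical mjpeg_clip.avi:

```python
import subprocess

# Hedged sketch: a mathematically lossless re-encode, so no further detail
# can be lost by definition. File names are assumptions; expect big files.
subprocess.run([
    "ffmpeg", "-i", "mjpeg_clip.avi",  # hypothetical MJPEG camera clip
    "-c:v", "libx264",
    "-qp", "0",                        # qp 0 puts x264 in lossless mode
    "-preset", "veryslow",             # slower preset shrinks the output
    "-c:a", "copy",
    "lossless.mkv",
], check=True)
```

    Lossless here means relative to the decoded frames, so nothing beyond the original MJPEG compression is thrown away, though the file won't get much smaller either.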



  • #9
    Horses for courses. If you just need watchable footage from your streaming service, then AV1 is good enough, albeit far too slow for what it offers.

    But if you spend $1k+ on your mirrorless camera and another $1k+ on the lens, you probably don't want your footage to look like crap - and in that scenario AV1 is the worst option among all the modern codecs, not to mention it would take ages to encode 4K. I wouldn't even try 8K on my 16-core PC.
    Last edited by sobrus; 12 October 2023, 12:05 PM.



  • #10
    Originally posted by avis View Post

    Sorry, but your first statement is entering fallacy territory. I do video encoding, and have for a while. I have a personal Jellyfin server I back my media up to, and I use software encoding for my stuff. I've gone from x264 to x265, and now I'm dipping my toes into AV1. So far I've found that, outside of a nasty bug in how SVT-AV1 handles high-contrast scenes, with the right speed and quality values I'm seeing very high-quality output from it (see the sketch after this post).

    I've watched lower-bitrate material and done lots of testing with different quality levels and file sizes, and all of these codecs do an excellent job of retaining detail in the places the eye notices. But I've also found that viewing distance is the great equalizer: on my 75-inch TV, quality variations are a bit more noticeable, while on my 55-inch 4K TV the differences are even harder to perceive.

    And the person saying "good enough for me" was responding to your initial blanket statement that hardware encoding just isn't good enough for viewing when, for many people, it's perfectly acceptable. The files might be much bigger, but hardware encoding can do a solid job for a lot of people. I might not use it, and you might find it inferior, and that's totally fine, but many do find it good enough.
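
    A hedged sketch of the kind of software SVT-AV1 encode described above, assuming an ffmpeg build with libsvtav1; the preset and CRF values are illustrative starting points, not the poster's actual settings:

```python
import subprocess

# Hedged sketch: a quality-targeted software AV1 encode with SVT-AV1.
# Preset trades speed for efficiency (lower = slower); CRF sets quality
# (lower = better). Values and file names are assumptions.
subprocess.run([
    "ffmpeg", "-i", "episode.mkv",  # hypothetical library source
    "-c:v", "libsvtav1",
    "-preset", "5",                 # middle-of-the-road speed/quality
    "-crf", "25",                   # roughly transparent for 1080p content
    "-g", "240",                    # keyframe interval, helps seeking
    "-c:a", "copy",
    "episode_av1.mkv",
], check=True)
```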

