FFmpeg 7.0 Released With Native VVC Decoding & Multi-Threaded CLI

    Phoronix: FFmpeg 7.0 Released With Native VVC Decoding & Multi-Threaded CLI

    The very exciting FFmpeg 7.0 multimedia library has been released! Most notably, FFmpeg 7.0 rolls out a new native VVC decoder (currently experimental) for Versatile Video Coding, and introduces a multi-threaded FFmpeg CLI tool...
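
    For anyone wanting to try it, the native decoder is gated behind the experimental flag. A minimal sketch, assuming a raw VVC elementary stream named input.266 (the file name and the H.264 re-encode target are just placeholders):

    # The native VVC decoder is experimental in 7.0, so it has to be
    # explicitly allowed; placing -strict experimental before -i applies
    # it to the input decoder.
    ffmpeg -strict experimental -i input.266 -c:v libx264 output.mp4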


  • #2
    Holy crap, that's impressive. I hope Kdenlive can now use all 16+16 cores on high-end Ryzens.



    • #3
      Originally posted by caligula View Post
      Holy crap, that's impressive. I hope Kdenlive can now use all 16+16 cores on high-end Ryzens.
      Depends on what you're doing. A typical video encoding job can't be parallelized much if the encoders themselves aren't. One example: encoding AV1 with the aom encoder is effectively single-threaded, while SVT-AV1 is designed to be multi-threaded. So you'll only see improvements if your tasks don't require serialization.
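
      To make that concrete, here's a rough comparison of the two encoders driven through ffmpeg (file names and quality settings are placeholders; the flags are the standard libaom-av1 and libsvtav1 options):

      # libaom: row-based multithreading has to be requested explicitly
      # and still scales poorly across many cores.
      ffmpeg -i input.mp4 -c:v libaom-av1 -row-mt 1 -threads 16 -crf 30 aom.mkv

      # SVT-AV1: designed around multithreading, saturates many cores
      # out of the box.
      ffmpeg -i input.mp4 -c:v libsvtav1 -preset 8 -crf 30 svt.mkv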



      • #4
        Originally posted by Artim View Post

        Depends on what you're doing. A typical video encoding job can't be parallelized much if the encoders themselves aren't. One example: encoding AV1 with the aom encoder is effectively single-threaded, while SVT-AV1 is designed to be multi-threaded. So you'll only see improvements if your tasks don't require serialization.
        H.264 in Kdenlive uses around 33-50% of the available cores. It doesn't use the 1000 USD GPU at all. The encoding speed is around 5x real-time. I have no effects in use; I just cut the original into slices, keeping the same fps and resolution as the original.

        I remember people saying that video editing requires a powerful workstation. It seems a Radeon 7900 XTX, a Ryzen 5950X, and 64 GB of RAM are too powerful. A 4-core CPU with no GPU at all and 8 GB of RAM would be just as fast. Maybe commercial video editors do something different.
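
        For reference, ffmpeg itself can push that H.264 encode onto a Radeon through VAAPI; whether Kdenlive exposes this is another question. A minimal sketch, assuming the GPU sits at the usual render node (check your own under /dev/dri):

        # Upload decoded frames to the GPU and encode with the VAAPI
        # H.264 encoder; /dev/dri/renderD128 is an assumption.
        ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
               -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 22 output.mp4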



        • #5
          Originally posted by caligula View Post
          Holy crap, that's impressive. I hope Kdenlive can now use all 16+16 cores on high-end Ryzens.
          The new multi-threading wasn't applied to the codecs; those handle threading themselves, entirely independently of FFmpeg. This was about the internals of the ffmpeg CLI tool.
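
          Nothing changes on the command line; the win shows up when a job has several pipeline stages that can overlap. A sketch with placeholder file names:

          # One input transcoded to two outputs. In 7.0 the demuxer,
          # decoders, filters, encoders and muxers each run on their own
          # thread, so the two encodes overlap instead of serializing.
          ffmpeg -i input.mkv -map 0:v -c:v libx264 out_h264.mp4 \
                 -map 0:v -c:v libx265 out_h265.mp4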



          • #6
            Let me guess, it won't be part of Ubuntu 24.04 and Fedora 40?



            • #7
              Originally posted by caligula View Post

              H.264 in Kdenlive uses around 33-50% of the available cores. It doesn't use the 1000 USD GPU at all. The encoding speed is around 5x real-time. I have no effects in use; I just cut the original into slices, keeping the same fps and resolution as the original.

              I remember people saying that video editing requires a powerful workstation. It seems a Radeon 7900 XTX, a Ryzen 5950X, and 64 GB of RAM are too powerful. A 4-core CPU with no GPU at all and 8 GB of RAM would be just as fast. Maybe commercial video editors do something different.
              That highlights how much a "powerful workstation" has changed in the past 10-15 years. The days when a high-end workstation meant 4 to 6 cores with high IPC are long gone. These days the "low end" of workstations starts with an AMD 8c/16t X3D chip, or an Intel part with 2 blazing-fast cores around 6 GHz, 6 regular cores, and 8 E-cores. From there it's just adding more and more cores that may or may not have 3D cache, run at blazing-fast speeds, or be E/C cores.

              That's not even considering how much more powerful something like a 5800X3D or 7800X3D is compared to an FX 8350, or the equivalent Intel generational comparison. Not only is their "high" core count our starting core count; each modern core also gets things done at least 3x faster.

              Basically, a 2014 high-end workstation is less powerful than a 2024 Walmart special. Parts are just that much better these days. Going from a high-end dual-CPU Intel Westmere X5680 machine (12c/24t) with 48 GB of DDR3-1333 to a mediocre Zen 2 4650G APU (6c/12t) with 32 GB of DDR4-3800 and the same RX 580 is what opened my eyes to how much better modern hardware performs.



              • #8
                Originally posted by caligula View Post

                H.264 in Kdenlive uses around 33-50% of the available cores. It doesn't use the 1000 USD GPU at all. The encoding speed is around 5x real-time. I have no effects in use; I just cut the original into slices, keeping the same fps and resolution as the original.
                That means you're not using any hardware encoders, which in itself isn't very efficient; and it doesn't look like there's anything here that could be parallelized further.

                I remember people saying that video editing requires a powerful workstation. It seems a Radeon 7900 XTX, a Ryzen 5950X, and 64 GB of RAM are too powerful. A 4-core CPU with no GPU at all and 8 GB of RAM would be just as fast. Maybe commercial video editors do something different.
                Depends on the material you use. The power is needed for scrubbing through the material in real time without proxy files that vastly lower the resolution. For that you don't really need a powerful CPU, but rather plenty of RAM and a GPU that can do as much in hardware as possible. If that's not available, the CPU needs to be powerful enough to both decode the material and apply any modifications in real time. If you're not editing 8K material, that's not all that demanding.
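
                As an illustration, proxy clips of the kind editors generate can be produced with ffmpeg along these lines (a sketch; the 540p target and the x264 settings are typical choices, not necessarily what Kdenlive uses):

                # Downscale to 540p with a fast preset so scrubbing stays
                # smooth; audio is copied untouched.
                ffmpeg -i clip.mp4 -vf scale=-2:540 \
                       -c:v libx264 -preset veryfast -crf 28 -c:a copy clip_proxy.mp4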



                • #9
                  Originally posted by Artim View Post
                  Depends on the material you use. The power is needed for scrubbing through the material in real time without proxy files that vastly lower the resolution. For that you don't really need a powerful CPU, but rather plenty of RAM and a GPU that can do as much in hardware as possible. If that's not available, the CPU needs to be powerful enough to both decode the material and apply any modifications in real time. If you're not editing 8K material, that's not all that demanding.
                  The thing is, companies like Intel were already advertising almost 10 years ago that their integrated Quick Sync block (a tiny part of the CPU) does 12x real-time encoding, e.g.: https://www.intel.de/content/dam/www...ideo-guide.pdf - and the same performance was available on 15 W TDP laptop chips.

                  Now, 8 years later, you buy the highest-end consumer CPU plus the best GPU and get 5x real-time. I see this as a problem.
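
                  For comparison, the fully hardware Quick Sync path in ffmpeg looks something like this (a sketch with placeholder file names); decode and encode both stay on the iGPU, which is where those double-digit real-time multiples come from:

                  # Hardware decode and hardware encode on the Intel iGPU
                  # via Quick Sync (QSV).
                  ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
                         -c:v h264_qsv -global_quality 23 output.mp4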



                  • #10
                    Originally posted by Malsabku View Post
                    Let me guess, it won't be part of Ubuntu 24.04 and Fedora 40?
                    Of course not! ;)
                    They will be busy with the main parts like Linux (the kernel), Mesa (the Vulkan & OpenGL graphics stack), the new things they want in order to differentiate from others, and obligations to contractors to make sure their use cases are well tested. Even the Debian repos stop being refreshed from about 2 months before release, and we're now a little less than 3 weeks from the release of Ubuntu 24.04 LTS, so shipping it would be madness unless a very good team stands behind that software, with the best connections to the core team, testing the latest kernel versions, etc. The dependencies are not to be underestimated, and everyone will shout when spotting one bug after another.
                    If you want bleeding-edge programs, you can try installing them in parallel to the well-tested ones so you have a fallback. For most users, the stability reached with half-year releases is much more important, even if it means the tested programs are half a year or even a full year old.
                    Otherwise you can switch to a rolling release... there are some interesting distros in that respect. I would wish for a rolling HWE, i.e. Linux starting with x.y.5 and following mainline, and Mesa starting with x.y.3 for the graphics stack. The difference is important, and the frankenkernels really do cause problems. Additionally, the distro developers would then help with more testing and fixing instead of maintaining too-old kernels. But what is important for the desktop is not what is most important for industry... so while desirable, it may not happen at all. :(
                    But I can understand the desire to test the latest software while using it... most people will.
