RAD Game's New Oodle Data Compression Beats Open-Source Alternatives


  • RAD Game's New Oodle Data Compression Beats Open-Source Alternatives

    Phoronix: RAD Game's New Oodle Data Compression Beats Open-Source Alternatives

    The latest work on RAD Game Tools' Oodle data compression, with new compression codecs, has it handily beating the open-source alternatives...


  • #2
    Wow, that's around half an order of magnitude in decode speed for the same compressed size. Impressive.

    Though honestly I don't give a damn unless I can use it, and I can't.



    • #3
      Calling Rich's review "independent" is quite a stretch.
      He has been advocating RAD products for some time now and has received access to advance versions and source;
      at a minimum, he qualifies as a good acquaintance. The link may be stronger; we don't know.
      Unsurprisingly, his latest statement reads like undisguised advertising.



      • #4
        Originally posted by microcode View Post
        Wow, that's around half an order of magnitude in decode speed for the same compressed size. Impressive.

        Though honestly I don't give a damn unless I can use it, and I can't.
        Agreed. I feel the same way about closed-source-only stuff (games and 5 grandfathered packages excepted), which means I ignore KZIP and PNGOUT in favour of 7zip, AdvanceCOMP, Zopfli, and the like.



        • #5
          I have a hard time getting that excited about faster decompression speeds, although I'm sure it's important in places like mobile/embedded, or perhaps in datacenters. Compression speed and ratio are just much more important for my purposes; decompression is never really a factor.



          • #6
            I have much respect for Rich and am a fan especially of his work on LZHAM.

            His tests are coherent as always, so it's no lie at all: this new compressor by RAD is amazing. But no one has access to the data he used for the test, so we obviously can't assess at what compression level the compression-ratio gains end for each compressor. Mind that he maxed out the settings on the open-source compressors (zlib level 9, zstd level 22, brotli level 10), which is almost certainly overkill no matter what data you have. Zstd is especially bad at level 22; it has sweet spots around 1~8 and 15~20 (the exact level obviously depends on your data, and which range you pick depends on your priorities), but level 22 gets really slow and even decompression speed takes a hit.

            I guess you wouldn't want to use Zstd level 22 unless you really, really need a big dictionary, in which case, in my tests, you're better off using a different compressor that prioritizes that anyway, like LZMA. Yes, it can get so slow that even LZMA becomes a better fit. In his test zstd was set up to fail, and still, it excelled. Amazing!
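
            The level sweet-spot argument is easy to poke at yourself. A minimal sketch using Python's stdlib codecs (zlib and lzma as stand-ins, since zstd has no stdlib binding; exact sizes and timings depend entirely on your data):

            ```python
            import time
            import zlib
            import lzma

            # Repetitive sample data; real corpora behave differently.
            data = b"the quick brown fox jumps over the lazy dog " * 5000

            for name, compress in [
                ("zlib level 1", lambda d: zlib.compress(d, level=1)),
                ("zlib level 9", lambda d: zlib.compress(d, level=9)),
                ("lzma preset 9", lambda d: lzma.compress(d, preset=9)),
            ]:
                t0 = time.perf_counter()
                out = compress(data)
                dt = (time.perf_counter() - t0) * 1000
                print(f"{name:14s} {len(out):7d} bytes  {dt:8.2f} ms")
            ```

            On most inputs the higher levels buy a somewhat smaller output at a steep time cost, which is exactly the trade-off being argued about here.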

            EDIT: BTW, Zstd's final format (hopefully, really final this time) was released, unannounced, this week. Now it's DONE.
            Last edited by jntesteves; 04 August 2016, 05:00 PM.



            • #7
              Originally posted by jntesteves View Post
              I have much respect for Rich and am a fan especially of his work on LZHAM, but I agree he's been pushing out advertising for RAD for a while.

              His tests are coherent as always, so it's no lie at all: this new compressor by RAD is amazing. But no one has access to the data he used for the test, so we obviously can't assess at what compression level the compression-ratio gains end for each compressor. Mind that he maxed out the settings on the open-source compressors (zlib level 9, zstd level 22, brotli level 10), which is almost certainly overkill no matter what data you have. Zstd is especially bad at level 22; it has sweet spots around 1~8 and 15~20 (the exact level obviously depends on your data, and which range you pick depends on your priorities), but level 22 gets really slow and even decompression speed takes a hit.

              I guess you wouldn't want to use Zstd level 22 unless you really, really need a big dictionary, in which case, in my tests, you're better off using a different compressor that prioritizes that anyway, like LZMA. Yes, it can get so slow that even LZMA becomes a better fit. In his test zstd was set up to fail, and still, it excelled. Amazing!

              EDIT: BTW, Zstd's final format (hopefully, really final this time) was released, unannounced, this week. Now it's DONE.
              It's also difficult to determine which algorithm is good, since there are so many parameters to consider. Decompression speed is naturally quite important, but if we think about Linux products like routers, the 8-16 MB of flash pretty much dictates that we probably want a really high compression ratio, especially if the compressed "ROM" is only allowed to use a portion of the available NOR flash. Sometimes the memory consumption is too big. If you think about development boards like the Raspberry Pi, SD card speed vs. CPU power is a really important factor. What about SSDs vs. spinning hard drives? What about SMR hard drives? zRAM? Buffers for programs like GIMP? So many use cases..

              Regarding these commercial RAD tools, I wonder how much of the speed is achieved by better tuning and optimization, and how much of the goodness is due to the algorithms. I suppose many compression technologies supported on Linux (lz4, lzo, xz, gzip, bzip2) could be tuned more for non-x86 architectures, or even on x86 if they don't use the latest AVX instructions and cache-oblivious algorithms, or if the multi-threaded code is badly optimized.



              • #8
                "Regarding these commercial RAD tools, I wonder how much of the speed is achieved by better tuning and optimizations and how much goodness is there due to algorithms."

                To put it simply, it's a combination of very intelligently designed algorithms and expert-level code optimization.

                Earlier this year I met with Fabian at RAD (the author of BitKnit, one of the codecs in Oodle) and spoke with him about modern LZ compressors. His knowledge was incredibly deep. He knew of parsing and coding techniques I could barely understand. The primary authors of Oodle, Charles Bloom and Fabian Giesen, are top notch experts in the field of data compression (not just lossless stuff - but video and image compression too).

                They've literally spent *years* researching all existing lossless codecs and implementing their own. They have seriously pushed the state of the art forward here and this is why my blog post sounds the way it does.



                • #9
                  Hmm... I'm not convinced as the result seems to be:
                  - If you want it fast: use zlib
                  - If you want it small: use brotli

                  If you want something in-between then of course there are lots of options. I'm just not sure that there are many use cases falling in that area.

                  As long as you can compress/decompress as fast as you can receive/send the data then you are in business. There is also a cost in choosing an obscure data format.
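
                  That "fast enough to keep up with the link" rule of thumb is easy to measure. A rough Python sketch with stdlib zlib as the example codec (throughput numbers are entirely machine-dependent):

                  ```python
                  import time
                  import zlib

                  # ~4 MB of moderately compressible payload.
                  data = b"example payload, moderately compressible " * 100_000
                  blob = zlib.compress(data, level=6)

                  t0 = time.perf_counter()
                  out = zlib.decompress(blob)
                  dt = time.perf_counter() - t0

                  assert out == data
                  print(f"ratio {len(data) / len(blob):.1f}:1, "
                        f"decode ~{len(data) / dt / 1e6:.0f} MB/s")
                  ```

                  If the decode rate comfortably exceeds the rate at which you receive the data, a faster codec buys you nothing for that pipeline, which is the point being made above.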



                  • #10
                    You are talking about many different compression use cases like routers, Raspberry Pi, mobile, etc. But:
                    Rich Geldreich is a former Valve developer... Valve is now mainly about the Steam digital distribution platform, but in the past it was a game developer.
                    RAD Game Tools - as the name says - is used first and foremost in game development.

                    Also consider that many, many PC games show a copyright notice on the start screen saying they use RAD/Bink technology... I think there is a reason for that situation - perhaps it is RAD's better decompression rate compared to other tools, or... the license/patent mess. When you buy RAD, you probably don't need to worry about lawyers...

