LZ4m: Taking LZ4 Compression To The Next Level


  • LZ4m: Taking LZ4 Compression To The Next Level

    Phoronix: LZ4m: Taking LZ4 Compression To The Next Level

    While LZ4 can be very fast with its compression and decompression speeds, there's a new kid on the block that appears to be even faster: LZ4m...


  • #2
    Well, what are you waiting for? You have the whitepaper, start hacking

    • #3
      So many innovations in lossless compression in recent years; it brings a glad tear to my eye.

      • #4
        Originally posted by microcode
        So many innovations in lossless compression in recent years; it brings a glad tear to my eye.
        Really? Where?
        This one right here doesn't seem to be anything more than a simplification of LZ4 in order to gain some speed. I'm not following the domain too closely, but the last innovation I'm aware of is LZMA. And even that was based on research from 20 years earlier.

        • #5
          Originally posted by bug77
          Really? Where?
          I suspect he was being sarcastic.

          • #6
            Originally posted by bug77
            Really? Where?
            This one right here doesn't seem to be anything more than a simplification of LZ4 in order to gain some speed. I'm not following the domain too closely, but the last innovation I'm aware of is LZMA. And even that was based on research from 20 years earlier.
            Not speaking specifically of this whitepaper; I haven't had a chance to look at it. I'm referring to interesting developments like zstd, lzham, etc., which address different applications much better than things that existed a decade ago. I mean, somebody ported lz4 to the 8086 and it performs way better than anything of that era (including other fast decompressors, not just "LOL LZ4 IZ FASTER THAN PK-Zip"). It's not some rehashing of old work; recent lossless compression work has done a lot of useful things (there's a minimal zstd sketch at the end of this post).

            LZMA (and especially LZMA2) can get you high ratios, but it is depressingly slow both to compress and decompress. lzham gets similar ratios with considerably faster decompression.

            The one thing that should be shocking is the lack of movement in lossy still-image compression. JPEG encoders are still getting better to this day, and they make JPEG better than most of the "JPEG killer" technologies that have been pushed over the years. There are a couple of still-image compressors that do get reliably better efficiency, but they are usually hideously slow or on tenuous patent footing.
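
            For reference, here's a minimal sketch of zstd's one-shot API, just to give a feel for the newer codecs mentioned above. It assumes libzstd and its header are installed (build with -lzstd); the sample string and the level-3 setting are placeholder choices, not anything from the paper or this thread.

            /* Minimal zstd round trip using the one-shot API.
             * Build with: cc zstd_demo.c -lzstd */
            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>
            #include <zstd.h>

            int main(void)
            {
                const char src[] = "zstd aims for good ratios at high speed.";
                size_t src_size = sizeof(src);

                size_t bound = ZSTD_compressBound(src_size);   /* worst-case output size */
                void *compressed = malloc(bound);
                void *restored   = malloc(src_size);
                if (!compressed || !restored)
                    return 1;

                /* Level 3 is zstd's default; higher levels trade speed for ratio. */
                size_t csize = ZSTD_compress(compressed, bound, src, src_size, 3);
                if (ZSTD_isError(csize)) {
                    fprintf(stderr, "compress: %s\n", ZSTD_getErrorName(csize));
                    return 1;
                }

                size_t dsize = ZSTD_decompress(restored, src_size, compressed, csize);
                if (ZSTD_isError(dsize) || dsize != src_size ||
                    memcmp(src, restored, src_size) != 0) {
                    fprintf(stderr, "round trip failed\n");
                    return 1;
                }

                printf("original %zu bytes -> compressed %zu bytes\n", src_size, csize);
                free(compressed);
                free(restored);
                return 0;
            }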

            • #7
              LZ4m is optimized for small block sizes. The maximum offset is 270 (in LZ4 it is 65535).
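
              If that figure is right, the trade-off is easy to see with some back-of-the-envelope arithmetic: standard LZ4 always spends two bytes on each match offset, so an offset range small enough to fit in a single byte saves a byte on every match, which matters on 4 KiB pages. The sketch below only illustrates that arithmetic with an assumed match count; it is not the actual LZ4m bitstream.

              /* Back-of-the-envelope sketch, NOT the real LZ4m format: how much a
               * narrower offset field saves when inputs are small (e.g. 4 KiB pages).
               * Standard LZ4 stores a fixed 2-byte little-endian offset per match. */
              #include <stdio.h>

              #define LZ4_OFFSET_BYTES   2   /* standard LZ4 block format */
              #define SMALL_OFFSET_BYTES 1   /* hypothetical capped-offset field */

              int main(void)
              {
                  unsigned matches = 200;    /* assumed match count for one 4 KiB page */

                  printf("2-byte offsets: %u bytes spent on offsets\n",
                         matches * LZ4_OFFSET_BYTES);
                  printf("1-byte offsets: %u bytes spent on offsets\n",
                         matches * SMALL_OFFSET_BYTES);
                  return 0;
              }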

              • #8
                Originally posted by bug77
                Really? Where?
                This one right here doesn't seem to be anything more than a simplification of LZ4 in order to gain some speed. I'm not following the domain too closely, but the last innovation I'm aware of is LZMA. And even that was based on research from 20 years earlier.
                LZMA and LZ4 are solutions to totally different problems: LZMA has a high compression ratio, while LZ4 has excellent decompression speed. LZ4 is great for transparent disk compression, where it can even increase the perceived disk bandwidth with minimal extra CPU load; LZMA is useful for distributing packages over the internet.
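
                To show how simple the LZ4 side is to use, here's a minimal round trip with liblz4's one-shot block API. It assumes liblz4 and its header are installed (build with -llz4); the sample string is just a placeholder.

                /* Minimal LZ4 block round trip.
                 * Build with: cc lz4_demo.c -llz4 */
                #include <stdio.h>
                #include <stdlib.h>
                #include <string.h>
                #include <lz4.h>

                int main(void)
                {
                    const char src[] = "LZ4 trades compression ratio for speed, speed, speed.";
                    const int  src_size = (int)sizeof(src);

                    int bound = LZ4_compressBound(src_size);   /* worst-case output size */
                    char *compressed = malloc((size_t)bound);
                    char *restored   = malloc((size_t)src_size);
                    if (!compressed || !restored)
                        return 1;

                    int csize = LZ4_compress_default(src, compressed, src_size, bound);
                    if (csize <= 0) {
                        fprintf(stderr, "compression failed\n");
                        return 1;
                    }

                    int dsize = LZ4_decompress_safe(compressed, restored, csize, src_size);
                    if (dsize != src_size || memcmp(src, restored, (size_t)src_size) != 0) {
                        fprintf(stderr, "round trip failed\n");
                        return 1;
                    }

                    printf("original %d bytes -> compressed %d bytes\n", src_size, csize);
                    free(compressed);
                    free(restored);
                    return 0;
                }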

                • #9
                  Originally posted by microcode
                  The one thing that should be shocking is the lack of movement in lossy still-image compression. JPEG encoders are still getting better to this day, and they make JPEG better than most of the "JPEG killer" technologies that have been pushed over the years. There are a couple of still-image compressors that do get reliably better efficiency, but they are usually hideously slow or on tenuous patent footing.
                  The one thing I know about JPEG is that it's seen as "alien tech": it's so efficient at what it does that nothing has ever really come close to it across so many use cases. The specification was good from the start, but early encoders were quite bad, and most of the improvements came not from improving the spec but from improving the encoder. AFAIK they're now trying psychovisual optimizations, which are perfectly allowed by the format but which no encoder used until now because, well, JPEG was already so good that no one wanted to look into it any more. Until someone comes along and says, "I've got a compression format that's better than JPEG because it uses X feature", and then someone else comes along and says, "nice, but hey, I can actually use that in JPEG too - so here's my libetterjpeg and it does better JPEG than JPEG, while it's still JPEG". That's not even counting the various improvements that require embedding "extra" data in the JPEG file, which lets the file still display (in a degraded state) on any device - those usually don't stick around.

                  The same thing happened with MP3 (MPEG-1 Layer 3): the format is all fine and good (for what it's supposed to do, and originally that wasn't music), but there is a world of difference between a 128 kbps (CBR) MP3 file encoded with Fraunhofer's L3ENC from 1993 and a 128 kbps ABR MP3 file encoded with the latest version of LAME. Same with H.263 between early DivX;-) and the last XviD version (and, to a lesser extent, H.264), and even Theora in its time.

                  • #10
                    Originally posted by mitch074
                    The one thing I know about JPEG is that it's seen as "alien tech": it's so efficient at what it does that nothing has ever really come close to it across so many use cases. The specification was good from the start, but early encoders were quite bad, and most of the improvements came not from improving the spec but from improving the encoder. AFAIK they're now trying psychovisual optimizations, which are perfectly allowed by the format but which no encoder used until now because, well, JPEG was already so good that no one wanted to look into it any more. Until someone comes along and says, "I've got a compression format that's better than JPEG because it uses X feature", and then someone else comes along and says, "nice, but hey, I can actually use that in JPEG too - so here's my libetterjpeg and it does better JPEG than JPEG, while it's still JPEG". That's not even counting the various improvements that require embedding "extra" data in the JPEG file, which lets the file still display (in a degraded state) on any device - those usually don't stick around.

                    The same thing happened with MP3 (MPEG-1 Layer 3): the format is all fine and good (for what it's supposed to do, and originally that wasn't music), but there is a world of difference between a 128 kbps (CBR) MP3 file encoded with Fraunhofer's L3ENC from 1993 and a 128 kbps ABR MP3 file encoded with the latest version of LAME. Same with H.263 between early DivX;-) and the last XviD version (and, to a lesser extent, H.264), and even Theora in its time.
                    Yeah, I'm very excited to see what is done on the encoder side of Opus. Some day we may have transparent 32kbps stereo music.
