  • LZHAM 1.0 Isn't Too Far Away For Compression Of Interest To Game Developers

    Phoronix: LZHAM 1.0 Isn't Too Far Away For Compression Of Interest To Game Developers

    Rich Geldreich, the former Valve developer associated with some of the game company's past Linux and OpenGL projects, is getting close to releasing LZHAM v1.0 as his lossless data compression library...


  • #2
    I'm not sure if it's a typo or if some thoughts/sentences got mangled together, but: "can ddecompression two to three times faster"



  • #3
    How does it compare to Snappy?



  • #4
    I'll be releasing LZHAM v1.0 on GitHub early next week. LZHAM is a high-ratio data "distribution" codec with very fast decompression compared to other codecs in its ratio class (such as LZMA and Microsoft's LZX formats). LZHAM's compressor is approximately as fast as LZMA's (in other words, kinda slow to very slow depending on the options you use). The compression step is typically executed either in the cloud or on a beefy Linux build machine somewhere. The compressor is heavily multithreaded, but it's still an expensive step (1-7 MB/sec on a Core i7).

    The decompressor is lightweight enough to run on mobile devices. I'll have more data about LZHAM's decompression throughput relative to LZMA with Unity asset data next week; I expect it to be 2-3x faster.

    LZHAM is not intended at all to compete in the same space as Snappy, LZ4, zlib, etc.; its compression ratio is significantly higher than any of those codecs'. LZHAM also implements the most important subset of the zlib compression APIs, so if you're already familiar with zlib, LZHAM is simple to use.
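    Roughly, if you already know zlib, usage looks like this (just an illustration; the lzham_z_ shim names and the header path here are assumptions, check lzham.h for the exact spellings):

    Code:
    #include <cstdio>
    #include <vector>
    #include "lzham.h"  // assumed header name

    int main() {
        const unsigned char src[] = "The quick brown fox jumps over the lazy dog.";
        lzham_z_ulong src_len = sizeof(src);

        // Worst-case output size, mirroring zlib's compressBound().
        lzham_z_ulong dst_len = lzham_z_compressBound(src_len);
        std::vector<unsigned char> dst(dst_len);

        // Same calling convention as zlib's compress().
        if (lzham_z_compress(dst.data(), &dst_len, src, src_len) != LZHAM_Z_OK)
            return 1;

        // And the same as zlib's uncompress().
        std::vector<unsigned char> out(src_len);
        lzham_z_ulong out_len = src_len;
        if (lzham_z_uncompress(out.data(), &out_len, dst.data(), dst_len) != LZHAM_Z_OK)
            return 1;

        std::printf("%lu -> %lu -> %lu bytes\n",
                    (unsigned long)src_len, (unsigned long)dst_len,
                    (unsigned long)out_len);
        return 0;
    }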



  • #5
    What reason is there for me to use this over LZ4?



  • #6
    Originally posted by peppercats:
    "What reason is there for me to use this over LZ4?"
    Higher compression ratio in exchange for somewhat lower decompression speed: roughly 2x the compression ratio at about 0.7x the decompression speed. See http://mattmahoney.net/dc/text.html
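    Whether that trade wins depends on the link speed, since total delivery time is transfer plus decode. A back-of-envelope sketch plugging in the ~2x / ~0.7x figures above (the LZ4 baseline numbers here are made-up placeholders):

    Code:
    #include <cstdio>

    // Total time to get usable data: transfer the compressed
    // payload, then decompress it on the client.
    static double total_seconds(double raw_mb, double ratio,
                                double link_mb_s, double decode_mb_s) {
        return (raw_mb / ratio) / link_mb_s + raw_mb / decode_mb_s;
    }

    int main() {
        const double raw_mb = 100.0;  // uncompressed payload size
        const double link = 1.0;      // slow link: 1 MB/s
        // Placeholder LZ4 baseline: ratio 2.1, 1000 MB/s decode.
        double lz4 = total_seconds(raw_mb, 2.1, link, 1000.0);
        // Per the figures above: ~2x the ratio, ~0.7x the decode speed.
        double lzham = total_seconds(raw_mb, 2.1 * 2.0, link, 1000.0 * 0.7);
        std::printf("LZ4:   %5.1f s\nLZHAM: %5.1f s\n", lz4, lzham);
        // On a slow link the transfer term dominates, so doubling the
        // ratio roughly halves total time despite the slower decode.
        return 0;
    }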



  • #7
    You use something like LZMA or LZHAM when every byte counts. In a product I'm working on, many of our customers only get ~500 kbps download rates, and we've been losing around 30% of our potential customers due to long downloads. LZ4 is useless for this; we need something with the highest ratio possible that is still practical to run on low-end (mobile) devices.



  • #8
    I wrote here some time ago that the LZMA-like compression ratio and the fast decompression make LZHAM a good candidate to replace LZMA/XZ in distro packages. squashfs would also benefit on systems with slow CPUs.



  • #9
    The stats on both the LZHAM wiki and Mahoney's site use extreme settings, requiring gigabytes of memory. That's clearly unviable on mobile. Are there any stats with realistic decompression RAM settings (i.e., less than 1 MB of RAM required for decompression)? Some quick window-size arithmetic below.

    If it does well with such settings, perhaps it could be useful in the kernel/initrd.
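    The decompressor's working set is dominated by the sliding-dictionary window, which is set at init time (LZHAM appears to expose this as a dict_size_log2-style init parameter; treat that name as my assumption). The arithmetic:

    Code:
    #include <cstdio>

    int main() {
        // Decompressor RAM is roughly the dictionary window
        // (2^dict_size_log2 bytes) plus fixed model/table state.
        const unsigned log2_sizes[] = { 15, 16, 20, 24, 26 };
        for (unsigned l : log2_sizes) {
            unsigned long bytes = 1ul << l;
            std::printf("dict_size_log2 = %2u -> %8.2f MB window\n",
                        l, bytes / (1024.0 * 1024.0));
        }
        // A 2^15 (32 KB) window fits a <1 MB budget, but expect
        // the compression ratio to drop as the window shrinks.
        return 0;
    }

    So the real question is how much ratio you give up at small window sizes, not whether the memory can be bounded.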



  • #10
    How feasible is it to compress parts of your internal memory, like data that hasn't been accessed for an hour, or data that is about to get swapped?
