More Optimizations Coming To LZHAM

  • More Optimizations Coming To LZHAM

    Phoronix: More Optimizations Coming To LZHAM

    While LZHAM 1.0 was recently released, the former Valve engineer behind the compression library targeting game assets, Rich Geldreich, is already targeting more post-1.0 improvements to this library...

  • #2
    Well, it's always cool seeing software improve, but is a 1.5% - 2% improvement in a compression algorithm that basically nobody uses really news material?

    What I'd like to see is a decompression algorithm that can run on the GPU...

    • #3
      Originally posted by BradN
      Well, it's always cool seeing software improve, but is a 1.5% - 2% improvement in a compression algorithm that basically nobody uses really news material?

      What I'd like to see is a decompression algorithm that can run on the GPU...
      I agree; I don't see anyone moving away from ZIP/zlib anytime soon, and if they do, they will implement their own asset packages to make it more difficult for users or other third parties to extract.

      From my limited experience, loading the data into the GPU's memory is slow and would take longer than simply decompressing the data with the CPU.

      I'd appreciate it if someone more experienced would comment on this as well.
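
      For what it's worth, one way to put a number on the CPU half of that trade-off: the minimal sketch below (nothing from the article; the buffer size and contents are made up) times zlib's stock one-shot uncompress() on a synthetic 16 MB buffer. Comparing that time against how long the same data takes to upload to the GPU would at least make the comparison concrete.
      Code:
      // Rough sketch: time zlib's one-shot uncompress() on a synthetic 16 MB buffer,
      // the number you'd weigh against GPU upload time. Only the zlib calls are real;
      // the data and sizes are invented for illustration.
      // Build with something like: g++ -O2 bench.cpp -lz
      #include <chrono>
      #include <cstdio>
      #include <vector>
      #include <zlib.h>

      int main()
      {
          // Fake "asset" data; real assets compress better or worse than this.
          std::vector<unsigned char> original(16 * 1024 * 1024);
          for (size_t i = 0; i < original.size(); ++i)
              original[i] = static_cast<unsigned char>((i * 31) & 0xFF);

          // Compress once up front with the stock zlib one-shot API.
          uLongf compressed_size = compressBound(original.size());
          std::vector<unsigned char> compressed(compressed_size);
          if (compress(compressed.data(), &compressed_size,
                       original.data(), original.size()) != Z_OK)
              return 1;

          // Time the decompression path a CPU-side loader would take.
          std::vector<unsigned char> out(original.size());
          uLongf out_size = out.size();
          auto t0 = std::chrono::steady_clock::now();
          if (uncompress(out.data(), &out_size, compressed.data(), compressed_size) != Z_OK)
              return 1;
          auto t1 = std::chrono::steady_clock::now();

          std::printf("decompressed %lu -> %lu bytes in %.2f ms\n",
                      (unsigned long)compressed_size, (unsigned long)out_size,
                      std::chrono::duration<double, std::milli>(t1 - t0).count());
          return 0;
      }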

      • #4
        It's already being used in a few high-profile games.

        • #5
          Well, a 2% speedup of something that was already pretty fast is nothing to sneeze at. (But yeah, I didn't think Phoronix would care to make a news article about it.)

          FWIW, LZHAM is one of the first available third-party 7-Zip codec plugin DLLs. It's actually pretty useful as a 7-Zip plugin because it can decompress (more or less) as fast as .zip, but with a compression ratio very similar to LZMA's.
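
          For anyone wondering what calling it directly (outside of 7-Zip) looks like, here's a minimal one-shot decompression sketch against LZHAM's C API. Caveat: the function and field names (lzham_decompress_memory, lzham_decompress_params, m_dict_size_log2, LZHAM_DECOMP_STATUS_SUCCESS) are written from memory of lzham.h, so check them against the header in your checkout before relying on this.
          Code:
          // Sketch of one-shot, in-memory LZHAM decompression. Names are from
          // memory of lzham.h and may differ slightly between releases.
          #include <cstring>
          #include <vector>
          #include "lzham.h"

          // The one-shot call needs the uncompressed size and the dictionary size
          // (log2) the compressor used, so store those alongside your data.
          bool decompress_lzham_buffer(const std::vector<unsigned char> &compressed,
                                       size_t uncompressed_size,
                                       unsigned dict_size_log2,
                                       std::vector<unsigned char> &out)
          {
              lzham_decompress_params params;
              std::memset(&params, 0, sizeof(params));
              params.m_struct_size = sizeof(params);
              params.m_dict_size_log2 = dict_size_log2;

              out.resize(uncompressed_size);
              size_t dst_len = out.size();

              lzham_decompress_status_t status = lzham_decompress_memory(
                  &params,
                  out.data(), &dst_len,
                  compressed.data(), compressed.size(),
                  nullptr /* optional adler32 of the decompressed data */);

              return status == LZHAM_DECOMP_STATUS_SUCCESS && dst_len == uncompressed_size;
          }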

          • #6
            Originally posted by AnonymousCoward
            I agree; I don't see anyone moving away from ZIP/zlib anytime soon, and if they do, they will implement their own asset packages to make it more difficult for users or other third parties to extract.

            From my limited experience, loading the data into the GPU's memory is slow and would take longer than simply decompressing the data with the CPU.

            I'd appreciate it if someone more experienced would comment on this as well.
            Well, zlib is stuck with ~21-year-old (i.e., ancient) compression technology. It comes from the era of floppy disks, Windows 3.1, and 9600 baud modems. It's a nice lib for what it does, but lossless data compression technology has moved on.

            The vast majority of developers will not implement their own data compression codecs. It's trivial to code up your own custom asset package system, but creating a modern codec with usable performance is not something you can do in a few days of work. Modern LZ codecs are complex, very specialized pieces of software. Just testing each revision of LZHAM to ensure it doesn't break takes many hours (on a 20-core Linux workstation pretty much dedicated to just testing my various lossy/lossless codecs).
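
            To put that split in concrete terms: the "asset package" half really is just bookkeeping plus one memory-to-memory decompress hook, something like the purely illustrative, zlib-backed sketch below (none of it is from LZHAM or any shipping engine). Everything hard lives behind that one call, which is why most teams stay on zlib until a drop-in replacement codec exists.
            Code:
            // Purely illustrative: the thin, codec-agnostic hook an asset package
            // tends to route everything through. It only shows where a codec plugs in.
            #include <cstdint>
            #include <vector>
            #include <zlib.h>

            enum class Codec : uint8_t { Zlib = 0 /*, Lzham = 1, ... */ };

            // One entry in a hypothetical package's table of contents.
            struct AssetEntry {
                Codec    codec;
                uint64_t compressed_size;
                uint64_t uncompressed_size;
                uint64_t offset;   // byte offset of the compressed bytes in the package
            };

            // Memory-to-memory decompression, dispatched on the per-asset codec ID.
            bool decompress_asset(const AssetEntry &entry,
                                  const unsigned char *src,
                                  std::vector<unsigned char> &dst)
            {
                dst.resize(entry.uncompressed_size);
                switch (entry.codec) {
                case Codec::Zlib: {
                    uLongf dst_len = static_cast<uLongf>(dst.size());
                    return uncompress(dst.data(), &dst_len, src,
                                      static_cast<uLong>(entry.compressed_size)) == Z_OK
                           && dst_len == entry.uncompressed_size;
                }
                // A newer codec with a one-shot memory API (LZHAM, etc.) is just
                // another case here; the package format itself doesn't change.
                default:
                    return false;
                }
            }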

            • #7
              Originally posted by richgel999
              Well, a 2% speedup of something that was already pretty fast is nothing to sneeze at. (But yeah, I didn't think Phoronix would care to make a news article about it.)

              FWIW, LZHAM is one of the first available third-party 7-Zip codec plugin DLLs. It's actually pretty useful as a 7-Zip plugin because it can decompress (more or less) as fast as .zip, but with a compression ratio very similar to LZMA's.
              What compiler are you using to get your results? I find GCC, MSVC, and Clang give pretty different results depending on compiler flags and other things like that...

              • #8
                Originally posted by AnonymousCoward
                I agree; I don't see anyone moving away from ZIP/zlib anytime soon, and if they do, they will implement their own asset packages to make it more difficult for users or other third parties to extract.

                From my limited experience, loading the data into the GPU's memory is slow and would take longer than simply decompressing the data with the CPU.

                I'd appreciate it if someone more experienced would comment on this as well.
                I have even less experience (read: "no experience"), but it seems to me that if the data is going to be used on the GPU and copying is slow, it makes more sense to pass it around compressed and decompress it on the GPU, where it would be used.
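
                Host-side, the plumbing for that idea is roughly: upload the compressed bytes once, bind them plus an output buffer to a compute shader, dispatch, and the expanded data never has to round-trip through system RAM. The sketch below shows only that plumbing (GLFW and glad are assumed for context setup, and the "decompressor" shader is a trivial placeholder, nothing like a real one).
                Code:
                // Bare-bones host-side plumbing for "keep it compressed until the GPU":
                // upload compressed bytes once, expand them with a compute shader, and
                // leave the output in GPU memory. The shader below is a placeholder
                // that just XORs words; a real GPU decompressor is far more involved.
                // Assumes GLFW + glad; build e.g.: g++ gpu.cpp glad.c -lglfw -ldl
                #include <cstdio>
                #include <vector>
                #include <glad/glad.h>
                #include <GLFW/glfw3.h>

                static const char *kShaderSrc = R"(#version 430
                layout(local_size_x = 64) in;
                layout(std430, binding = 0) readonly  buffer Src { uint src_words[]; };
                layout(std430, binding = 1) writeonly buffer Dst { uint dst_words[]; };
                void main() {
                    uint i = gl_GlobalInvocationID.x;
                    if (i >= uint(src_words.length())) return;
                    dst_words[i] = src_words[i] ^ 0xA5A5A5A5u;  // stand-in for real decoding
                }
                )";

                int main()
                {
                    // Hidden window just to get a GL 4.3 context with compute shaders.
                    if (!glfwInit()) return 1;
                    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
                    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
                    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
                    GLFWwindow *win = glfwCreateWindow(64, 64, "", nullptr, nullptr);
                    if (!win) return 1;
                    glfwMakeContextCurrent(win);
                    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) return 1;

                    // Build the placeholder "decompression" compute shader.
                    GLuint cs = glCreateShader(GL_COMPUTE_SHADER);
                    glShaderSource(cs, 1, &kShaderSrc, nullptr);
                    glCompileShader(cs);
                    GLuint prog = glCreateProgram();
                    glAttachShader(prog, cs);
                    glLinkProgram(prog);

                    // Upload the "compressed" data once; it never comes back to the CPU.
                    std::vector<GLuint> compressed(1 << 20, 0x12345678u);
                    GLsizeiptr bytes = compressed.size() * sizeof(GLuint);
                    GLuint buf[2];
                    glGenBuffers(2, buf);
                    glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf[0]);
                    glBufferData(GL_SHADER_STORAGE_BUFFER, bytes, compressed.data(), GL_STATIC_DRAW);
                    glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf[1]);
                    glBufferData(GL_SHADER_STORAGE_BUFFER, bytes, nullptr, GL_STATIC_DRAW);

                    // Expand on the GPU; the output buffer stays resident for later use.
                    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf[0]);
                    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, buf[1]);
                    glUseProgram(prog);
                    glDispatchCompute((GLuint)((compressed.size() + 63) / 64), 1, 1);
                    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
                    glFinish();

                    std::puts("dispatched placeholder GPU 'decompression'");
                    glfwDestroyWindow(win);
                    glfwTerminate();
                    return 0;
                }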

                • #9
                  Isn't that what Civ5 does? They invented a compression scheme for textures and decompress them into GPU buffers using compute shaders.

                  • #10
                    Originally posted by curaga
                    Isn't that what Civ5 does? They invented a compression scheme for textures and decompress them into GPU buffers using compute shaders.
                    Do they seriously trust compression schemes used by GPU vendors so little?
