
Thread: Btrfs LZO Compression Performance

  1. #11
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by BenderRodriguez View Post
    You may be right.
What would matter then is the speed of your CPU. What you're compressing also matters heavily.

    On my system I created a tar file of PDF files. Some of the PDFs are text, but most are mainly images. Since they are already compressed quite a bit, you don't benefit a whole lot from compressing them again.

    This compresses at about 60-70 MB/s and decompresses at close to 400 MB/s. The space saved is small, however: only 10-20 MB out of 246 MB. So using compression on something like that is not worth it.

    Meanwhile I used dd to create a 573 MB file full of zeros. That compresses down to just over 2.3 MB and takes 3/4 of a second.

    On the other end of the spectrum, a 573 MB file made from /dev/urandom takes almost 10 seconds to compress and is actually slightly larger afterwards.
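    Something like this reproduces the test (a rough sketch using the lzop command-line tool as a stand-in for the in-kernel compressor; the file names are placeholders):

        # create a 573 MB file of zeros and one of random data
        dd if=/dev/zero of=zeros.bin bs=1M count=573
        dd if=/dev/urandom of=random.bin bs=1M count=573

        # time LZO compression of each (lzop keeps the input file by default)
        time lzop zeros.bin       # shrinks to a few MB almost instantly
        time lzop random.bin      # much slower; the .lzo output is slightly larger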

    So even on very fast SSDs it MAY be worth it. If you care more about read speeds it may help. If you care about random access it may hurt.

    It also heavily depends on how smart the system is about using compression.


    It's pointless to compress JPG or GIF files or any sort of common multimedia file. They are already compressed heavily using specialized algorithms, and it's extremely unlikely that LZO is going to improve on that. But text files, documentation, program files, and many others can benefit from compression. Some databases may benefit too, but you would think that if they did, they would already be using compression internally.
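    A quick way to check whether a given file is worth compressing (a sketch; the file names are just examples):

        # compare original size to LZO-compressed size, in bytes
        for f in photo.jpg report.txt; do
            printf '%s: %s -> ' "$f" "$(stat -c%s "$f")"
            lzop -c "$f" | wc -c
        done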

  2. #12
    Join Date
    Jul 2009
    Location
    Montana
    Posts
    3

    Default Sandforce SSD

    I'd be really interested in seeing how this works on an SSD with a SandForce controller. SandForce controllers are among the fastest precisely because the drive itself compresses the data, so filesystem compression may actually hurt performance on those drives.

  3. #13
    Join Date
    Jul 2008
    Posts
    1,719

    Default

    And because some files are just not compressible, reiser4 has a simple test that is almost good enough: if it detects that a file cannot be compressed, it doesn't even try.

    I'm using an SSD for /, with /var, /tmp, and /boot on separate partitions. With reiser4 I was able to store 5 GB more on an 80% full 64 GB disk compared to ext4.

  4. #14
    Join Date
    May 2010
    Posts
    10

    Default

    Quote Originally Posted by energyman View Post
    And because some files are just not compressible, reiser4 has a simple test that is almost good enough: if it detects that a file cannot be compressed, it doesn't even try.
    btrfs does the same thing -- it's the difference between the `compress` and `compress-force` mount options.

    I am using LZO compression on an S101 netbook and an M4300 notebook with spectacular results. Also remember that btrfs only compresses existing data when that data is modified, and even then it only compresses the new extent ... to compress an existing disk completely, you need to mount with a compression option and then initiate a rebalance (which can be done online).
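    Roughly like this (a sketch; /mnt/btrfs is a placeholder, and the balance command's syntax differs a bit between btrfs-progs versions):

        # remount with LZO compression so new writes are compressed
        mount -o remount,compress=lzo /mnt/btrfs

        # rewrite existing extents so they pass through the compressor too;
        # this can run while the filesystem is mounted and in use
        btrfs balance start /mnt/btrfs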

    It's too bad about reiser4 ... I never used it myself, but I've always read very good things about it. It's unfortunate that Hans was so difficult to work with, and ... well ... other things too. Alas, it has no vendor to back it (to get it into mainline) -- btrfs is the future here.

    C Anthony

  5. #15
    Join Date
    Jun 2008
    Posts
    13

    Default

    It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. It's the same thing that happens when encrypted disks show better performance than normal disks.

    The catch is that your memory and CPU get taxed more, and the computer ends up slower for everything else.

  6. #16
    Join Date
    May 2010
    Posts
    10

    Default

    Quote Originally Posted by mbouchar View Post
    It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. It's the same thing that happens when encrypted disks show better performance than normal disks.

    The catch is that your memory and CPU get taxed more, and the computer ends up slower for everything else.
    It's a pretty well established idea that on-disk compression can (and does) lead to impressive performance increases under many workloads. It's not a simple "yay" or "nay". The fact is, in the time your disk seeks once, your CPU has already burned through several million cycles (an 8 ms seek at 3 GHz is roughly 24 million cycles) ... it's like light speed vs. the fastest human vehicle -- anything you can shave off the latter is probably a win, even if it already seems "pretty fast".

    There are even several workloads that benefit from _memory_ compression ... because RAM -- the uber spaceship of 2010+ -- is still peanuts compared to c, the CPU in this light-speed metaphor. Everything that isn't your CPU is a cache to your CPU; the less time it takes to get data there, the better. Data locality is king.

    http://lwn.net/Articles/426853/

    "Zcache doubles RAM efficiency while providing a significant performance boost on many workloads."

    Both zcache and btrfs (not sure about ZFS) use LZO ... the simple truth is your CPU is a lazy bastard that spends most of its time blaming its poor efficiency on the rest of the team ;-)
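    If you want to try memory compression yourself, zram -- also in the staging tree -- is the easiest related mechanism to experiment with. A sketch from memory, so double-check the sysfs paths on your kernel:

        # create a 512 MB compressed RAM block device (zram uses LZO)
        modprobe zram num_devices=1
        echo $((512 * 1024 * 1024)) > /sys/block/zram0/disksize

        # use it as high-priority compressed swap
        mkswap /dev/zram0
        swapon -p 10 /dev/zram0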

    C Anthony

  7. #17
    Join Date
    Jan 2011
    Posts
    17

    Default

    I use btrfs + LZO compression on the latest Linux images for the O2 Joggler.

    http://joggler.exotica.org.uk/
    http://en.wikipedia.org/wiki/O2_Joggler

    Using slow USB flash devices (mine does ~9 MB/s write and ~27 MB/s read), btrfs with LZO feels significantly faster than zlib (and uses less CPU). Not an actual benchmark, of course.
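    If anyone wants to repeat the comparison, something like this works as a rough, non-scientific check (the device and paths are placeholders):

        # mount the same filesystem with each algorithm in turn and time a copy
        mount -o compress=zlib /dev/sdb1 /mnt/test
        time sh -c 'cp -r /path/to/data /mnt/test && sync'

        umount /mnt/test
        mount -o compress=lzo /dev/sdb1 /mnt/test
        time sh -c 'cp -r /path/to/data /mnt/test && sync'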

  8. #18
    Join Date
    Sep 2010
    Posts
    1

    Default

    Quote Originally Posted by BenderRodriguez View Post
    Why do I get the feeling that zlib/lzo mode speeds up iozone and fs-mark only because the created files are empty and thus compress almost infinitely well?
    Me too. I think this testing should be run on some video files to see the real benefit of compression.

    I don't think the 9x iozone result applies to the real world.
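    A quick way to check would be to write real, already-compressed data instead -- a rough sketch (paths and sizes are placeholders, not from the article):

        # time writing a real video file to the compressed mount
        time sh -c 'cp /path/to/movie.mkv /mnt/btrfs/ && sync'

        # or point iozone at the compressed filesystem explicitly
        # (-i 0 = write/rewrite tests, -i 1 = read/reread tests)
        iozone -s 512m -r 64k -i 0 -i 1 -f /mnt/btrfs/iozone.tmp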

  9. #19
    Join Date
    Jul 2008
    Posts
    1,719

    Default

    Quote Originally Posted by mbouchar View Post
    It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. It's the same thing that happens when encrypted disks show better performance than normal disks.

    The catch is that your memory and CPU get taxed more, and the computer ends up slower for everything else.
    Except that RAM is dirt cheap and CPUs are underworked almost all the time.

  10. #20
    Join Date
    Apr 2010
    Posts
    1,946

    Default

    It only needs fsck now!
