As pointed out, the compression tests are fairly meaningless because the data is all zeros; a real-world test using various kinds of data would be far more representative. It would also be useful to see how btrfs compares with other filesystems in terms of throughput, latency, and CPU consumption when using multiple reader/writer threads. From the benchmarking I've done, btrfs consumes more CPU than ext4 and XFS when dealing with multiple threads. Also, a filesystem is only really useful if fsck is dependable, so some quantitative measurements on that would be most useful too.
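To see why all-zeros data makes a compression benchmark meaningless, here's a quick sketch using the stdlib's zlib as a stand-in compressor (not the same codec btrfs uses, but the effect is the same for any LZ-family algorithm):

```python
import os
import zlib

# All-zeros data compresses to almost nothing, so a benchmark over it
# mostly measures memcpy speed, not realistic compression behavior.
zeros = b"\x00" * 1_000_000
random_data = os.urandom(1_000_000)  # worst case: incompressible

zeros_ratio = len(zlib.compress(zeros)) / len(zeros)
random_ratio = len(zlib.compress(random_data)) / len(random_data)

print(f"zeros compress to {zeros_ratio:.4%} of original size")
print(f"random bytes compress to {random_ratio:.2%} of original size")
```

Real workloads sit somewhere between those two extremes, which is exactly why a mixed corpus (text, binaries, already-compressed media) is needed for a representative test.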
And is there any reason why lzo should not be enabled by default on a modern processor? It probably won't be able to compress already-compressed data, but would it make disk usage worse in those cases?
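The worry about already-compressed data can be illustrated with a small sketch (again using stdlib zlib as a stand-in for lzo): recompressing compressed data adds only a tiny fixed overhead, which is why btrfs can simply detect that the "compressed" extent isn't smaller and store it uncompressed instead.

```python
import os
import zlib

# Compress once, then try compressing the output again. The second
# pass gains nothing and adds a few bytes of framing overhead; a
# filesystem can detect this and fall back to storing the raw data.
original = os.urandom(256 * 1024)   # incompressible stand-in
once = zlib.compress(original)      # "already compressed" data
twice = zlib.compress(once)         # second pass cannot shrink it

print(f"second pass changed size by {len(twice) - len(once):+d} bytes")
```

So in the worst case the cost is CPU time spent discovering the data is incompressible, not meaningfully worse disk usage.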
Recovering data from a failed drive where compression was used is close to impossible, since you can't scan the raw device directly for a file's data.
They ought to add support for Google's Snappy compression.
Both LZ4 and Snappy patches have been submitted to btrfs by various people, but I don't think they've made it into a release yet (at least, I see no documentation for lz4 or snappy mount options on the btrfs wiki).
Okay, LZO is fast. Really. Especially at decompression. As for Snappy and LZ4, they seem to be more or less on par in most benchmarks; sometimes one performs better than the other. To me it looks like LZ4 is a bit better in terms of speed-to-compression-ratio, but that's not guaranteed and depends on the nature of the data being compressed. So there is no EPIC WIN that could justify including very similar (simple, speed-optimized, LZ-based) algorithms.
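The speed-to-ratio tradeoff being argued about can be sketched with zlib's compression levels (a stand-in only; lzo, lz4, and snappy aren't in the Python stdlib, and their absolute numbers differ a lot):

```python
import time
import zlib

# Rough illustration of the speed-vs-ratio tradeoff: higher effort
# buys a (sometimes marginal) better ratio at a real CPU cost.
data = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(out) / len(data)
    print(f"level {level}: ratio {ratio:.3f}, {elapsed * 1000:.1f} ms")
```

For codecs as close together on this curve as LZ4 and Snappy, the differences often disappear into noise depending on the input, which is the point above.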