Or should I use another mode of encryption?
You wouldn't do LVM -> Encryption -> btrfs -> LZO
Originally Posted by ᘜᕟᗃᒟ
You'd encrypt individual partitions with LUKS, like /dev/sda1 and /dev/sdb1, then once they're unlocked you'd make the btrfs filesystem on the mapper devices:
mkfs.btrfs /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt -L EncryptedRoot
then turn on LZO compression. Btrfs IS its own volume manager, so unless you're continually resizing partitions there's no need to do LVM AND btrfs.
I'm not positive that you can do compression on top of encryption, though; you might be able to, but I'm not 100% sure.
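For anyone wanting to try that layout, here's a rough sketch of the command sequence (device names, mapper names, and the mount point are just examples for illustration; these commands are destructive and need root, so adjust for your own setup):

```shell
# Encrypt each partition with LUKS (this WIPES them!)
cryptsetup luksFormat /dev/sda1
cryptsetup luksFormat /dev/sdb1

# Unlock them; the filesystem goes on the mapper devices, not the raw partitions
cryptsetup luksOpen /dev/sda1 sda1_crypt
cryptsetup luksOpen /dev/sdb1 sdb1_crypt

# One btrfs filesystem spanning both unlocked devices
mkfs.btrfs -L EncryptedRoot /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt

# LZO compression is a mount option, not a mkfs option
mount -o compress=lzo /dev/mapper/sda1_crypt /mnt
```

On the compression question: btrfs compresses at the filesystem layer, before dm-crypt encrypts the blocks underneath it, so the two shouldn't conflict.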
Maybe the skinny EXTents will fix that fractal fragmentation nightmare ... stupid buffer under-run ...
But to get off topic entirely, reading the newsgroups on Tux3 ... apparently ... Hirofumi changed the horribly slow itable btree search to a
simple "allocate the next inode number" counter, and shazam! The slowpoke became a superstar.
Amazing ... simply amazing ...
That already got reported on Jux; see Michael's article about Tux3 and Dbench. "Allocate the next inode number" is always the fastest way to do it, it's just not always the smartest. See the article and its follow-up for the explanation of why.
Originally Posted by juxtatux
Ah, I get it now - you've done that so the front end of tux3 won't encounter any blocking operations and so can offload 100% of operations. It also explains the sync call every 4 seconds to keep tux3 back end writing out to disk so that a) all the offloaded work is done by the sync process and not measured by the benchmark, and b) so the front end doesn't overrun queues and throttle or run out of memory.
Originally Posted by Ericg
Oh, so nicely contrived. But terribly obvious now that I've found it. You've carefully crafted the benchmark to demonstrate a best case workload for the tux3 architecture, then carefully not measured the overhead of the work tux3 has offloaded, and then not disclosed any of this in the hope that all people will look at is the headline.
This would make a great case study for a "BenchMarketing For Dummies" book.
When you're right, you're right!
You're posting in the wrong thread...
My bad ... well ... On a btrfs *note* I hope the skinny extents fix the fragmentation issues all the same!
Originally Posted by GreatEmerald
Has anyone tried to break BTRFS on purpose and document it?
Like a hard reset at power-on, while writing, during recovery, during recovery of recovery, etc?
I wonder how reliable it is compared to EXT with data=journal, excluding resistance to bitrot.
I broke btrfs by accident one time during an install of F18 because I accidentally killed the power to it (pulled the wrong cable >.>) during an update. I didn't try to fix it, though; for all I know a quick 'btrfsck /dev/sda1' would've fixed it. Since it was a brand-new install, I just redid it instead.
Originally Posted by brosis
And I know btrfs can detect even a single bit of corruption; that got mentioned in a review of it that Michael covered. The company's RAID hardware was failing, and btrfs noticed that one bit was corrupted so far and printed warnings, and that's how the failing hardware was caught.
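For reference, that checksum verification can also be triggered manually with a scrub, which reads every block and checks it against the stored checksums (the mount point here is just an example, and it needs root):

```shell
# Kick off a full checksum verification pass in the background
btrfs scrub start /mnt

# Check progress and see any checksum errors found so far
btrfs scrub status /mnt
```

Detected errors also show up in the kernel log, which is presumably how the failing RAID hardware got flagged in that review.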