
Thread: Btrfs In Linux 3.10 Gets Skinny Extents, Quota Rebuilds

  1. #11
    Join Date
    May 2013
    Posts
    13

    Default

    Or should I use another mode of encryption?

  2. #12
    Join Date
    Aug 2012
    Location
    Pennsylvania, United States
    Posts
    1,921

    Default

    Quote Originally Posted by ᘜᕟᗃᒟ View Post
    So would a btrfs filesystem with LZO compression and LVM encryption be reliable?
    You wouldn't do LVM -> Encryption -> btrfs -> LZO

    You'd encrypt individual partitions with LUKS, like /dev/sda1 and /dev/sdb1, then when you're making the btrfs filesystem you'd do

    mkfs.btrfs /dev/sda1 /dev/sdb1 -L EncryptedRoot

    then turn on LZO compression. Btrfs IS its own volume manager, so unless you're continually resizing partitions there's no need to do LVM AND btrfs.

    I'm not positive that you can do compression on top of encryption, though. You might be able to, but I'm not 100% positive.
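A rough sketch of the stack described above (the device names and the mapper names like cryptroot1 are illustrative; note that once LUKS is opened, mkfs.btrfs runs against the /dev/mapper devices rather than the raw partitions):

```shell
# Encrypt each partition with LUKS (this destroys existing data)
cryptsetup luksFormat /dev/sda1
cryptsetup luksFormat /dev/sdb1

# Open the encrypted containers; they show up under /dev/mapper
cryptsetup open /dev/sda1 cryptroot1
cryptsetup open /dev/sdb1 cryptroot2

# One btrfs filesystem spanning both decrypted devices
mkfs.btrfs -L EncryptedRoot /dev/mapper/cryptroot1 /dev/mapper/cryptroot2

# Mount with LZO compression; btrfs compresses data before it
# reaches dm-crypt, so compression sits on top of encryption
mount -o compress=lzo /dev/mapper/cryptroot1 /mnt
```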

  3. #13

    Default

    Quote Originally Posted by ᘜᕟᗃᒟ View Post
    So would a btrfs filesystem with LZO compression and LVM encryption be reliable?
    have a read of
    https://btrfs.wiki.kernel.org/index...._encryption.3F

  4. #14
    Join Date
    Feb 2010
    Posts
    28

    Thumbs up

    Hey!

    Maybe the skinny EXTents will fix that fractal fragmentation nightmare ... stupid buffer under-run ...

    But to get off topic entirely, reading the newsgroups on Tux3 ... apparently ... Hirofumi changed the horribly slow itable btree search to a
    simple "allocate the next inode number" counter, and shazam! The slowpoke became a superstar.

    Amazing ... simply amazing ...

  5. #15
    Join Date
    Aug 2012
    Location
    Pennsylvania, United States
    Posts
    1,921

    Default

    Quote Originally Posted by juxtatux View Post
    Hey!

    Maybe the skinny EXTents will fix that fractal fragmentation nightmare ... stupid buffer under-run ...

    But to get off topic entirely, reading the newsgroups on Tux3 ... apparently ... Hirofumi changed the horribly slow itable btree search to a
    simple "allocate the next inode number" counter, and shazam! The slowpoke became a superstar.

    Amazing ... simply amazing ...
    That already got reported on Jux, see Michael's article about Tux3 and Dbench. "Allocate the next inode number" is always the fastest way to do it, it's just not always the smartest. See the article and the follow-up for the explanation of why.

  6. #16
    Join Date
    Feb 2010
    Posts
    28

    Cool

    Quote Originally Posted by Ericg View Post
    That already got reported on Jux, see Michael's article about Tux3 and Dbench. "Allocate the next inode number" is always the fastest way to do it, it's just not always the smartest. See the article and the follow-up for the explanation of why.
    Ah, I get it now - you've done that so the front end of tux3 won't encounter any blocking operations and so can offload 100% of operations. It also explains the sync call every 4 seconds to keep tux3 back end writing out to disk so that a) all the offloaded work is done by the sync process and not measured by the benchmark, and b) so the front end doesn't overrun queues and throttle or run out of memory.

    Oh, so nicely contrived. But terribly obvious now that I've found it. You've carefully crafted the benchmark to demonstrate a best case workload for the tux3 architecture, then carefully not measured the overhead of the work tux3 has offloaded, and then not disclosed any of this in the hope that all people will look at is the headline.

    This would make a great case study for a "BenchMarketing For Dummies" book.

    When you're right you're right!

  7. #17
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,636

    Default

    You're posting in the wrong thread...

  8. #18
    Join Date
    Feb 2010
    Posts
    28

    Default

    Quote Originally Posted by GreatEmerald View Post
    You're posting in the wrong thread...
    My bad ... well ... On a btrfs *note* I hope the skinny extents fix the fragmentation issues all the same!

  9. #19
    Join Date
    Jan 2013
    Posts
    1,059

    Default

    Has anyone tried to break BTRFS on purpose and document it?
    Like a hard reset during power-on, during writes, during recovery, during recovery of a recovery, etc.?

    I wonder how reliable it is compared to ext with data=journal, excluding resistance to bitrot.

  10. #20
    Join Date
    Aug 2012
    Location
    Pennsylvania, United States
    Posts
    1,921

    Default

    Quote Originally Posted by brosis View Post
    Has anyone tried to break BTRFS on purpose and document it?
    Like a hard reset during power-on, during writes, during recovery, during recovery of a recovery, etc.?

    I wonder how reliable it is compared to ext with data=journal, excluding resistance to bitrot.
    I broke btrfs by accident one time during an install of F18 because I accidentally killed the power to it (pulled the wrong cable >.>) during an update. I didn't try to fix it, though; for all I know a quick 'btrfsck /dev/sda1' would've fixed it, but since it was a brand new install I just redid it.

    And I know btrfs can detect even a single bit of corruption; that got mentioned in a report Michael covered. A company's RAID hardware was failing, btrfs noticed that one bit had been corrupted so far and printed warnings, and the failing hardware was caught.
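For anyone who wants to probe this themselves, btrfs ships its own tooling for exactly that kind of check; a rough sketch (the mount point and device name are illustrative):

```shell
# Verify every data and metadata checksum on a mounted filesystem
btrfs scrub start -B /mnt    # -B runs in the foreground and prints a summary
btrfs scrub status /mnt

# Check an unmounted filesystem (read-only by default)
btrfs check /dev/sda1

# Per-device error counters (read, write, corruption) accumulated so far
btrfs device stats /mnt
```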
