
Thread: Testing Out Btrfs In Ubuntu 10.10

  1. #1
    Join Date
    Jan 2007
    Posts
    14,240

    Default Testing Out Btrfs In Ubuntu 10.10

    Phoronix: Testing Out Btrfs In Ubuntu 10.10

    Yesterday we reported that Ubuntu 10.10 gained Btrfs installation support. Since then we have been trying out this Btrfs support in Ubuntu "Maverick Meerkat" and have a fresh set of Btrfs benchmarks to serve up.

    http://www.phoronix.com/vr.php?view=15067

  2. #2
    Join Date
    Oct 2009
    Posts
    198

    Default

    Mmmh, still no CPU usage stats.

    Can someone enlighten me: where does Btrfs come from, and what is the main difference from ext3/4 that makes it seem to perform somewhat better than ext4?

  3. #3
    Join Date
    Mar 2007
    Posts
    2

    Angry Cpu stats

    We really need to see CPU charts to judge whether 'this' is better than 'that'.
    I know compression takes CPU time, it has to. Now the question is how much?
    Think about Atom CPUs and a filesystem that uses compression, ouch.

    It would be nice to have near-real-time CPU charts with every test.

    One test could show something that is on the slow side but uses half the CPU time.

    And I don't think it would be too hard to track RAM usage on top while you're doing CPU.
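
    For what it's worth, you can get a rough CPU-cost number for any disk test from the shell itself; `time` reports user and system CPU alongside wall-clock time. This is just a sketch using `dd`, not how Phoronix runs its benchmarks:

```shell
# Write 64 MiB with an fsync and report wall-clock vs. CPU time.
# user + sys is the CPU the test actually burned; the gap up to
# "real" is time spent waiting on the disk.
time dd if=/dev/zero of=/tmp/fs-test.bin bs=1M count=64 conv=fsync

# Clean up the scratch file.
rm -f /tmp/fs-test.bin
```

    On a compressing filesystem the same write should show noticeably higher user/sys time, which is exactly the trade-off people are asking to see charted.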

  4. #4
    Join Date
    Apr 2010
    Location
    Alexandria, Egypt
    Posts
    5

    Default Btrfs compression ??

    Can anyone tell me what this "Btrfs compression" is?

  5. #5
    Join Date
    Sep 2008
    Posts
    4

    Default compression

    Hi, I'm curious about the transparent compression. I searched Google but didn't find much useful information on it (just that it is zlib compression). Does it actually make the files stored on the hard drive smaller? And if so, wouldn't its performance be highly dependent on the type of file and how much it can be compressed? (Like media files performing badly, while text files do well?)

    And as the others have asked, I would guess CPU usage is very important with compression enabled. Along with that, would it support parallel processing? I rarely use both cores on my computer since most tasks can't, and if the filesystem could harness the core that would otherwise sit idle during a single-threaded task, that would be awesome as well.

    Excited for a new FS!
    jetpeach

  6. #6
    Join Date
    Oct 2009
    Posts
    101

    Default

    Btrfs info

    Btrfs is intended to be more scalable and flexible than ext3/4. Essentially, ext4 is a stop-gap measure to get some more life out of the ext design while Btrfs matures. Some of the things Btrfs supports that ext4 does not: quick and easy snapshots (better than LVM snapshots, since they are aware of the filesystem structure), online defragmentation, and online growing and shrinking. Btrfs, once complete, should do pretty much everything that ZFS does and some things that ZFS doesn't.

    jetpeach:
    The compression does make the files stored on the disk smaller, and its effectiveness does depend on the type of file.
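
    To illustrate how strongly the data type matters, here is a quick sketch using plain gzip — the same zlib algorithm Btrfs uses, though not Btrfs itself, so treat the exact ratios as ballpark only:

```shell
# Compare zlib-style (gzip) compression on 1 MiB of repetitive text
# vs. 1 MiB of incompressible random data.
yes "the quick brown fox jumps over the lazy dog" | head -c 1048576 > /tmp/text.dat
head -c 1048576 /dev/urandom > /tmp/random.dat

for f in /tmp/text.dat /tmp/random.dat; do
    orig=$(wc -c < "$f")
    comp=$(gzip -c "$f" | wc -c)
    echo "$f: $orig bytes -> $comp bytes compressed"
done

rm -f /tmp/text.dat /tmp/random.dat
```

    The text file shrinks to a tiny fraction of its size, while the random file actually comes out slightly larger than it went in because of gzip's framing overhead — which is why already-compressed media files gain nothing from filesystem compression.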

  7. #7
    Join Date
    Feb 2009
    Posts
    34

    Default A Short History of btrfs

    “A short history of btrfs” (LWN.net, July 22, 2009) by Valerie Aurora (formerly Henson) is available at http://lwn.net/Articles/342892/

  8. #8
    Join Date
    Feb 2009
    Posts
    34

    Default

    Btrfs provides the foundation for many useful features.

    Snapshots are point-in-time captures of the data. Most people would recognize system rollback, which depends on snapshots.

    Backup is the feature most people neglect until the data is lost. With snapshots, taking one is almost instantaneous.

    When combined with a distributed data storage system like Ceph, you get replication, protection, and performance.
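
    As a concrete sketch of how cheap those snapshots are (this needs root, btrfs-progs, and an already-mounted Btrfs volume — the device and paths below are placeholders, so treat it as illustration rather than a recipe):

```shell
# Take a copy-on-write snapshot of a mounted Btrfs volume.
# It appears almost instantly; only blocks that later diverge
# from the original consume additional space.
btrfs subvolume snapshot /mnt/data /mnt/data/snap-2010-06-10

# "Backing up" is then just reading files out of the frozen snapshot.
ls /mnt/data/snap-2010-06-10
```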

  9. #9
    Join Date
    Oct 2008
    Posts
    3,012

    Default

    Quote Originally Posted by Blue Beard View Post
    “A short history of btrfs” (LWN.NET July 22, 2009) by Valerie Aurora (formerly Henson) is available at http://lwn.net/Articles/342892/
    That's a great article on btrfs, everyone should read it.

    Btrfs (B-tree file system) really isn't being created to increase performance over existing file systems; the idea is to get a bunch of really cool new features and to optimize things enough to keep them from slowing the system down.

  10. #10
    Join Date
    Oct 2008
    Posts
    3,012

    Default

    Quote Originally Posted by jetpeach View Post
    Hi, I'm curious about the transparent compression - I searched Google but didn't find a lot of useful information on it (just that it is zlib compression). I'm curious, does it actually make the files stored on the hard drive smaller?
    Yes. Although the main reason to do this is to reduce the amount of data that has to be read from the disk, and therefore the number of seeks, speeding up access by relying on a fast CPU rather than a slow HDD to do the majority of the work.

    And if so, then wouldn't its performance be highly dependent on the type of file and how much it can be compressed? (Like media files performing badly, while text files doing well?)
    Yes. Especially when it comes to artificial benchmarks, since they might just write all zeroes or ones to the disk, which is far more compressible than anything you'd run into in real life. Actually, the file system could probably be made smart enough to heuristically stop compressing files that are already compressed (like video) in order to avoid the performance penalty. I don't know whether that's already being done.
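
    For reference, turning the compression on is just a mount option (zlib is the only codec as of this writing; the device and mount point below are placeholders):

```shell
# Mount a Btrfs volume with transparent compression (root required).
mount -o compress /dev/sdb1 /mnt/data

# Or persistently, via an /etc/fstab entry:
# /dev/sdb1  /mnt/data  btrfs  defaults,compress  0  0
```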
