
Thread: Ubuntu 12.10 File-Systems: Btrfs, EXT4, XFS

  1. #21
    Join Date
    Nov 2011
    Posts
    24

    Default

    Quote Originally Posted by Ericg View Post
    Side note: Why is Btrfs so good at threaded IO? Like what the f*ck? That's a huge jump up from ext4 and xfs. Is it because of Btrfs's design or what?
    COW "converts" a bunch of random writes into sequential writes; the downside is that the files will become more fragmented than with a traditional update-in-place filesystem.
    Last edited by jabl; 09-18-2012 at 02:59 AM. Reason: remove whitespace
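
    A toy sketch of the kind of workload jabl is describing (offsets and sizes are arbitrary). On an update-in-place filesystem like ext4 or XFS each write seeks to its logical offset; on btrfs the same overwrites are redirected into fresh, sequentially allocated extents, which is fast now but fragments the file over time:

```shell
# Issue small writes at scattered offsets into a scratch file.
# ext4/xfs: each write lands in place (random disk I/O).
# btrfs (COW): each overwrite goes to a new extent (sequential disk I/O).
f=$(mktemp)
for off in 7 2 9 1 5; do
    # one 4 KiB block at block offset $off, overwriting in place
    dd if=/dev/zero of="$f" bs=4096 seek="$off" count=1 conv=notrunc status=none
done
ls -l "$f"    # file size reflects the highest offset written
rm -f "$f"
```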

  2. #22
    Join Date
    Nov 2011
    Posts
    24

    Default

    Quote Originally Posted by russofris View Post
    Unfortunately, I am still stuck with the perf/regression testing, and still have to write a CBA report for the directors. The other option is to wait for our vendor (RHEL or OEL) to adopt the new FS as a default/recommended option and implement it during the next cycle.
    ext4 is the default fs in RHEL6. If you're stuck on an older release, well, sucks to be you..

    (Another big advantage of ext4 over ext3, in addition to the ones mentioned previously, is much faster fsync: ext3 cannot really fsync an individual file; instead, the entire file system is synced whenever fsync() is called.)
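
    The difference is easy to trigger: dd's conv=fsync flag calls fsync() on the output file before exiting. The commands below are a sketch; the interesting part is what the filesystem does underneath:

```shell
# Write one 4 KiB block and fsync() it before dd exits (conv=fsync).
# On ext4 only this file's pages must hit the disk before dd returns;
# on ext3 the journal commit forced by fsync() drags along every other
# dirty file on the filesystem.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4096 count=1 conv=fsync status=none
stat -c %s "$f"
rm -f "$f"
```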

  3. #23
    Join Date
    Jan 2010
    Location
    France
    Posts
    20

    Default

    Michael, could you benchmark EXT4 with different journal sizes?

    According to this article (two and a half years old), it can make a big difference with small files: http://www.linux-mag.com/id/7666/
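
    For reference, the journal size is set at mkfs time via the -J option; a sketch (the device name is a placeholder, and the command destroys whatever is on it):

```shell
# Create an ext4 filesystem with an explicitly sized journal.
# /dev/sdXN is a placeholder device -- this wipes it.
mkfs.ext4 -J size=128 /dev/sdXN   # 128 MiB journal instead of the default
```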

  4. #24
    Join Date
    Jul 2008
    Posts
    565

    Default

    Dpkg is incredibly, horribly, painfully slow with btrfs due to fsync calls. Would have been nice to see that added to the benchmarks.
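
    A commonly cited (and deliberately unsafe) workaround is libeatmydata, which LD_PRELOADs a no-op fsync() under dpkg; fast, but a crash mid-install can leave the package database corrupted. Sketch, with a placeholder package name:

```shell
# Run dpkg with fsync() turned into a no-op via libeatmydata.
# Fast on btrfs, but crash-unsafe: only for throwaway systems.
sudo eatmydata dpkg -i some-package.deb   # package name is a placeholder
```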

  5. #25
    Join Date
    Oct 2010
    Posts
    92

    Default

    Quote Originally Posted by devius View Post
    I'm 99.9% sure that's the case, simply because the HDD in this test would physically never be able to achieve such high random write values. Even the fastest consumer HDD (VelociRaptor 1TB) wouldn't achieve even 1/10 of those 35MB/s.
    Like jabl said, the way btrfs works effectively converts random writes into linear writes - and with that in mind, 35 MB/sec is fairly modest.

  6. #26
    Join Date
    Jan 2008
    Posts
    195

    Default

    You should have also tested JFS. In my database tests, JFS outperforms all the others. Also, you should do a test with two HDDs (or a combination of HDD and SSD) where the journal for the main filesystem is stored on the 2nd device. Again, in my tests this can make a huge difference in overall throughput.

    My results are posted here:
    http://highlandsun.com/hyc/mdb/microbench/july/#sec11
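
    The external-journal setup described above looks roughly like this for ext4 (both device names are placeholders, and both commands destroy data):

```shell
# Put the ext4 journal on a separate (e.g. SSD) device.
mke2fs -O journal_dev /dev/sdY1            # format the journal device
mkfs.ext4 -J device=/dev/sdY1 /dev/sdX1    # main fs using that external journal
```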

  7. #27
    Join Date
    Nov 2011
    Posts
    24

    Default

    Quote Originally Posted by highlandsun View Post
    You should have also tested JFS. In my database tests, JFS outperforms all the others
    Sure, a filesystem which doesn't support barriers (such as JFS or ext2) will obviously outperform one which does (e.g. ext4, btrfs, xfs) on a test which tests synchronous writes (fsync()). In order to avoid an apples to oranges comparison, you need to either

    - Disable the disk write cache when using a filesystem without barrier support (slower but safer).

    - Disable barriers on the filesystems with barrier support (mount with barrier=0) (fast but unsafe).

    - Or even better, use a device with a non-volatile write cache (e.g. a RAID card with battery backed cache) (fast AND safe).
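
    In command form, the first two options look roughly like this (device and mount point are placeholders):

```shell
# Option 1: disable the drive's volatile write cache (slower but safe).
hdparm -W0 /dev/sdX
# Option 2: disable barriers on a barrier-aware filesystem (fast, but
# unsafe on a volatile write cache). ext4 uses barrier=0, XFS uses nobarrier.
mount -o remount,barrier=0 /mnt/test
```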

  8. #28
    Join Date
    Jan 2008
    Posts
    195

    Default

    Quote Originally Posted by jabl View Post
    Sure, a filesystem which doesn't support barriers (such as JFS or ext2) will obviously outperform one which does (e.g. ext4, btrfs, xfs) on a test which tests synchronous writes (fsync()). In order to avoid an apples to oranges comparison, you need to either

    - Disable the disk write cache when using a filesystem without barrier support (slower but safer).

    - Disable barriers on the filesystems with barrier support (mount with barrier=0) (fast but unsafe).

    - Or even better, use a device with a non-volatile write cache (e.g. a RAID card with battery backed cache) (fast AND safe).
    Hm, thanks for pointing that out. Yeah, I probably need to retest with disk write cache disabled.

    Of course, in a dedicated database deployment, I would just have a single DB residing in a dedicated filesystem, and preallocate all of the space for the DB file(s). At that point, metadata updates are irrelevant; they would only occur for the mtime stamps and not for any structural changes, so FS structural corruption would be impossible.
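
    Preallocation along those lines can be done with fallocate(1), which reserves the blocks up front so later writes never change the file's layout; a minimal sketch on a throwaway temp file:

```shell
# Preallocate a file so subsequent writes only ever touch its mtime,
# never its block layout (fallocate(2) under the hood).
f=$(mktemp)
fallocate -l 1M "$f"      # reserve 1 MiB up front
stat -c %s "$f"           # → 1048576
rm -f "$f"
```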

  9. #29
    Join Date
    Sep 2012
    Posts
    7

    Default

    I believe that fsck is as essential as it was with the previous generation of filesystems.

  10. #30
    Join Date
    Nov 2011
    Posts
    24

    Default

    Quote Originally Posted by highlandsun View Post
    Of course, in a dedicated database deployment, I would just have a single DB residing in a dedicated filesystem, and preallocate all of the space for the DB file(s). At that point, metadata updates are irrelevant; they would only occur for the mtime stamps and not for any structural changes, so FS structural corruption would be impossible.
    Yes, in such a situation a database engine can get away with using fdatasync() instead of fsync(). The filesystem still needs to have barrier support in order to provide data integrity guarantees when used on a device with a volatile write cache, though.
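
    Recent coreutils (8.24 and later, an assumption about your environment) expose exactly this distinction: sync FILE calls fsync(), while sync --data FILE calls fdatasync(), flushing file data but skipping pure-metadata updates such as the mtime:

```shell
# fsync() vs fdatasync() on a single file, via coreutils sync(1).
f=$(mktemp)
echo "row" >> "$f"
sync "$f"          # fsync(): flushes data AND metadata (mtime etc.)
sync --data "$f"   # fdatasync(): flushes data only
rm -f "$f"
```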
