Results 1 to 8 of 8

Thread: Btrfs vs ext4 Performance

  1. #1
    Join Date
    Jun 2010
    Posts
    237

    Default Btrfs vs ext4 Performance

    One of the things I've been following closely on Phoronix is the development of btrfs. My understanding is that it is intended to have a lot more features and be at least as fast as ext4. Unfortunately, every time I see performance comparisons, btrfs is still slower than ext4. In fact, relative to ext4, btrfs seems to be getting slower. I realize this is partially because of regressions and partly from ext4 getting faster. I'm curious what others think about it. Will it catch up with ext4 on the desktop (as opposed to servers and mobile devices) and eventually become faster or will it always be behind?

  2. #2
    Join Date
    Apr 2011
    Location
    Bogotá / Colombia
    Posts
    4

    Thumbs up Agreed

BTRFS performance is hard to pin down, since regressions, kernel changes, and other kernel-related churn seem to be pushing it back most of the time.
    Don't get me wrong, Phoronix benchmarks rock, but we have to wait at least one more year.

    I ended up choosing EXT4 for "stability" (there are obscure/hidden transaction errors elsewhere) and choosing Debian because things seem to run faster under this system/distro (perhaps due to its lack of compiler optimizations), even with the experimental branch and Linux 2.6.38.

    BTRFS is not so good with databases at this time, and databases are a must (even for hobbyist desktops) when you think seriously about it. But I must confess the only sad thing about EXT4 is its lack of a compression option.

    Compression is the real must-have as processors become more powerful while disks keep a fairly constant transfer rate. On-the-fly decompression tends to make things snappier, as Reiser4 showed some years ago.
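    For what it's worth, btrfs already exposes on-the-fly compression as a mount option (zlib from early on; LZO support landed around kernel 2.6.38). A minimal sketch, with the device and mountpoint names purely as assumptions:

    ```shell
    # Sketch only: /dev/sda2 and /mnt/data are assumed names.
    # Mount a btrfs volume with transparent zlib compression:
    mount -o compress /dev/sda2 /mnt/data

    # On 2.6.38+ you can pick LZO for lighter CPU cost:
    #   mount -o compress=lzo /dev/sda2 /mnt/data

    # Persistent form in /etc/fstab:
    # /dev/sda2  /mnt/data  btrfs  defaults,compress  0  2
    ```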

  3. #3
    Join Date
    Apr 2008
    Location
    Saskatchewan, Canada
    Posts
    462

    Default

    Quote Originally Posted by code933k View Post
    Compression is the real must-have as processors become more powerful while disks keep a fairly constant transfer rate. On-the-fly decompression tends to make things snappier, as Reiser4 showed some years ago.
    Transfer rate is irrelevant for most users these days: the primary limiting factor in hard-drive performance is seek time, which is why SSDs are much better for disk-intensive tasks which can live with the reduced lifespan. Even my 'Green' drive, which runs at 5400 rpm or thereabouts, gets >80 MB/second of sustained reads. Very few applications require sustained reads at such high rates, and those which do are probably working with compressed data (e.g. high-end video editing) and won't benefit from dynamic compression.

    Copy-on-write filesystems spread data around the disk more than traditional filesystems like ext4, so they're inevitably going to be slowed by the increased seek times that introduces. From what I remember Sun used to recommend a multi-gigabyte RAM cache for best performance of ZFS and I'd presume btrfs will be similar.

  4. #4
    Join Date
    Apr 2011
    Location
    Bogotá / Colombia
    Posts
    4

    Default

    Quote Originally Posted by movieman View Post
    Transfer rate is irrelevant for most users these days: the primary limiting factor on hard drive performance is seek time, which is why SSDs are much better for disk-intensive tasks which can live with the reduced lifespan
    I think you miss the whole point. Not that I disagree with you in broad terms, but I was talking about improving performance on traditional, affordable disks. Of course we could all go buy SSDs and stop worrying about filesystems. Isn't FAT32 doing great in the speed department for many of them anyway?

  5. #5
    Join Date
    Apr 2008
    Location
    Saskatchewan, Canada
    Posts
    462

    Default

    Quote Originally Posted by code933k View Post
    I think you miss the whole point. Not that I disagree with you in broad terms, but I was talking about improving performance on traditional, affordable disks.
    Yes. And built-in compression won't improve user-visible performance because very few users need a higher transfer rate than current disks provide and most of those users are processing compressed files anyway. Uncompressed HD editing might benefit, but then you're into huge RAID arrays to get decent performance in the first place; on-the-fly compression probably won't give you more than 2:1 with HD video in the best case.

    It's worth remembering that transfer rate increases automatically as you pack more data onto the same-sized disk, because it's (rotation speed × bytes per track). Seek time is the thing that doesn't improve much, because it's primarily driven by rotation speed, which has barely changed in the last decade.
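    A back-of-envelope version of that identity (the track capacity is an assumption; real drives vary by zone):

    ```shell
    # Sustained transfer rate ~ rotation speed x bytes per track.
    # Assumed numbers: 7200 rpm drive, ~1 MB per track (outer zones).
    awk 'BEGIN {
      rpm   = 7200
      track = 1000000                # bytes per track (assumption)
      rps   = rpm / 60               # tracks swept per second = 120
      printf "sequential rate ~ %.0f MB/s\n", rps * track / 1000000
    }'
    ```

    With those assumed numbers the drive sweeps 120 tracks per second, i.e. around 120 MB/s sequential, which is roughly what current drives deliver.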

  6. #6
    Join Date
    Jul 2010
    Posts
    49

    Default

    Quote Originally Posted by movieman View Post
    Seek time is the thing that doesn't improve much, because it's primarily driven by rotation speed, which has barely changed in the last decade.
    Packing more data onto fewer tracks implies fewer seeks. Also shorter effective seek times for the seeks that do occur. Seek time, today, is about equally divided between rotational latency, and head movement latency.

  7. #7
    Join Date
    Apr 2008
    Location
    Saskatchewan, Canada
    Posts
    462

    Default

    Quote Originally Posted by sbergman27 View Post
    Packing more data onto fewer tracks implies fewer seeks. Also shorter effective seek times for the seeks that do occur. Seek time, today, is about equally divided between rotational latency, and head movement latency.
    Seek times have barely changed, and tracks are so small that the odds of not having to seek when starting a new program are low; worse than that, even if the data is on the same track you still have to wait for the disk to rotate.

    The SSD I put in my netbook has a lower read throughput than the HD it replaced, but it halved the boot time because the disk spends most of its time seeking during the boot process and throughput is irrelevant when you can only perform about sixty seeks per second.
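    The "about sixty seeks per second" figure follows from simple arithmetic; a sketch with assumed drive parameters (a slower laptop drive lands nearer 60, a fast desktop drive nearer 100+):

    ```shell
    # Random access time = average seek + average rotational latency.
    # Assumed: 7200 rpm drive, 8.5 ms average seek time.
    awk 'BEGIN {
      rpm     = 7200
      seek_ms = 8.5                    # average seek (assumption)
      lat_ms  = 60000 / rpm / 2        # half a rotation, ~4.2 ms
      printf "random IOs/sec ~ %.0f\n", 1000 / (seek_ms + lat_ms)
    }'
    ```

    For these assumed numbers that works out to roughly 79 random operations per second; the exact figure depends entirely on the drive.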

  8. #8
    Join Date
    Jul 2010
    Posts
    49

    Default

    Quote Originally Posted by movieman View Post
    Seek times have barely changed and tracks are so small that the odds of not having to seek when starting a new program are small
    When starting a new program? We're talking about read/write requests. Seek times in the 1980s were on the order of 40-65 ms; today, around 8 ms. Serial throughput went from more like 1 MB/s in the 80s to maybe 100 MB/s now.

    Quote Originally Posted by movieman View Post
    worse than that, even if it's on the same track you still have to wait for the disk to rotate.
    But at that point we're talking about throughput, not seek time. Even remotely modern disks do 1-track seeks without having to wait for a full rotation. You can't get 100 MB/s out of a drive without being able to read/write more or less continuously. And they cache track reads: if the request is on the same track, it is returned immediately, because it has already been read.

    Quote Originally Posted by movieman View Post
    The SSD I put in my netbook has a lower read throughput than the HD it replaced, but it halved the boot time because the disk spends most of its time seeking during the boot process and throughput is irrelevant when you can only perform about sixty seeks per second.
    More like 120. But indeed, seek time is generally more important than throughput. We've added another tier to the storage hierarchy.

    Registers->L1->L2(->L3)->RAM->SSD->HDD

    (We only had registers, ram, and permanent storage when I started out. How messy things have gotten!)

    I keep my OS and small stuff on a cheap SSD. Big files go, via a symlink, to a large rotating-storage drive. Linux should really have provisions for an SSD cache by now, for the cases where that makes sense. Microsoft is ahead on that front.

    -Steve
