EXT4 Still Leads Over Btrfs File-System On Linux 3.8
Phoronix: EXT4 Still Leads Over Btrfs File-System On Linux 3.8
With the final release of the Linux 3.8 kernel coming in the very near future, here are file-system benchmarks of EXT4 and Btrfs on the Linux 3.8 development code compared to recent Linux kernel releases.
Would be interesting to know why BTRFS is slower than ext4. Maybe it is simply inherently slower due to some design decisions that won't go away.
Just to point out the obvious: This is not really what BTRFS is about, so it's all a bit meaningless.
I've not used it so far for anything important as it still looks immature to me, but BTRFS is similar in intent to ZFS. It's designed with data integrity at its heart. Ext4 could be 5 times faster than BTRFS, but if that speed comes at the expense of my data, then I'll choose the tortoise!
- Metadata and data checksums that verify that data read back is what was actually written,
- Automatic fixing of (meta)data if the redundancy is there,
- Snapshots for history,
- Manage large pools of storage units (typically physical disks),
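To see the checksumming and self-healing in action, you can poke at a throwaway loopback image (a rough sketch only — paths and sizes are arbitrary, and it needs root):

```shell
# Illustrative loopback setup; image path and mount point are made up.
truncate -s 1G /tmp/btrfs.img
mkfs.btrfs /tmp/btrfs.img
mount -o loop /tmp/btrfs.img /mnt

# Every normal read is verified against stored checksums; a scrub
# checks everything proactively and repairs from redundancy if present.
btrfs scrub start -B /mnt     # -B: wait for the scrub to finish
btrfs scrub status /mnt       # summary: bytes scrubbed, errors found
```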
Ext4 is "just" a file system.
- No builtin protection against silent data corruption,
- If you can't see that it's corrupted, you can't know to try to fix it; plus, redundancy would require md,
- Needs LVM for snapshots,
- Needs md for managing pools of disks,
Now if the test were for say 4+ disks in a RAID10, with Ext4 on top of LVM and md, then it might start becoming more apples to apples. It also goes without saying that setting up that BTRFS RAID10 would take much less effort and reading than md/LVM/ext4.
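To illustrate the effort gap, here is roughly what each stack takes to set up (a sketch with made-up device names, not a tuned configuration; both need root):

```shell
# Four-disk RAID10 with Btrfs: one command, with both data (-d) and
# metadata (-m) mirrored and striped.
mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# A rough md + LVM + ext4 equivalent: three layers to build and maintain.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0                     # md array becomes an LVM physical volume
vgcreate vg0 /dev/md0
lvcreate -l 100%FREE -n data vg0      # one logical volume over the whole array
mkfs.ext4 /dev/vg0/data
```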
Last edited by kiwi_kid_aka_bod; 02-14-2013 at 02:38 PM.
Right on, except that even with LVM, ext4 doesn't do everything that btrfs can do (i.e., always at least one good copy of data, even if it is older than current), and there is the write hole with RAID (though I do wonder if you could implement RAID10 such that writes happened at different times to each stripe on the mirrors).
Originally Posted by kiwi_kid_aka_bod
I've recently bought a new HDD (Seagate Momentus 750GB) and decided to go with btrfs for the root (though home and var are both still ext4). So far it's been amazing. The laptop has had numerous lockups (unrelated to btrfs) and other unclean shutdowns, and it has resumed each time with no need for the annoying journal checking.
I find the headline to be misleading... EXT4 can't be leading over Btrfs, as they address different needs. It's comparing apples to oranges. If they were artificially made to be similar (like enabling nodatacow and stopping consistency checking on Btrfs), it might make more sense. But as it is...
As kiwi_kid has pointed out, BTRFS has a lot of additional features that make it either more reliable or more helpful. EXT4 might be fast, but it is a relatively primitive FS - I like to think of it as a very, very well-polished FAT32 with journaling and support for drives 2TB and larger.
Originally Posted by blackout23
I'd be interested.....
I'd be interested in seeing EXT4 in 3.8 vs. EXT4 in 3.7 given that 3.8 is supposed to support inline data, which could lead to improved performance as well as lower disk usage.
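For anyone wanting to try it, the inline data feature is enabled at mkfs time (a sketch; the device name is a placeholder, and it assumes an e2fsprogs new enough to know the `inline_data` feature flag):

```shell
# /dev/sdX1 is a placeholder for a real partition; run as root.
mkfs.ext4 -O inline_data /dev/sdX1

# Small files are then stored directly inside the inode rather than in a
# separate data block, saving both space and a seek per small file.
tune2fs -l /dev/sdX1 | grep -i features   # should list inline_data
```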
The big showstopper for BTRFS, IMHO, is that it does not appear to support hot spares. That's a major issue for anyone wanting to use RAID for work.
Yawn. Please, can you produce something meaningful, like a comparison of the native ZFS port vs. BTRFS? I would actually be interested in that... This article is just another junk "RSS filler"...
And again with the SSD-only tests - how do I know what HDD performance looks like?
Most people will be using TB drives in the applications for BTRFS, and both EXT4 and BTRFS are optimised for seek times etc. on physical drives - I'd like to know how these metrics compare.
This comparison tells me nothing :-(
As everyone has pointed out already, a real test would be a simple RAID1 test of ext4 over MD versus BTRFS RAID1 with two disks. And perhaps a similar test with RAID5/6, since the BTRFS code is almost ready.
Another feature that I'm waiting on, and that I don't believe has been implemented yet, is automatic removal of failing drives from the array. I did a lot of testing on four failing drives in a BTRFS RAID1 array. BTRFS does an absolutely amazing job of recovering stripes when bad blocks are found. The problem is that there is no setting for kicking a disk from the array when too many errors are encountered (like MD has). If a disk is dying and a huge swath of sectors goes bad, BTRFS will just keep trying to read, relocating blocks as it goes. This had the effect of slowing my test server to an absolute crawl, rendering it unusable. The good news is that this feature should be trivial to implement, and probably will be once the RAID5/6 code matures a bit. I'd expect the hot spare code to also make its way in at that time.
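In the meantime, the manual equivalent of kicking a dying disk looks roughly like this (a sketch — device paths and the mount point are made up, and it needs root):

```shell
# Per-device read/write/corruption error counters; watch these climb
# on a failing disk since there is no automatic kick.
btrfs device stats /mnt

# If /dev/sdc's counters are climbing, remove it by hand; remaining
# redundancy is used to rebuild its data onto the other devices.
btrfs device delete /dev/sdc /mnt

# ...or add a spare first, rebalance to spread data onto it, then
# drop the failing device.
btrfs device add /dev/sdf /mnt
btrfs balance start /mnt
```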
Originally Posted by Shaman666
Looking forward to it!