Linux 3.8 File-System Testing From A SATA 3.0 HDD
Phoronix: Linux 3.8 File-System Testing From A SATA 3.0 HDD
Most often when carrying out any Linux file-system benchmarks -- or really, any benchmarks in general -- on Phoronix, solid-state storage is used. SSDs are just too great to pass up with their incredible performance. However, for those still using rotating media, here's a collection of file-system benchmarks from the new Linux 3.8 kernel when tested on a Serial ATA 3.0 Western Digital hard drive.
BTRFS is competitive
I fully expected to have a reason to bash BTRFS's poor performance, given how slowly it performed in previous benchmarks. But I'm pleasantly surprised to note that BTRFS beat EXT4 in several categories. Let's hope BTRFS gets even faster. Of course, this could all be because of an EXT4 regression...
Last I checked (which was a few weeks ago), BTRFS still has an outstanding data corruption bug, and it also still has a problem with database files (and by extension VM images; both involve many small random writes to a single massive file).
Originally Posted by stan
I do agree that BTRFS is competitive, and does very nicely in these tests and in the real world (despite the data corruption bug, I'm using it as the root and home FS on my home server because it was the only machine I had to test it out on). But those two bugs really do need to get fixed, just for peace of mind.
Last edited by Ericg; 02-27-2013 at 01:25 PM.
Do you have a pointer to that? From the mailing list a few issues crop up now and then, but nothing appears particularly popular or serious. (And other filesystems have bugs too.)
Originally Posted by Ericg
What problem? By default those kinds of files are a worst case for a copy-on-write filesystem, as they end up highly fragmented. There is an autodefrag mount option which addresses that.
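As a minimal sketch, enabling it is just a mount option; the device UUID and mountpoint below are placeholders, not anything from the article:

```shell
# /etc/fstab -- hypothetical entry enabling Btrfs background defragmentation.
# autodefrag detects small random writes and queues the affected ranges
# for defragmentation, which helps the database/VM-image workload case.
UUID=0000-0000  /srv/data  btrfs  defaults,autodefrag  0  0
```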
Originally Posted by Ericg
You can also disable copy-on-write as a mount option affecting the whole filesystem (in which case you may as well use a different filesystem), but the neat thing is you can disable it on a file-by-file basis with the chattr command. You can also chattr a directory, in which case it will then apply to all newly created files in that directory too.
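A sketch of that, assuming a Btrfs filesystem; the paths are hypothetical, and note that +C only takes effect on empty or newly created files, so it has to be set before any data is written:

```shell
# Disable copy-on-write (NOCOW) for a single file -- create it empty first:
touch /srv/vm/disk.img
chattr +C /srv/vm/disk.img

# Or mark a directory so all files created inside it inherit NOCOW:
mkdir -p /srv/vm/images
chattr +C /srv/vm/images

# Verify: the 'C' attribute should appear in the listing.
lsattr /srv/vm/disk.img
```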
Yep, HDD tests are important
I use Ext4, but...
"The EXT4 vs. Btrfs file-system comparison was more competitive with a Serial ATA 3.0 hard drive compared to recent SSD file-system comparisons."
Comparing to the SSD-only tests here, where on SSD Ext4 trounced BTRFS by a factor of four, on the HDD BTRFS was faster than Ext4 by 14 percent. That's an enormous difference, and it's VERY important to people using multi-TB file systems where SSD is not an option (I was just reading a discussion thread where one of the members was managing 9 TB through 130 TB systems) - especially as new research indicates spinning-disk storage is set to go through yet another leap in capacity in the next few years.
"In the end the results were mixed between which Linux file-system was faster depending upon the..."
Are you kidding? In these tests BTRFS beat Ext4 in 9 out of 14 tests and was only significantly behind Ext4 in the PostgreSQL test, which has historically been a problem case for COW file systems. It's a shame that the SSD tests didn't include the PostgreSQL test; it would have made the comparison a bit more interesting.
In short, BTRFS was significantly behind Ext4 in one out of 14 tests and beat Ext4 in 64% of the tests.
Great to see HDD benchmarks - they're truly useful, keep up the good work.
I don't quite understand the mention of a 'fast SATA 3.0 interface' when using a relatively slow (5400 RPM) WD Green disk. Those green disks are amazing in their own right, but a performance benchmark on them seems a bit odd.
A "slow" drive can still saturate the interface. The drives all have caches on them, so data can come from cache really fast, and they can provide fairly high sequential throughput. Also remember that the amount of area coming under the heads at the outside of the disk is still a lot (much less on the inside). The main difference between rotating disks and SSDs is that the former have large seek latencies and variable throughput, depending on caching algorithms/size as well as where on the disk the blocks are located (e.g., the middle reduces seek times, the outside gives the best throughput).
Why use a deadline scheduler on a HDD?
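For reference, a sketch of how to check and switch the I/O scheduler at runtime; the sysfs path is standard on kernels of this era, but the device name sda is a placeholder for whatever drive was actually tested:

```shell
# Show the available schedulers; the active one appears in brackets,
# e.g. "noop deadline [cfq]":
cat /sys/block/sda/queue/scheduler

# Switch this device to CFQ (requires root); takes effect immediately
# and lasts until reboot:
echo cfq > /sys/block/sda/queue/scheduler
```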
Why add JFS to the comparison when it doesn't issue barriers at all? Barriers can be a huge performance hit for the other FSs.
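For a more apples-to-apples run one could disable write barriers on the other filesystems too; these are standard mount options on kernels of this era, but the device and mountpoint below are placeholders, and running without barriers risks corruption on power loss, so this is benchmark-only:

```shell
# ext4 without write barriers (UNSAFE outside of benchmarking):
mount -o barrier=0 /dev/sdb1 /mnt/test

# Btrfs equivalent:
mount -o nobarrier /dev/sdb1 /mnt/test
```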