So, I totally agree that testing games was pointless... And you can't defend it by saying you wanted to test the "real life app experience". The way the filesystem influences experience is through map loading, app startup, etc. NOT through fps.
So, boot time please, gnome/kde launch time, file copying, "du -sh /", text search on files etc.
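A rough sketch of what such "real life" timings could look like (the seed directory and paths here are just examples; for honest numbers you'd also drop caches between runs with `sync; echo 3 > /proc/sys/vm/drop_caches` as root):

```shell
# Time some everyday file operations on whatever filesystem holds $TESTDIR.
TESTDIR=$(mktemp -d)
cp -r /etc/. "$TESTDIR/tree" 2>/dev/null || true    # seed with many small files
time du -sh "$TESTDIR"                              # metadata walk
time grep -r "localhost" "$TESTDIR/tree" >/dev/null 2>&1 || true  # text search
time tar cf "$TESTDIR/tree.tar" -C "$TESTDIR" tree  # bulk read + write
rm -rf "$TESTDIR"
```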
And it would definitely be more attractive to test more filesystems. JFS, Reiser4, and btrfs would add a lot of flavor to such an article. :)
Interesting results. I would, however, like to see more variety in the bonnie, IOzone, and IOmeter configurations. These, I feel, actually do represent "real world" tasks; for tasks that deal with heavy I/O, that is, which is why trying to simulate a variety of workloads through those tools is important. I can imagine a real fileserver streaming multiple video files while updating the locate database, for example.
But I do think that your philosophy of testing with the defaults (however the software under test is configured out-of-the-box) is a good one. It does not preclude tuning, but results from tuning should never be shown alone. That said, it would also be interesting to see if there is any tuning that would make a difference for these filesystems in different cases.
ext4 seems cool! Does it protect against silent corruption? Typically 20% of a modern hard drive is devoted to error correcting codes. Once in a while you will run into a problem that is not correctable, or, what is worse, not detectable. You don't even know that there was some error in your files.
And I've heard of a large ext3 filesystem being fsck'd; it took one week. Does ext4 suffer from the same problem?
EXT4 looks great in the first part of the test, which covers large files. It also confirms what I have noticed: EXT3 sucks compared to XFS with big files (4 to 15GB).
Looking forward to using EXT4 on my fileserver, which has lots of large MKV files :)
I'll not use it again, simply due to the amount of data I have lost on JFS. Same for ReiserFS.
If your hardware supports write barriers, XFS doesn't lose data or corrupt files. Just about any hardware bought in the last 2 years supports write barriers properly, so XFS should be fine.
The default Linux XFS tuning parameters are "wrong" in 2 ways:
* It makes the log section way too small
* It mounts with 2 log-buffers instead of the maximum 8.
Why do I mention those 2 things?
With more log buffers, XFS handles accessing lots of small files much better, and it effectively removes the "thrashing" some people mention. This is a mount-time parameter.
With a larger log it can handle deletes and changes much better, since XFS tries to queue up as many operations as possible at a time to minimise disk seeking. The default log is typically ~4MB, but enlarge it to 64MB and you can feel the performance difference. This is a filesystem-create parameter.
Another useful option is telling XFS how the underlying RAID is configured (if you have any); it scales extremely well, since it can keep all disks roughly equally busy. I was amazed at how well I got an XFS filesystem to perform on a 5-disk RAID 5. Truly recommended if you use RAID. This is a filesystem-create parameter.
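For reference, the tweaks above roughly map to these commands. The device names and the RAID geometry are placeholders; check `man mkfs.xfs` and `man mount` for your version before copying anything:

```shell
# Larger log section at filesystem-create time (~64MB instead of the default ~4MB).
# /dev/sdb1 is a placeholder device.
mkfs.xfs -l size=64m /dev/sdb1

# More in-memory log buffers at mount time (8 is the maximum mentioned above).
mount -o logbufs=8 /dev/sdb1 /mnt/data

# Tell XFS about the RAID layout at create time, e.g. a 5-disk RAID 5 with a
# 64KB chunk: stripe unit = chunk size, stripe width = 4 data disks.
mkfs.xfs -d su=64k,sw=4 -l size=64m /dev/md0
```

These require a real block device (and will destroy its contents in the mkfs case), so treat them as a sketch rather than a runnable script.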
Regarding power usage:
On my notebook I originally used JFS, because it apparently used the least CPU, but this didn't improve battery life at all. In fact it might have gotten better since I changed to XFS.
Yes, XFS uses significantly more CPU power, but it completes the DISK stuff much faster. So I reckon on most systems the DISK uses more power to seek than the extra CPU cycles used to avoid 1 seek.
Regarding XFS, I'm using it on partitions storing only big files, since it's excellent in that regard (as long as defragmentation is done now and then). I once tried to use it as the root filesystem, but that was a mistake, as the performance with small files was really appalling. Since someone mentioned some tweaks to make it perform better on small files, I went looking and found out that adding logbufs=8 (needs >=128MB RAM) to the mount options should make it perform better.
So it will be interesting to see if it really makes a difference..
Have you mounted all FS with barriers on or off?
Because ext3 turns them off by default, while xfs and reiserfs turn them on by default.
Barriers cost around 30% performance on ext3. If you didn't make the playing field even, the benchmark is not worth the electricity bill.
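To level that playing field, barriers can be forced on (or off) explicitly at mount time; something along these lines, with placeholder devices and the option names used by ext3/xfs/reiserfs in kernels of that era (check Documentation/filesystems in your kernel tree):

```shell
# ext3: barriers are OFF by default; turn them on explicitly.
mount -o barrier=1 /dev/sda1 /mnt/ext3

# XFS: barriers are ON by default; 'nobarrier' disables them.
mount -o nobarrier /dev/sda2 /mnt/xfs

# reiserfs: barriers on by default ('barrier=flush'); disable with:
mount -o barrier=none /dev/sda3 /mnt/reiser
```

Either all-on or all-off is fine for a benchmark, as long as it's the same for every filesystem under test.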