@topic: So fuse about halves the performance compared to native, and uses a ton of CPU. Did that surprise anyone?
Originally Posted by droidhacker
In addition, they make some SERIOUS claims against the viability of a fuse-based filesystem that are, quite frankly, FALSE. Yes, the zfs-fuse filesystem can be slow... on OLD KERNELS. The limitations that these problems are created by have been solved. zfs-fuse, when correctly configured, gives near-platter performance levels!
So say the rsnapshot run takes 30 minutes: does it guarantee that the last file to be transferred hasn't been altered?
What's your business case for a transfer that takes 30 minutes but whose source may have been altered since you started? If you're looking to back up something like a transactional database, making copies of open files is not the way to go.
While it's interesting to see how the different FSes perform, there is so much more to it than speed, imho.
Like the fact that ext4 will lose your data (it has done so, and no one will trust it for another 5 years). And btrfs is still a bit raw but has potential; it still needs a few years of enterprise usage to be considered trustworthy.
It's amazing that Linux has so many filesystems to choose from, but not one really good choice.
How about this test for a more "real world" example:
Given /some/dir to be backed up at regular intervals, how much work is involved to do that for the different FSes? To spice things up, the backup has to be of the state of that dir at exactly 1pm.
ext4 + lvm2 on top of your RAID configuration of choice and you are done, sir.
And this way also protects you from screw-ups of the filesystem itself.
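For anyone who hasn't done this before, the lvm2 approach looks roughly like the sketch below. It is not a tested recipe: the volume group name (vg0), logical volume name (data), mount points, snapshot size, and backup destination are all assumptions you would replace with your own layout, and it needs root plus enough free extents in the VG for the snapshot's copy-on-write space.

```shell
#!/bin/sh
# Point-in-time backup via an LVM snapshot -- a sketch, assuming a volume
# group "vg0" with a logical volume "data" (mounted at /some/dir) and at
# least 1G of free extents in vg0 for the snapshot's copy-on-write space.
set -eu

snapshot_backup() {
    # 1. Freeze a point-in-time view of the volume (run this at 13:00).
    lvcreate --snapshot --size 1G --name data_snap /dev/vg0/data

    # 2. Mount the frozen view read-only and copy it off at leisure;
    #    the live volume can keep changing without affecting the copy.
    mkdir -p /mnt/data_snap
    mount -o ro /dev/vg0/data_snap /mnt/data_snap
    rsync -a /mnt/data_snap/ /backup/some-dir-1pm/

    # 3. Tear down: the snapshot is disposable once the copy is done.
    umount /mnt/data_snap
    lvremove -f /dev/vg0/data_snap
}

# Only run when the assumed volume actually exists (needs root either way).
if [ -e /dev/vg0/data ]; then
    snapshot_backup
else
    echo "vg0/data not present; the commands above are a sketch only"
fi
```

A cron entry along the lines of `0 13 * * * /usr/local/sbin/snapshot_backup.sh` would pin the backup to 1pm: the snapshot itself is created in a fraction of a second, so a 30-minute rsync no longer races against writers the way a plain copy does.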
Ah, I see. For some reason I was given the impression that you can use FUSE with pretty much any FS and never bothered to verify (I don't exactly have any use for it). It just seemed like a quick way to "level the playing field".
Michael, thanks for the tests. While I still don't think these are really "benchmarks", they certainly provide interesting real-world data, which is what we want, after all. Very good job overall; it must have taken significant effort to get these tests to run as well as they did.
I'll echo others' concerns that the tests are still being run on a single disk configuration, meaning that it is probably not informative for those who are seriously considering btrfs or zfs for server use. But for desktop users, these tests are indeed meaningful.
I like seeing ext4 being the performance leader almost always, and this is a good justification for using it on desktops. The filesystem-related data-loss rates on ext4 are low enough these days on 2.6.34+ that most desktop users can use it and get the performance benefit. Hopefully said desktop users don't keep any really important data on their computer without backing it up somewhere, like their email or a thumb drive -- 95%+ of desktop computers don't run a redundant RAID array, so you are always vulnerable to hardware failure, let alone software failure. So backup, backup, backup, and then use your awesome ext4 performance to get your work done.
I do wish ext4 were COW and supported snapshots, but I have a feeling that would also kill some of the places where its performance excels. You can't have it all. Or, who knows, maybe Ted will come out with ext5 that combines all the advantages of ext4 with COW and snapshotting....