You got it upside down. The concept of block device caches is well known and well documented throughout the (for example POSIX) API. The contract for write() is that it will SOMEHOW note that this and that should go here and there, and all it guarantees is that a subsequent read() returns that data; the fsync() specification, on the other hand, says that it will WAIT until the data, the metadata and the directory entry are reported by the DEVICE to have been written to stable storage. If your OS or device behaves differently then it's BROKEN and all bets are off.

And if you're running ext3, that only really matters if the writes were reordered by the drive. If you're running a file system like ext4 which believes it can write random data to random places at random times, well, you're toast.
How does sync magically avoid data loss from the disk cache on a power failure? The only thing it guarantees is that the filesystem tries to write the data to the disk; there's no guarantee that it actually gets there if the system crashes while the sync is in progress. And if the filesystem writes in a random order, there's no guarantee that whatever part of the file does get to disk before a crash will be valid.
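To make the contract being argued about concrete, here is a minimal sketch of the write()/fsync() split in POSIX C. The function name and path are illustrative, not from either post; the comments state only what the two calls actually promise.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write `data` to `path` and wait until the device reports it on
 * stable storage. Returns 0 on success, -1 on failure. */
int durable_write(const char *path, const char *data) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;

    /* write(): only guarantees a subsequent read() sees the data;
     * after this returns, the bytes may still sit in the page cache. */
    if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
        close(fd);
        return -1;
    }

    /* fsync(): must not return until the data and metadata are
     * reported written to stable storage. A drive cache that lies
     * about this is exactly what write barriers exist to handle. */
    if (fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```

An application that skips the fsync() step is relying on the filesystem's write-out order, which is the behavior the thread is disputing.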
Seriously, you're demanding that programmers return to the stone age of computing where they had to worry all the time about what the hardware was doing underneath them; you might as well demand they make low-level BIOS calls to write files to disk or write their own raw I/O routines and interrupt handlers to read them back.
This is what I expect the programmers to acknowledge and work with, nothing more, nothing less. It's an abstraction that shields you from the actual implementation, whether the file system writes to disk at once or the data takes a round trip through the solar system.
However, the concept of EXT3 "hoping that data won't be reordered" is the exact opposite: you're ASSUMING certain geometry and behavior of the drives that may or may not hold. EXT4 actually steps back from this, treating the disk as a random-write device whose details are unknown.
Oh yes, that's why barriers were initially OFF for ext3 and that's why some distros (Ubuntu) maintain that tradition even after the default was changed following a lengthy debate akin to the one we are having.

No, ext3 _IS_ more reliable, at least by default. That is a simple fact: the default configuration for ext3 on pretty much all distributions is set for reliability over performance, which is what the vast majority of users want for a general purpose filesystem.
They are told "leave your illusions and welcome to the real world". And in the spirit of freedom they can always revert to their old ways.

So now people are being told 'dump ext3, which reliably stores your data in 99.999% of cases, and replace it with ext4, which will happily corrupt it if your application doesn't use a transactional database model'. And you're surprised that people aren't rushing towards the brave new future of random data loss or lousy performance?
In other words, "buy a UPS and back up your data, morons".
You've reversed cause and consequence. Firefuck not syncing its bookmarks is the result of the broken filesystem the majority used. Again, an abstraction that leaked: if FF adhered to the standards, people would whine that their games run slow, because ext3 would also sync stuff from the game.

A general purpose filesystem exists to reliably store user data on the disk. If a supposed general purpose filesystem deletes my bookmarks when a game crashes, then the filesystem is broken.
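The "transactional" behavior being demanded of applications here is usually the write-to-temp, fsync, rename pattern. A hedged sketch, assuming POSIX semantics (the function name and paths are illustrative): rename() atomically replaces the target, so a crash leaves either the old or the new contents, never a truncated file.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Replace `path` with `data` so that a crash leaves either the old
 * or the new contents, never a zero-length or half-written file. */
int atomic_replace(const char *path, const char *data) {
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;

    /* The new data must reach stable storage BEFORE the rename,
     * otherwise the rename can land on disk first and a crash
     * leaves an empty file under the real name. */
    if (write(fd, data, strlen(data)) != (ssize_t)strlen(data) ||
        fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    /* rename() atomically switches the name to the new contents. */
    if (rename(tmp, path) != 0) {
        unlink(tmp);
        return -1;
    }
    return 0;
}
```

This is the sort of discipline ext4's delayed allocation expects from every application that rewrites a file in place, which is the crux of the disagreement above.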
Home dirs are general purpose? C'mon.

That's not to say that such a filesystem doesn't have other uses where reliable data storage is less important than performance, but it certainly should not be pushed for general purpose use, like storing user home directories.
I don't know yet. Enlighten me.

Or is ZFS for 'sissies' too?