
Thread: Ted Ts'o: EXT4 Within Striking Distance Of XFS

  1. #61
    Join Date
    Oct 2007
    Location
    Sweden
    Posts
    174

    Default

    Quote Originally Posted by Jimbo View Post
    Attacking me makes you more juvenile.
    I never attacked you. It was you who, unprovoked, used a condescending tone while not quite being correct. That is something I remember doing when I was a teenager. Something I never did, though, was repeat the same argument with no regard for the other party's arguments.

    Please show me those chipsets, current or old, that checksum data on its way from memory to the storage controller, and I will admit that part is safe.
    I know the data is checksummed on the drive; you don't need to repeat yourself.

    Quote Originally Posted by Jimbo View Post
    I repeat, 1234567890 is not converted into 2234567890 due to errors on your hard disk, and this is not happening all the time as kebabbert says (this was my main reply), and even on memory errors your hardware controller and your OS protect you. Yeah, silent errors exist, but it is not easy to see them.

    You are guessing that simple errors lead to data corruption and this is not true!!
    I don't guess. I know. It happened to me due to a firmware or hardware issue (confirmed once), and I don't have a huge storage cluster.
    People touch their computers' internal components all the time, potentially inducing hidden electrostatic damage. Memory corruption happened to me as well; I don't know whether it caused data corruption in storage (I didn't have checksums of all the files on my drive at the time), but it would be trivial for that to happen.

    I would expect ZFS and data checksumming to actually be of more use to home users, who run on cheap commodity hardware (often self-assembled) with little error correction and detection (yes, disk drives do perform error correction, even the cheap ones, but none of the other components do) and with buggy firmware.
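    The idea behind application-level checksumming is easy to sketch. The toy below uses FNV-1a (ZFS itself uses stronger checksums such as fletcher4 or SHA-256; FNV-1a is just a simple stand-in): compute a checksum when a block is written, store it separately, and compare on every read -- a silently flipped bit then becomes detectable even when every layer below reported success.

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* FNV-1a 64-bit hash: a simple stand-in for the stronger checksums
     * (fletcher4, SHA-256) that ZFS actually uses. */
    static uint64_t fnv1a(const unsigned char *data, size_t len)
    {
        uint64_t h = 14695981039346656037ULL;  /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= data[i];
            h *= 1099511628211ULL;             /* FNV prime */
        }
        return h;
    }

    int main(void)
    {
        unsigned char block[512];
        memset(block, 0x42, sizeof block);

        uint64_t stored = fnv1a(block, sizeof block);  /* checksum taken at write time */

        block[100] ^= 0x01;  /* simulate one silently flipped bit in the I/O path */

        if (fnv1a(block, sizeof block) != stored)
            puts("corruption detected");
        else
            puts("block verified");
        return 0;
    }
    ```

    The point is end-to-end coverage: because the checksum is computed above the controller, bus, and drive, it catches corruption introduced by any of them.
    
    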

  2. #62
    Join Date
    Apr 2008
    Location
    Saskatchewan, Canada
    Posts
    462

    Default

    Quote Originally Posted by misiu_mp View Post
    I would expect ZFS and data checksumming to actually be of more use to home users, who run on cheap commodity hardware (often self-assembled) with little error correction and detection (yes, disk drives do perform error correction, even the cheap ones, but none of the other components do) and with buggy firmware.
    Data is checksummed over the ATA bus and to and from the disk, so the odds of a bit error occurring there and not being flagged are tiny. So it would have to happen in the memory or disk controller hardware if it was going to happen at all.

    I read an interesting article about memory errors a while back; I think it was either on the ZFS mailing lists or linked from there. From what I remember, they found there were basically two types of memory: some DIMMs would give essentially no errors, the others would give lots of errors, and there was nothing much in between. So for a robust system, the best approach was to throw out any DIMM that showed errors until you ended up with a set that didn't.

    In twenty years I can think of only one issue we put down to memory corruption: a script file we'd read from disk had an error that went away after we rebooted. Presumably it got corrupted in the OS disk cache, and reloading it from disk resolved that.

  3. #63
    Join Date
    Jan 2008
    Posts
    772

    Default

    Quote Originally Posted by movieman View Post
    I read an interesting article about memory errors a while back; I think it was either on the ZFS mailing lists or linked from there. From what I remember, they found there were basically two types of memory: some DIMMs would give essentially no errors, the others would give lots of errors, and there was nothing much in between. So for a robust system, the best approach was to throw out any DIMM that showed errors until you ended up with a set that didn't.
    That approach would make sense if you'd somehow already eliminated the mainboard/chipset as the source of the corruption (and yes, that does happen).

  4. #64
    Join Date
    Oct 2007
    Location
    Sweden
    Posts
    174

    Default

    I had a 512 MB stick with about 10 MB of bad blocks in it. The system did boot and run; applications crashed randomly and I got the occasional kernel panic. I could still use it thanks to the BadRAM patch by Rick van Rein.
    Another case was a laptop with soldered-on memory that showed just a few bad addresses. I couldn't use the BadRAM patch there because the addresses were too low, outside the ordinary user-accessible memory, so I assigned that region to the built-in graphics. I got a few display glitches every now and then, but it worked well.
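    The basic idea behind the scans that find such bad blocks can be sketched in a few lines: write known bit patterns over a region, read them back, and count mismatches. Note this userspace version only sees memory the OS happens to hand the process; real tools (memtest86, the BadRAM tooling) run before the OS precisely so they can walk specific physical addresses.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Walk a buffer with alternating bit patterns and verify each write --
     * the core of a memtest-style scan. The volatile pointer keeps the
     * compiler from optimizing the write/read cycle away. */
    int main(void)
    {
        const size_t len = 1 << 20;                /* 1 MiB test region */
        unsigned char *mem = malloc(len);
        if (!mem) return 1;
        volatile unsigned char *buf = mem;

        const unsigned char patterns[] = { 0x55, 0xAA, 0x00, 0xFF };
        size_t errors = 0;

        for (size_t p = 0; p < sizeof patterns; p++) {
            for (size_t i = 0; i < len; i++)
                buf[i] = patterns[p];
            for (size_t i = 0; i < len; i++)
                if (buf[i] != patterns[p])
                    errors++;                      /* a stuck or flipped bit */
        }

        printf("%zu errors\n", errors);
        free(mem);
        return 0;
    }
    ```

    On healthy hardware this reports zero errors; a stuck bit like the ones described above shows up as a mismatch at a repeatable address.
    
    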

  5. #65
    Join Date
    Jul 2009
    Posts
    61

    Default

    Quote Originally Posted by energyman View Post
    file A is on disk.

    You want to rename it to B.

    You call rename(). A crash at the wrong moment and both are gone. Or there is a file A or B, but its contents? Gone. That is fucking braindead idiocy.
    With this part, I agree, though with less strong language than "fucking braindead" -- I cannot write filesystem code, so I respect those who can.

    Quote Originally Posted by energyman View Post
    POSIX is crap anyway (windows NT is posix compliant too... yeah..)
    Here I disagree. a) POSIX is certainly not crap. b) Windows NT is "POSIX-compliant" by returning E_NOTIMPL (not implemented) from every single POSIX function. Personally, I don't call that compliant, though technically it may be.

    Also, I am a stickler for specifications. If POSIX sucks and fsync() is a bad mechanism, then get a better standard. But ignoring the standard, doing things the non-sanctioned way, and then whining when data is lost is not an option. You FOLLOW THE STANDARD, and if you believe the standard is bad, you make a DIFFERENT one and follow that. Simply ignoring standards is braindead.
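    For reference, the standard-sanctioned way to replace a file without ever exposing an empty or half-written version is the classic write-temp, fsync, rename sequence. A minimal sketch (file names are placeholders; real code would also handle short writes and EINTR):

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Replace "app.conf" atomically, the POSIX way: write a temp file,
     * fsync it, rename it over the target, then fsync the directory. */
    int main(void)
    {
        const char *data = "new contents\n";

        int fd = open("app.conf.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) { perror("write"); return 1; }
        if (fsync(fd) != 0) { perror("fsync"); return 1; }  /* data durable BEFORE the rename */
        close(fd);

        /* rename() is atomic: readers see either the old or the new file, never a mix */
        if (rename("app.conf.tmp", "app.conf") != 0) { perror("rename"); return 1; }

        int dirfd = open(".", O_RDONLY);  /* make the rename itself durable */
        if (dirfd >= 0) { fsync(dirfd); close(dirfd); }

        puts("replaced atomically");
        return 0;
    }
    ```

    The complaint in the quoted post is exactly about applications that do the rename() but skip the fsync() steps, which POSIX never promised would be safe.
    
    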

    Quote Originally Posted by energyman View Post
    In reality data is sacrosanct. Nuking it is not an option. A FS nuking data is fucking broken by fucking design.
    Could you calm down so we can have a sensible debate? Emotions and logical argument do not mix. Anyway, I agree that data takes priority above all.

  6. #66
    Join Date
    Jul 2009
    Posts
    61

    Default

    Quote Originally Posted by movieman View Post
    Fsync() is evil.

    I mean, really, truly, horribly, satanically evil.

    Flushing data on my laptop forces the disk to spin up just to write a file that I probably don't care that much about, thereby wasting my battery power. Flushing data on an ecommerce database server, on the other hand, is probably vital to ensure that databases are kept up to date.

    But that's a system configuration choice, and should not ever be something that applications randomly decide to do. If I don't care that I might lose the last five minutes of files when I crash, then Firefox shouldn't be calling fsync() every time I visit a new web page. But it does because there are so many crappy filesystems which will corrupt your files if you crash before everything has been flushed to disk.

    Having every application decide whether to force a write to the disk and waste my battery power is simply braindead. Filesystems should behave in a sensible manner so that we don't need this kind of hackery to make them work the way they should have worked in the first place. If I have file A on disk and I edit it and write it out, then the filesystem should be able to ensure that when I read it back after a reboot it will either be file A or file B and not an empty file or some corrupted mixture of the two. Anything else is unacceptable in a general use filesystem (special use filesystems may well prefer speed to consistency and be able to handle corruption issues).
    As I just said to energyman/Jade, I agree. HOWEVER, stick to the damn standards. POSIX may suck; fsync() may suck. But then either make a new standard and a stack that respects it (I have not yet heard a good, implementable proposal for guaranteeing data safety other than a full fsync()) and follow that, or stick to POSIX. Ignoring bits of a standard, and then insisting the rest of the software stack respect how you've deviated from it, is a good way to cause issues down the road, like the ones here.

  7. #67
    Join Date
    Jul 2009
    Posts
    61

    Default

    Addendum to my reply to energyman/Jade (damn 10-minute edit lockout):

    Oh, and by adding 'nodelalloc' to ext4's mount options, you can force it to play nice with non-conforming apps. I know you like Reiser4 for other reasons too, and I don't claim those are invalid. Personally, I would rather take ext4's large-userbase support, and the fact that every recent filesystem tool (recovery, resizing/moving, etc.) supports ext4, than use an out-of-tree filesystem that takes more effort to get working with the other distros I play with and has less tool support. But that is everyone's own decision, and you need to calm down, respect it, and stop calling everything that is not your opinion bad.
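    Concretely, 'nodelalloc' goes in with the other mount options; a hypothetical /etc/fstab entry (device and mount point are placeholders) would look like:

    ```shell
    # /etc/fstab -- disable delayed allocation on an ext4 volume
    /dev/sda2  /home  ext4  defaults,nodelalloc  0  2

    # or apply it to a mounted filesystem without rebooting (needs root):
    # mount -o remount,nodelalloc /home
    ```

    The trade-off is that you give back some of the throughput delayed allocation buys, in exchange for data blocks being allocated (and thus flushable) at write time.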
