Errors are a fact of life.
That's just it -- different situations call for different tools. I read about people using ZFS for home media servers and the only thing I can think of is "overkill." Personally, I don't care if one bit gets flipped in a movie file. Seriously, one pixel, one color off, in one frame. I do believe that corruption happens more often than people think, but with GUI OSes, odds are the flipped bit just shifts one pixel in a Windows icon. There's just so much non-critical space on a hard drive these days.
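To put a rough number on that "non-critical space" argument, here's a back-of-envelope sketch. The drive size and the amount of genuinely critical data are hypothetical figures I picked for illustration, not anything from the thread:

```python
# Back-of-envelope: if most of a drive holds media and other replaceable
# data, a random bit flip is unlikely to hit anything you'd actually miss.
# Both figures below are hypothetical.

TOTAL_BYTES = 2 * 10**12       # a 2 TB drive
CRITICAL_BYTES = 20 * 10**9    # say 20 GB of OS files, documents, source code

p_critical = CRITICAL_BYTES / TOTAL_BYTES
print(f"Chance a random flip lands in critical data: {p_critical:.1%}")  # 1.0%
```

Under those assumptions, 99% of random flips land somewhere you'd never notice -- which is the whole point about home media servers.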
Now, on the production server side, I could see where my work doesn't want a bit flipped in the source code and they are willing to pay the performance penalty and associated costs for ECC memory, etc. But they are still stuck with NTFS, unless they want to triple IT spending and go with a SAN solution.
But 100% data integrity doesn't seem possible at a reasonable price point, if you look at CERN's data. Even with ZFS and ECC, other subsystems can induce errors. I suppose you could go the three-systems route and only trust results that two out of three systems agree on, but who would pay for that except for a few fringe cases? Certainly not my home media server.
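That two-out-of-three idea is basically majority voting (what the fault-tolerance folks call triple modular redundancy). A minimal sketch of the voting step, assuming you already have three independent reads of the same data:

```python
from collections import Counter

def majority_read(replicas):
    """Return the value at least two of the three replicas agree on,
    or None if all three disagree (unrecoverable)."""
    value, count = Counter(replicas).most_common(1)[0]
    return value if count >= 2 else None

# One replica returns corrupted data; the other two outvote it.
print(majority_read([b"payload", b"payl0ad", b"payload"]))  # b'payload'
print(majority_read([b"a", b"b", b"c"]))                    # None (all disagree)
```

Simple enough in code -- the expensive part is buying and running three independent systems, which is exactly why nobody does it outside those fringe cases.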