> Insignificant corner case. What would I gain with a couple of bigger disks in redundant RAID, outside of trivial cases? If I'm running RAID-5, I need to know that my data is distributed on all drives with parity, so no matter where it is:
>
> - one faulty drive won't kill me
> - I can count on the transfer performance of RAID-5 with N drives
>
> Extra checksums are nice, but as an orthogonal addition to RAID parity, not as a replacement.

I'll give you another extreme example: let's say you have a 12-disk x 1 TB RAID array. If you wanted to upgrade to 2 TB disks, with btrfs you could just remove a few disks (space permitting) and add a few bigger ones. This would be time-consuming, but it's possible to upgrade an entire array like this without downtime. With MDADM it's possible but significantly more difficult, since the original disk sizes are stored in the metadata. It takes some... work, so it's usually just easier to build a new array and copy the data over.
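The disk-by-disk upgrade described above could be sketched roughly like this — device names and the mount point are made up for illustration, and every step needs root plus enough free space to absorb each removed disk:

```shell
# Rolling capacity upgrade on a mounted btrfs array (example paths).
btrfs device add /dev/sdm /mnt/array       # attach one new 2 TB disk
btrfs device delete /dev/sda /mnt/array    # migrates its data off, then drops it
# ...repeat the add/delete pair for each remaining 1 TB disk...
btrfs balance start /mnt/array             # redistribute data across the new disks
btrfs filesystem show /mnt/array           # confirm the new layout
```

The `device delete` step is what makes this work online: btrfs relocates that disk's chunks to the rest of the array before removing it, so the filesystem stays mounted throughout.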
The difference between traditional RAID5 and btrfs is that the checksumming is done at a per-file/per-metadata level instead of at a per-stripe level. There are a lot of articles online about the RAID5 write hole, and how checksummed file systems like btrfs can help avoid it.
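A minimal illustration of why this matters: parity alone can tell you a stripe is inconsistent, but a stored checksum tells you *which* block is bad. Here `sha256sum` stands in for btrfs's per-extent checksums, and the file names are invented for the demo:

```shell
# Simulate per-block checksum verification on one "data block".
workdir=$(mktemp -d)
cd "$workdir"

printf 'important data\n' > block.dat   # the data block
sha256sum block.dat > block.sha         # checksum stored separately

# Simulate silent corruption: overwrite the first byte of the block.
dd if=/dev/zero of=block.dat bs=1 count=1 conv=notrunc 2>/dev/null

# Verification pinpoints the corrupted block (and exits non-zero):
sha256sum -c block.sha || echo "corruption detected in block.dat"
```

With parity RAID alone, a parity mismatch after a torn write leaves you guessing which member holds the bad data; with a per-block checksum, the filesystem knows which copy failed and can rebuild it from the good ones.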
> This alone is reason enough to keep away from it as a solution that could replace RAID infrastructure. How could one seriously use such a solution without maintenance tools?

In my opinion, the main thing holding btrfs back from production use in the DC is the userspace tools. The btrfs utilities make working with arrays very simple, but there is not yet a lot of automation for dealing with removing bad disks from arrays, working with hot-spares, etc. I'm hoping this comes soon.
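To make the gap concrete, here is roughly what replacing a failed disk looks like by hand today — device paths and mount points are examples, everything needs root, and none of it is automated the way an mdadm hot-spare is:

```shell
# btrfs: rebuild onto a fresh disk; -r avoids reading the failing device.
btrfs replace start -r /dev/sdd /dev/sde /mnt/data
btrfs replace status /mnt/data

# If the disk is already dead, mount degraded and drop the missing device:
mount -o degraded /dev/sdb /mnt/data
btrfs device delete missing /mnt/data

# mdadm, by contrast, has long had spare handling built in:
mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
mdadm /dev/md0 --add /dev/sde1    # becomes a spare / rebuild target
```

An mdadm array with a spare already attached rebuilds on its own the moment a member fails; on btrfs, someone (or some script you write yourself) has to notice the failure and run the replace.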