It Turns Out The Btrfs RAID 5/6 Issue Isn't Completely Fixed

  • It Turns Out The Btrfs RAID 5/6 Issue Isn't Completely Fixed

    Phoronix: It Turns Out The Btrfs RAID 5/6 Issue Isn't Completely Fixed

    Earlier this week we reported that the Btrfs RAID5/RAID6 code had been fixed, or so it appeared. Now, however, the Btrfs developers have clarified that the situation isn't entirely resolved...


  • #2
    poor poor Facebook



    • #3
      Good to see that there are some attempts at good information here.



      • #4
        I am following the BTRFS mailing list and was actually planning to alert Michael about this. I'm glad you have the guts to fully acknowledge that a previous news story might have been a bit "enthusiastic" and that you are posting this so that users who don't read the btrfs wiki don't end up in a data-loss trap. Well done Michael! Keep up the good work.

        http://www.dirtcellar.net



        • #5
          Rock solid production setup = XFS on mdadm raid.

          I see what BTRFS is going for, but IMO it's going to be years before it's ready for prime time.



          • #6
            Originally posted by torsionbar28 View Post
            Rock solid production RAID5/6 setup = $production-grade-fs on mdadm raid. (where production-grade-fs is either ext4, btrfs or xfs)

            I see what BTRFS is going for with RAID5/6, but IMO it's going to be years before it's ready for prime time.
            fixed.



            • #7
              Originally posted by waxhead View Post
              I am following the BTRFS mailing list and was actually planning to alert Michael about this. I'm glad you have the guts to fully acknowledge that a previous news story might have been a bit "enthusiastic" and that you are posting this so that users who don't read the btrfs wiki don't end up in a data-loss trap. Well done Michael! Keep up the good work.
              It's not really a matter of trying to make the previous post "enthusiastic". Based upon my reading of that earlier email thread with the patch (as well as the reader tip that pointed me to it), I had thought it was fixed, without reading too deeply into it.
              Michael Larabel
              https://www.michaellarabel.com/



              • #8
                Originally posted by torsionbar28 View Post
                Rock solid production setup = XFS on mdadm raid.
                It's not nearly the same as a checksummed COW filesystem with volume management. If there's silent bit rot, how would the filesystem tell that something's wrong? And even if it could, how would it tell which volume has the correct data (provided the required redundancy is there)?

                So IMO: rock solid production setup = Btrfs with RAID1.
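The point here can be sketched in a few lines of Python. This is an illustrative toy (hypothetical `write_block`/`read_block` helpers, one in-memory block, CRC32 standing in for Btrfs's real per-block checksums): with a stored checksum, a two-copy RAID1 read can both detect a silently rotted mirror and decide which copy is the good one, which checksum-less mdadm RAID1 cannot do.

```python
# Toy model of checksummed RAID1 reads: detect bit rot AND pick the good copy.
import zlib

def write_block(data: bytes):
    # Checksum is stored separately from the data, as in Btrfs metadata.
    return {"mirrors": [bytearray(data), bytearray(data)],
            "csum": zlib.crc32(data)}

def read_block(block):
    """Return good data, repairing any rotted mirror from a verified copy."""
    for copy in block["mirrors"]:
        if zlib.crc32(bytes(copy)) == block["csum"]:
            # This copy matches the checksum; overwrite the other mirror(s).
            for other in block["mirrors"]:
                other[:] = copy
            return bytes(copy)
    raise IOError("all copies corrupt")

blk = write_block(b"important data")
blk["mirrors"][0][0] ^= 0xFF          # silent bit rot on mirror 0
assert read_block(blk) == b"important data"
assert bytes(blk["mirrors"][0]) == b"important data"  # bad mirror repaired
```

Without the stored checksum, both mirrors read back "successfully" and the filesystem has no way to prefer one over the other.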



                • #9
                  Originally posted by Michael View Post

                  It's not really a matter of trying to make the previous post "enthusiastic", based upon my reading of that earlier email thread with the patch (as well as the reader who tipped me off to that patch), I had thought it was fixed, without reading too deep into it.
                  Yeah, I did not mean that you purposely made it that way. What I meant was that when you are eager to get the news out, you may miss what is written on the mailing list a bit later (and quite frankly, who can blame you for that). Of course there is a balance between getting the news out fast and getting it 100% accurate. Heck, I read the news, read the mailing list, and was also quite convinced that RAID56 took a huge step forward (which it did). It was not until a bit later that some other things surfaced. So in all honesty, I was perhaps a bit too eager to comment and could have chosen another word.

                  http://www.dirtcellar.net



                  • #10
                    Originally posted by kobblestown View Post

                    If there's silent bit rot, how would the filesystem tell that something's wrong? And even if it could, how would it tell which volume has the correct data (provided the required redundancy is there)?
                    If you were using RAID 6 mdadm, shouldn't it be possible to determine which disk had the corrupt sector by systematically comparing both sets of parity, recomputing the parity with each disk left out one at a time?
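Something along those lines does work with RAID 6's two parities, and you don't even need the leave-one-out loop: because P is a plain XOR and Q is a Reed-Solomon sum over GF(2^8), a single-disk error can be located directly from the two parity mismatches. A minimal sketch (assumptions: one byte per "disk", function names are mine; real md does this per stripe with the same math):

```python
# Sketch: locate a single corrupted data disk from RAID 6's P and Q parities.
# P = XOR of data; Q = sum of g^i * D_i over GF(2^8), as in Linux md RAID 6.

# Build GF(2^8) exp/log tables (generator 2, polynomial 0x11d).
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def parities(data):
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(EXP[i], d)       # Q = sum g^i * D_i
    return p, q

def locate_bad_disk(data, p, q):
    """Index of the single corrupted data disk, or None if parity matches."""
    p2, q2 = parities(data)
    dp, dq = p ^ p2, q ^ q2
    if dp == 0 and dq == 0:
        return None
    # Single data-disk error at index i implies dq = g^i * dp, so i = log(dq/dp).
    return (LOG[dq] - LOG[dp]) % 255

disks = [0x12, 0x34, 0x56, 0x78]
p, q = parities(disks)
disks[2] ^= 0xFF                      # silent corruption on disk 2
print(locate_bad_disk(disks, p, q))   # -> 2
```

With a single parity (RAID 5) you only get dp, which tells you *that* something is wrong but not *where*; the second, differently-weighted parity is what pins down the disk.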

