
Btrfs Gets Cleaned Up & Code Refactoring For Linux 5.3


  • Btrfs Gets Cleaned Up & Code Refactoring For Linux 5.3

    Phoronix: Btrfs Gets Cleaned Up & Code Refactoring For Linux 5.3

    David Sterba sent in the Btrfs file-system updates on Monday for the Linux 5.3 kernel...


  • #2

    "Btrfs is also now tracking chunks that have been TRIM'ed and unchanged since last month so they are skipped from being TRIM'ed again."

    This is wrong... It's supposed to be last mount

    http://www.dirtcellar.net



    • #3
      Originally posted by waxhead
      "Btrfs is also now tracking chunks that have been TRIM'ed and unchanged since last month so they are skipped from being TRIM'ed again."

      This is wrong... It's supposed to be last mount
      Yep fixed, thanks.
      Michael Larabel
      https://www.michaellarabel.com/



      • #4
        Originally posted by Michael

        Yep fixed, thanks.
        No worries... Actually my phone "corrected" mount to month and I accidentally posted the same "typo", if you will... I'm surprised you caught my fix that fast! Keep up the good work!

        http://www.dirtcellar.net



        • #5
          Glad we're seeing more cleanups... But unless maybe you're using RAID5/6 (what's the state of those nowadays?), BTRFS has been pretty stable for me since I started using it back in 2012, if not earlier, on amd64...

          I've even been using it on my Talos II for a year now without issue, and now I'm using it on my BlackBird for close to a month without issue. In both cases, I'm using a little endian distribution.

          With my forays into POWER, I have finally found a really annoying issue with BTRFS... If you've created a BTRFS filesystem with one page size, it will not mount with another. For example, if you create a filesystem while running a kernel with a 64K page size on POWER, it will not mount under a kernel built with a 4K page size, and vice versa. This isn't a POWER-specific issue; it can occur on arm64, etc. as well.
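
          A quick way to spot that mismatch before a failed mount is to compare the filesystem's recorded sector size with the running kernel's page size. This is just a sketch of mine, not part of the original post; it assumes btrfs-progs is installed and that `btrfs inspect-internal dump-super` prints a "sectorsize" line:

#!/usr/bin/env python3
# Sketch: warn when a btrfs filesystem's sector size does not match the
# running kernel's page size (the 64K-vs-4K mount problem described above).
# Assumes btrfs-progs is installed and that `btrfs inspect-internal
# dump-super` reports a "sectorsize" line.
import os
import re
import subprocess
import sys

def fs_sectorsize(device: str) -> int:
    """Read the sector size recorded in the btrfs superblock via btrfs-progs."""
    out = subprocess.run(
        ["btrfs", "inspect-internal", "dump-super", device],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"^sectorsize\s+(\d+)", out, re.MULTILINE)
    if not match:
        raise RuntimeError(f"no sectorsize field found for {device}")
    return int(match.group(1))

if __name__ == "__main__":
    dev = sys.argv[1]                    # e.g. /dev/sda2 (hypothetical device)
    page = os.sysconf("SC_PAGESIZE")     # page size of the running kernel
    sector = fs_sectorsize(dev)
    if sector == page:
        print(f"{dev}: sectorsize {sector} matches the page size; mount should work")
    else:
        print(f"{dev}: sectorsize {sector} != page size {page}; "
              f"this kernel will most likely refuse to mount it")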



          • #6
            Hopefully in 5.4 we will see the RAID1C3 and RAID1C4 patches by David Sterba integrated, for RAID1 with 3 and 4 copies. That would be an awesome alternative to raid 5/6.



            • #7
              Originally posted by hiryu
              Glad we're seeing more cleanups... But unless maybe you're using RAID5/6 (what's the state of those nowadays?), BTRFS has been pretty stable for me since I started using it back in 2012, if not earlier, on amd64...

              I've even been using it on my Talos II for a year now without issue, and now I'm using it on my BlackBird for close to a month without issue. In both cases, I'm using a little endian distribution.

              With my forays into POWER, I have finally found a really annoying issue with BTRFS... If you've created a BTRFS filesystem with one page size, it will not mount with another. For example, if you create a filesystem while running a kernel with a 64K page size on POWER, it will not mount under a kernel built with a 4K page size, and vice versa. This isn't a POWER-specific issue; it can occur on arm64, etc. as well.
              The raid5/6 bit is still vulnerable to the write hole issue. This seems to be a difficult problem to solve. You can sort of get around it by selecting raid1/10 for your metadata, and while you may lose data you should not lose your filesystem. However, raid1/10 only protects against one missing storage device, so raid6 is almost pointless in this case. Patches for n-way raid1 were posted a while ago, so once raid1 with one or two extra copies is merged, raid6 is suddenly a bit more useful, and that may help solve the write hole as well. That being said, raid5/6 is not as well tested as raid1/10, so personally I would wait a bit before using it. Oh, and by the way, the use of the name raid is a mistake when it comes to btrfs in my opinion - it is quite similar to regular raid in principle, but it's not the same, and that should have been addressed a long time ago to avoid confusing people who don't know how it works. Most "horror" stories on the mailing list are related to exotic setups or people not understanding the terminology in btrfs terms.
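
              To make the write hole a bit more concrete, here is a toy illustration of my own (not btrfs code): raid5 parity is an XOR across the data blocks of a stripe, so if a crash lands between writing new data and writing the matching parity, a later device failure gets rebuilt from stale parity and returns data that never existed:

#!/usr/bin/env python3
# Toy model of the raid5 write hole: a partial stripe update (data written,
# parity not yet updated) plus a later disk failure reconstructs wrong data.
from functools import reduce

def parity(blocks):
    """XOR parity over equal-sized byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def reconstruct(surviving_blocks, parity_block):
    """Rebuild a missing block from the surviving blocks and the parity."""
    return parity(surviving_blocks + [parity_block])

# A 3-device stripe: two data blocks plus one parity block.
d0, d1 = b"AAAA", b"BBBB"
p = parity([d0, d1])

# Crash scenario: d0 is rewritten on disk, but the parity update never lands.
d0_new = b"XXXX"
stale_parity = p                      # still describes the old d0/d1 stripe

# Later the device holding d1 dies; rebuilding it from d0_new and the stale
# parity yields data that never existed, silently corrupting the stripe.
d1_rebuilt = reconstruct([d0_new], stale_parity)
print("expected d1:     ", d1)
print("reconstructed d1:", d1_rebuilt)   # differs -> the write hole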

              http://www.dirtcellar.net



              • #8
                Originally posted by waxhead

                The raid5/6 bit is still vulnerable to the write hole issue. This seems to be a difficult problem to solve. You can sort of get around it by selecting raid1/10 for your metadata, and while you may lose data you should not lose your filesystem. However, raid1/10 only protects against one missing storage device, so raid6 is almost pointless in this case. Patches for n-way raid1 were posted a while ago, so once raid1 with one or two extra copies is merged, raid6 is suddenly a bit more useful, and that may help solve the write hole as well. That being said, raid5/6 is not as well tested as raid1/10, so personally I would wait a bit before using it. Oh, and by the way, the use of the name raid is a mistake when it comes to btrfs in my opinion - it is quite similar to regular raid in principle, but it's not the same, and that should have been addressed a long time ago to avoid confusing people who don't know how it works. Most "horror" stories on the mailing list are related to exotic setups or people not understanding the terminology in btrfs terms.
                I think you mean you can't lose more than one drive with RAID1. RAID10 you should be able to lose 2, though I suppose you can't lose more than one drive in a single mirror. Or is there something different about BTRFS RAID10?



                • #9
                  Originally posted by hiryu

                  I think you mean you can't lose more than one drive with RAID1. RAID10 you should be able to lose 2, though I suppose you can't lose more than one drive in a single mirror. Or is there something different about BTRFS RAID10?
                  My point exactly. You can only lose one drive. Look at the wiki!

                  edit (a bit more details):
                  Regular raid10 can sometimes survive losing two drives, but you still have a 50% chance of losing your array if that happens. In btrfs you have a near 100% chance of losing your array if you lose more than one drive, since the data is only present as raid1 copies and the stripes are distributed across any of the devices in the array.
                  Last edited by waxhead; 16 July 2019, 03:37 PM.
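
                  A rough simulation of my own (not from the wiki) illustrates this; it simplifies btrfs raid10 down to "every chunk's two copies land on an arbitrary pair of devices", which is enough to show why two failures are almost always fatal there while classic fixed mirror pairs often survive:

#!/usr/bin/env python3
# Rough simulation of two-drive failures: classic raid10 with fixed mirror
# pairs vs. a btrfs-like scheme where every chunk is mirrored across an
# arbitrary pair of devices (a simplification of the real allocator).
import itertools
import random

def btrfs_like_survives(n_devices, n_chunks, failed):
    """Lost if any chunk has both of its copies on failed devices."""
    for _ in range(n_chunks):
        pair = set(random.sample(range(n_devices), 2))
        if pair <= failed:
            return False
    return True

def btrfs_like_loss_probability(n_devices=6, n_chunks=1000, trials=2000):
    lost = 0
    for _ in range(trials):
        failed = set(random.sample(range(n_devices), 2))   # two random failures
        if not btrfs_like_survives(n_devices, n_chunks, failed):
            lost += 1
    return lost / trials

def classic_raid10_loss_probability(n_devices=6):
    """Fixed mirror pairs: the array only dies if both failed drives share a pair."""
    pairs = {(i, i + 1) for i in range(0, n_devices, 2)}
    combos = list(itertools.combinations(range(n_devices), 2))
    return sum(1 for c in combos if c in pairs) / len(combos)

if __name__ == "__main__":
    print("classic raid10, 6 drives, 2 failures:", classic_raid10_loss_probability())
    print("btrfs-like raid10, 6 drives, 2 failures:", btrfs_like_loss_probability())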

                  http://www.dirtcellar.net



                  • #10
                    Originally posted by waxhead

                    My point exactly. You can only lose one drive. Look at the wiki!
                    I will have to get around to that... Wow.

                    Although I have BTRFS+RAID10 on 2 of my systems... It's hardware RAID on both systems as I specifically need the performance.

