Stratis Storage 3.3 Released - Easily Make Use Of Expanded RAID Arrays

  • Stratis Storage 3.3 Released - Easily Make Use Of Expanded RAID Arrays

    Phoronix: Stratis Storage 3.3 Released - Easily Make Use Of Expanded RAID Arrays

    Red Hat's storage team responsible for the Stratis solution has released a new feature update...


  • #2
    I'm probably still missing something, but I was not able to find out how Stratis handles physical device failures. There does not seem to be any failure management, and the old redundancy option (which never supported anything beyond "none") was removed a few versions back. So to me Stratis looks like a RAID-0 lookalike, where any failed device destroys some or all of the data. That seems like a total non-starter, especially as BTRFS and ZFS both have redundancy options.

    • #3
      Originally posted by maage View Post
      I'm probably still missing something, but I was not able to find out how Stratis handles physical device failures. ...
      I don't know Stratis either, but I'd guess you would use it together with dmraid and/or LVM to get redundancy.
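
      For what it's worth, a minimal sketch of that layering, assuming you let mdadm provide the mirror and hand the finished array to Stratis as its block device (the device, pool, and filesystem names here are made up for illustration):

      Code:
        # Build a RAID-1 mirror from two disks; md handles the redundancy
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

        # Hand the redundant array to Stratis as the pool's only block device
        stratis pool create mypool /dev/md0
        stratis filesystem create mypool myfs
        mount /dev/stratis/mypool/myfs /mnt

      Stratis itself never knows the mirror is there, so a disk failure is handled (and must be monitored) entirely at the mdadm layer.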

      • #4
        Originally posted by maage View Post
        I'm probably still missing something, but I was not able to find out how Stratis handles physical device failures. ...
        As far as I can tell, they don't.

        At some point (https://lwn.net/Articles/755454/) they planned to build on the dm-integrity and dm-raid layers, which would have done the job of detecting data corruption and providing redundancy, but that plan doesn't seem to have been acted upon.
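
        That stack can be assembled by hand today with the standard device-mapper tooling, which is roughly what that design sketched; a hedged example (device names are illustrative, and the default checksum settings are used):

        Code:
          # Wrap each disk in dm-integrity so silent corruption surfaces
          # as a read error instead of bad data being returned
          integritysetup format /dev/sdb
          integritysetup format /dev/sdc
          integritysetup open /dev/sdb integ0
          integritysetup open /dev/sdc integ1

          # RAID-1 on top: when one leg reports an integrity error,
          # md can reconstruct the block from the other mirror
          mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              /dev/mapper/integ0 /dev/mapper/integ1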

        • #5
          Originally posted by Developer12 View Post

          As far as I can tell, they don't. ...
          Awesome link. Per that diagram, I have issues with them marking integrity as optional. Long term it should be a priority, assuming it's possible. Otherwise, why wouldn't I stick with one of the systems that can detect, report, and fix data errors transparently? I've had both BTRFS and ZFS find problems during scrubs on enterprise HDDs. It's really strange to consider bit flips and mechanical errors acceptable if you're already going to the trouble of setting up software RAID on Linux.
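
          For anyone who hasn't run one, a scrub is a single command on either filesystem; the mount point and pool name below are just placeholders:

          Code:
            # BTRFS: walk all data and metadata, verifying checksums
            btrfs scrub start /mnt/pool
            btrfs scrub status /mnt/pool

            # ZFS: same idea; with redundancy, detected errors are
            # repaired automatically and listed in the status output
            zpool scrub tank
            zpool status -v tank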

          I'm rooting for stratisd's success, but it has been five years and I haven't seen a peep on the issue of data integrity. Who wants to take that gamble?

          Ultimately, I'm hoping stratisd will have an answer someday.
          Last edited by Mitch; 21 October 2022, 05:24 PM.

          • #6
            Originally posted by Mitch View Post

            Awesome link. Per that diagram, I have issues with them marking integrity as optional. ...
            Ultimately, everyone is fighting the last war. The "table stakes" haven't been adjusted to include data checksumming and repair. The reasons vary, from the buggy reputation of btrfs to the licensing scares of ZFS to the complexity of mdadm, but the default remains ext4. The latest cutting-edge development remains journalling.

            At this rate Microsoft may well beat everyone by releasing an NTFS replacement with built-in checksums and the option to install to a mirror. They'll make two boot SSDs a requirement for their next marketing brand, and overnight everyone will have online checksumming and repair, except people running Linux on ext4. They actually already support something not unlike ZFS in their standard disk-management tools (https://learn.microsoft.com/en-us/wi.../refs-overview). It's a sensible move for them, too, since plenty of the trouble plaguing Windows users comes from file corruption and hard-drive failures.

            • #7
              Originally posted by Developer12 View Post

              Ultimately, everyone is fighting the last war. The "table stakes" haven't been adjusted to include data checksumming and repair. ...
              ReFS has been around for a decade, and still can't serve as a boot drive. It still lacks fundamental features like hard links. For that matter, my workplace has dismissed Stratis for lacking fundamental features, like truly accurate space accounting and volume shrinking.
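
              For what it's worth, the space-accounting complaint seems to stem from thin provisioning: Stratis filesystems are thin volumes, so df reports the virtual filesystem size rather than what the pool has actually consumed. A sketch of where to look instead (the pool and filesystem names are illustrative):

              Code:
                # df sees the thin filesystem's virtual size, not real usage
                df -h /mnt/myfs

                # actual physical consumption is only visible at the pool level
                stratis pool list
                stratis filesystem list mypool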

              • #8
                Originally posted by Snaipersky View Post

                ReFS has been around for a decade, and still can't serve as a boot drive. ...
                Perhaps, but at this rate I could see Microsoft shipping a consumer version of ReFS before ext4 is dethroned from its entrenched position. It's probably not that hard to add the missing features to ReFS on top of the existing infrastructure, just as ZFS has a layer that implements a Unix filesystem on top of its existing object-store model.

                • #9
                  Michael is single-handedly keeping Stratis alive. Everyone on the planet would assume this project had faded into obscurity were it not for the fact that he obsessively reports on every point release.

                  • #10
                    Originally posted by Developer12 View Post

                    Perhaps, but at this rate I could see Microsoft shipping a consumer version of ReFS before ext4 is dethroned from its entrenched position. ...
                    Ext4 continues to be a good, mature, general-purpose filesystem with a legacy going back decades, so some distributions continuing to use it by default shouldn't surprise anyone. But plenty of distributions default to other filesystems: Fedora and openSUSE/SUSE use Btrfs, RHEL uses XFS, and so on.
