OpenZFS 2.1 Gets More Cleanups, Better Documentation Ahead Of Release


  • Phoronix: OpenZFS 2.1 Gets More Cleanups, Better Documentation Ahead Of Release

    The seventh release candidate of OpenZFS 2.1 is now available for testing, and it looks like it will soon cross the finish line as the latest feature release of this open-source ZFS file-system implementation for Linux and FreeBSD systems...


  • #2
    ZFS... another case of "I would like to like it": it eats way too many IOPS and lacks too many features.

    Well... that way, my dm-integrity -> mdadm RAID -> cryptsetup -> LVM -> ext4 setup keeps running for a few more years. It's way faster and better supported (e.g. defrag, swap).

    When I switch my NAS to SSDs, I'll have a look again.
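
    For reference, a minimal sketch of how such a layered stack is typically assembled; the device names (/dev/sda, /dev/sdb) and volume/array names below are illustrative, not taken from this post:

      # dm-integrity gives each disk per-sector checksums
      integritysetup format /dev/sda && integritysetup open /dev/sda int0
      integritysetup format /dev/sdb && integritysetup open /dev/sdb int1

      # mdadm mirrors the two integrity-protected devices
      mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/mapper/int0 /dev/mapper/int1

      # LUKS encryption on top of the array
      cryptsetup luksFormat /dev/md0
      cryptsetup open /dev/md0 crypt0

      # LVM for flexible volumes, then ext4
      pvcreate /dev/mapper/crypt0
      vgcreate nas /dev/mapper/crypt0
      lvcreate -n data -l 100%FREE nas
      mkfs.ext4 /dev/nas/data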


    • #3
      Originally posted by flower:
      ZFS... another case of "I would like to like it": it eats way too many IOPS and lacks too many features.

      Well... that way, my dm-integrity -> mdadm RAID -> cryptsetup -> LVM -> ext4 setup keeps running for a few more years. It's way faster and better supported (e.g. defrag, swap).

      When I switch my NAS to SSDs, I'll have a look again.
      If you're talking about a NAS, then just stick a lot of memory in it and you won't even need swap or defrag. ZFS's heavy use of block caching makes defrag unimportant. You'll have RAID support as well as built-in encryption that can be applied to specific datasets, or different encryption keys for different datasets. So your entire chain there could be replaced with one thing: ZFS.

      As far as the lack of features goes, I don't know what I would do without the ability to send/receive snapshots, and ZVOLs are extremely useful for virtualization.
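
      To illustrate that consolidation, a minimal sketch using a hypothetical pool named "tank" (all names, devices, and sizes are invented for the example; an existing @sunday snapshot is assumed on both sides for the incremental send):

        # mirrored pool: replaces mdadm, with built-in checksumming
        zpool create tank mirror /dev/sda /dev/sdb

        # native per-dataset encryption: replaces cryptsetup
        zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

        # a zvol (virtual block device) for a VM: replaces an LVM volume
        zfs create -V 32G tank/vm-disk0

        # O(1) snapshot plus incremental send/receive for backups
        zfs snapshot tank/secure@monday
        zfs send -i @sunday tank/secure@monday | ssh backuphost zfs recv -u backup/secure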


      • #4
        Originally posted by Chugworth:
        If you're talking about a NAS, then just stick a lot of memory in it and you won't even need swap or defrag. ZFS's heavy use of block caching makes defrag unimportant. You'll have RAID support as well as built-in encryption that can be applied to specific datasets, or different encryption keys for different datasets. So your entire chain there could be replaced with one thing: ZFS.

        As far as the lack of features goes, I don't know what I would do without the ability to send/receive snapshots, and ZVOLs are extremely useful for virtualization.
        My NAS has 128 GB of ECC RAM, but with my 5400 RPM drives, sequential reads with ZFS were down to 10 MB/s after a short while. I even tried it with a special vdev on a fast M.2 SSD.

        My setup can still saturate my 2.5 Gb/s link, without defrag. That M.2 SSD now holds the integrity data in a ZFS zvol, though.
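
        For context, a special vdev stores pool metadata (and optionally small file blocks) on faster devices. A minimal sketch, again with an illustrative pool name and device paths:

          # add a mirrored special vdev (metadata on NVMe) to an existing pool
          zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

          # optionally route small file blocks to the special vdev as well
          zfs set special_small_blocks=64K tank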


        • #5
          Last I knew, OpenZFS would not build on at least some ppc64le kernels as of 5.12, because of the new (and, as of 5.12, default) use of the improved spinlock code, which is exported GPL-only.


          • #6
            Originally posted by flower:

            My NAS has 128 GB of ECC RAM, but with my 5400 RPM drives, sequential reads with ZFS were down to 10 MB/s after a short while. I even tried it with a special vdev on a fast M.2 SSD.

            My setup can still saturate my 2.5 Gb/s link, without defrag. That M.2 SSD now holds the integrity data in a ZFS zvol, though.
            10 MB/s? I'm guessing you must be using SMR drives; they are known to work badly with ZFS.
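
            As a rough way to check: host-managed and host-aware SMR drives report themselves as zoned block devices, but the problematic drive-managed consumer SMR models usually do not, so the model string often has to be compared against the vendor's SMR lists. A sketch (the device path is illustrative):

              # host-aware/host-managed SMR shows up in the ZONED column
              lsblk --output NAME,MODEL,ZONED

              # drive-managed SMR usually reports nothing there, so grab the
              # model string and check it against the vendor's SMR lists
              smartctl -i /dev/sda | grep -i model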


            • #7
              Originally posted by Chugworth:
              10 MB/s? I'm guessing you must be using SMR drives; they are known to work badly with ZFS.
              No, definitely not SMR.
              At the start it was very good. Then I moved all the files (around 4 TB used out of 12 TB net total on a mirrored stripe) to another dataset, which likely caused fragmentation (no way to check), and from then on it got worse.
              It was mostly very big files being extracted in parallel, which may have caused fragmentation as well. But I am not the only one... 5400 RPM drives are just too slow for ZFS. I saw some users on Reddit with the same problem, and the suggestion was always the same: get faster drives.

              The 10 MB/s figure was after two months.

              But well, I am happy with my dm-integrity/mdadm setup. I'll switch to ZFS as soon as I can afford 16x 2 TB datacenter SSDs.

              EDIT: zpool iostat did show a huge IOPS count, and those 5400 RPM drives just don't have many to give, so I am pretty sure about my analysis. The drives were also aligned correctly.
              Last edited by flower; 11 June 2021, 01:46 PM.
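
              For anyone who wants to run the same kind of check, a minimal sketch (the pool name is illustrative; note that zpool's fragmentation figure measures free-space fragmentation, not file fragmentation):

                # per-vdev IOPS and bandwidth, refreshed every 5 seconds
                zpool iostat -v tank 5

                # free-space fragmentation and capacity for the pool
                zpool list -o name,size,capacity,fragmentation tank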


              • #8
                Originally posted by flower:
                ZFS... another case of "I would like to like it": it eats way too many IOPS and lacks too many features.

                Well... that way, my dm-integrity -> mdadm RAID -> cryptsetup -> LVM -> ext4 setup keeps running for a few more years. It's way faster and better supported (e.g. defrag, swap).

                When I switch my NAS to SSDs, I'll have a look again.
                The IOPS penalty against MDRAID+XFS is about 15% when properly tuned, according to tests that I did in 2016. Many would deem this a worthwhile tradeoff for the O(1) snapshot capability and background incremental send/recv, which enable backups without penalizing active workloads. I suspect that the difference between a properly tuned ZFS and your setup is smaller than the one I measured against MDRAID+XFS in the past.

                Fragmentation is a fairly minor issue with ZFS (and most POSIX filesystems), although it does hurt performance in certain workloads. If you really must defragment, you can do it via zfs send/recv. Whether that can be done online depends on how much free space you have, although the best results come from doing it offline (restoring from backup). However, most people simply don't need it.
                Last edited by ryao; 11 June 2021, 02:12 PM.
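
                To make the send/recv route concrete, a hedged sketch that assumes enough free space for a second copy (the dataset names are invented):

                  # snapshot the fragmented dataset and rewrite it sequentially
                  zfs snapshot tank/data@defrag
                  zfs send tank/data@defrag | zfs recv tank/data.new

                  # swap the datasets once the copy is verified
                  zfs rename tank/data tank/data.old
                  zfs rename tank/data.new tank/data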


                • #9
                  Originally posted by CommunityMember:
                  Last I knew, OpenZFS would not build on at least some ppc64le kernels as of 5.12, because of the new (and, as of 5.12, default) use of the improved spinlock code, which is exported GPL-only.
                  There is an open issue for that: https://github.com/openzfs/zfs/issues/11958

                  Canonical is asking if upstream would change the symbol export to retain compatibility.


                  • #10
                    Originally posted by ryao:
                    There is an open issue for that: https://github.com/openzfs/zfs/issues/11958

                    Canonical is asking if upstream would change the symbol export to retain compatibility.
                    An issue was previously opened as https://github.com/openzfs/zfs/issues/11172, but until 5.12 (which changed some default configs) it went mostly unnoticed.

                    Given that spinlocks are a rather common and fundamental thing for third-party modules to need, either directly or indirectly, and realistically not all such modules are going to be GPL'd, it is clearly in the interest of module developers to see this resolved by relaxing the GPL-only requirement for this specific use (if kernel spinlocks are now going to be off-limits, things get interesting). Time (and the authors' willingness to change the license) will tell what approaches end up being necessary for such third-party modules.
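
                    For anyone curious which exports their kernel build marks GPL-only, the Module.symvers file produced by a kernel build lists every exported symbol with its export type; a rough sketch (the tree path and symbol pattern are illustrative):

                      # EXPORT_SYMBOL_GPL entries are unavailable to modules that do
                      # not declare a GPL-compatible MODULE_LICENSE
                      grep -E 'queued_spin_' /usr/src/linux/Module.symvers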
