Optane SSD RAID Performance With ZFS On Linux, EXT4, XFS, Btrfs, F2FS

  • Optane SSD RAID Performance With ZFS On Linux, EXT4, XFS, Btrfs, F2FS

    Phoronix: Optane SSD RAID Performance With ZFS On Linux, EXT4, XFS, Btrfs, F2FS

    This round of benchmarking fun consisted of packing two Intel Optane 900p high-performance NVMe solid-state drives into a system for a fresh round of RAID Linux benchmarking atop the in-development Linux 5.2 kernel, plus providing a fresh look at ZFS On Linux 0.8.1 performance.


  • #2
    Nice results; I'm surprised F2FS performed this well.
    The new result overview is pretty great, thanks Michael!



    • #3
      Yes, the new graph is cool!
      It'd also be nice to start including bcachefs in those, to get an idea of what's coming in the future.



      • #4
        Originally posted by geearf View Post
        Yes, the new graph is cool!
        It'd also be nice to start including bcachefs in those, to get an idea of what's coming in the future.
        As said in the article, Bcachefs is coming in its own article next week. It won't be part of any regular test articles (due to the time involved, etc.) until it (1) is mainlined and (2) has proved its usefulness.
        Michael Larabel
        https://www.michaellarabel.com/



        • #5
          This is one of the weirdest tests I've read lately. RAID1 performing on the same level as RAID0 and in some cases better than no RAID? Either we need new tests for Optane/XPoint or there's something wrong with the whole setup.



          • #6
            Since ZFSonLinux can't use SIMD optimizations anymore without kernel patches, it runs rather slowly these days.
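
            For what it's worth, a quick way to check what the module actually ended up with (a rough sketch, assuming a ZoL 0.8-era layout; exact paths can differ between releases) is to look at the fletcher_4 kstat and the selected implementation:

            cat /proc/spl/kstat/zfs/fletcher_4_bench            # per-implementation checksum throughput measured at module load
            cat /sys/module/zfs/parameters/zfs_fletcher_4_impl  # which fletcher_4 implementation is selected; scalar means no SIMD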



            • #7
              Originally posted by Michael View Post

              As said in the article, Bcachefs is coming in its own article next week. It won't be part of any regular test articles (due to the time involved, etc.) until it (1) is mainlined and (2) has proved its usefulness.
              Whoops, sorry, I didn't see that.

              Thank you!

              As for usefulness, well, I think speed is part of it. If it mounts faster and works faster than btrfs with a similar set of features (eventually), that's nice. If it's slower at everything while still having fewer features, not so much.



              • #8
                Originally posted by bug77 View Post
                This is one of the weirdest tests I've read lately. RAID1 performing on the same level as RAID0 and in some cases better than no RAID? Either we need new tests for Optane/XPoint or there's something wrong with the whole setup.
                Why wouldn't RAID1 be faster? It allows for parallel read operations.

                Back in the day with spinning drives, RAID1 could be significantly faster, as the driver could choose the drive that needed the least amount of seeking, or keep two parallel sequential reads going at full speed.
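
                To see the effect yourself, a rough sketch with mdadm and fio (device names like /dev/nvme0n1 and /dev/nvme1n1 are placeholders, and the fio parameters are just one reasonable choice) is to build the mirror and throw a deep parallel read queue at it:

                # mirror the two NVMe drives (this destroys whatever is on them)
                mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
                # queue plenty of parallel reads so md has the chance to dispatch to both members
                fio --name=mirror-read --filename=/dev/md0 --rw=randread --bs=4k \
                    --iodepth=32 --numjobs=4 --direct=1 --runtime=60 --time_based --group_reporting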



                • #9
                  Originally posted by Michael View Post

                  As said in the article, Bcachefs is coming in its own article next week. It won't be part of any regular test articles (due to the time involved, etc.) until it (1) is mainlined and (2) has proved its usefulness.
                  Hi Michael, could you publish your ZFS configurations? Those results are way too atrocious for my liking, considering I can reach some of them with spinning disks instead of SSDs. So I assume you are using a single default pool, with no volumes, with whatever Ubuntu includes as "defaults", which are in no way right for benchmarking.

                  ZFS should never be used on the bare pool with default values.

                  Some helpful commands to debug that performance:

                  zpool status -v   # pool layout, vdev topology and any errors
                  zfs list          # datasets and their space usage
                  zfs get all       # every dataset property (recordsize, compression, atime, ...)

                  This one can also help you check whether multi-queue scheduling is active on every disk:

                  cat /sys/block/your_drive_here/queue/scheduler   # the active scheduler is shown in brackets, e.g. [none] or [mq-deadline]

                  Also, did you create a RAID0 with ZFS? I mean something akin to zpool create -f [new pool name] /dev/sdx /dev/sdy? Because that is the worst possible scenario for ZFS, and honestly the one scenario no one should use ZFS for: you get ZERO data protection but 100% of the overhead, since each disk has to write metadata and checksums while waiting for the other disk to do the same, which translates into zero scaling. You can add 100 drives to a stripe and your top speed will never be more than +/-10% of the fastest single disk in the best case; in the real world, the more drives you add to the stripe, the worse the performance gets.
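
                  For comparison, a minimal sketch of the layout I have in mind (pool and dataset names are made up here, and ashift=12 assumes 4K-sector drives) would be a mirror plus a dedicated dataset rather than writing to the bare pool:

                  # mirror the two Optane drives instead of striping them
                  zpool create -f -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1
                  # keep the benchmark data on its own dataset so it can be tuned independently
                  zfs create tank/bench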

                  Caveat:
                  I do understand that you are benchmarking the out-of-the-box settings, in scenarios a regular user should be familiar with, I do, but ZFS is not and never was meant for desktops or OOB settings. ZFS is/was designed specifically to be optimized per volume for whatever you need, as is often the case in the enterprise, hence the defaults are the worst-case settings OOB for 99% of the tasks a regular user will need, and especially for benchmarking.
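
                  As a rough illustration of that per-volume tuning (these property values are only common starting points, not a recommendation for this exact benchmark, and tank/bench is just a made-up dataset name):

                  # typical knobs people adjust per dataset before benchmarking
                  zfs set recordsize=16K tank/bench
                  zfs set atime=off tank/bench
                  zfs set xattr=sa tank/bench
                  zfs set compression=lz4 tank/bench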

                  If you post some of that relevant data, I have no problem giving you a hand getting some basics right to improve your ZFS numbers. There are also several gems on the Internet, like the Arch Wiki and Percona sites:

                  https://wiki.archlinux.org/index.php/ZFS (the basics done right)
                  http://open-zfs.org/wiki/Performance_tuning (the medium level optimizations)
                  https://www.percona.com/blog/2018/05...fs-performance (some high-level Percona magic)

                  Also, you need a kernel patch to bring back hardware acceleration in ZFS if you don't have it.

                  Thank you very much for your hard work



                  • #10
                    Originally posted by bug77 View Post
                    This is one of the weirdest tests I've read lately. RAID1 performing on the same level as RAID0 and in some cases better than no RAID? Either we need new tests for Optane/XPoint or there's something wrong with the whole setup.
                    RAID1 is supposed to be faster (at least compared to no RAID) when doing read tests. As long as there are no other bottlenecks, you should get close to double the bandwidth, since you're reading different bytes from both drives at the same time. I can see how RAID0 might be slightly slower, since you have one file split between two different drives, which might involve more logic.
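
                    A quick sanity check (just a sketch, assuming an md mirror with the two NVMe drives as members) is to watch per-device utilisation during a read test and confirm both members are actually serving I/O:

                    # both members should show read traffic if the mirror is balancing reads
                    iostat -x 1 nvme0n1 nvme1n1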

