Linux RAID Performance On NVMe M.2 SSDs

  • Linux RAID Performance On NVMe M.2 SSDs

    Phoronix: Linux RAID Performance On NVMe M.2 SSDs

    For boosting the I/O performance of the AMD EPYC 7601 Tyan server, I decided to play around with a Linux RAID setup this weekend using two NVMe M.2 SSDs. This is our first time running Linux RAID benchmarks of NVMe M.2 SSDs, and for this comparison there are tests of EXT4 and F2FS with MDADM soft RAID, as well as Btrfs using its built-in native RAID capabilities, for some interesting weekend benchmarks.
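
    For anyone wanting to try a similar setup, here is a minimal sketch of the two approaches named above. The device names, mount points and RAID levels are illustrative assumptions, not the article's exact configuration:

        # MDADM soft RAID0 across the two NVMe drives, then EXT4 (or F2FS) on top
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
        mkfs.ext4 /dev/md0        # or: mkfs.f2fs /dev/md0
        mount /dev/md0 /mnt/raid

        # Btrfs native RAID across both drives, no mdadm layer (-d raid0 is also possible)
        mkfs.btrfs -d raid1 -m raid1 /dev/nvme0n1 /dev/nvme1n1
        mount /dev/nvme0n1 /mnt/btrfs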

  • #2
    Why no btrfs raid 0?

    • #3
      Originally posted by thelongdivider View Post
      Why no btrfs raid 0?
      It's stated why in the article...
      Michael Larabel
      https://www.michaellarabel.com/

      • #4
        The Btrfs numbers in one word: confusing.

        • #5
          Originally posted by sdack View Post
          The Btrfs numbers in one word: confusing.
          Yes, I agree. It makes me think something's wrong with the test or setup, because when I did benchmarks on my Btrfs system (4.12 kernel), I got much better results.

          I tried to perform the same tests myself, but it was going to take way too long, so I aborted.

          • #6
            Typo:

            Originally posted by phoronix View Post
            While with the smaller code-base of LLVM (but still huge compared to most code-bases), the file-system wasn't impairing the compilation peformance.

            • #7
              Originally posted by EarthMind View Post

              Yes, I agree. It makes me think something's wrong with the test or setup, because when I did benchmarks on my Btrfs system (4.12 kernel), I got much better results.

              I tried to perform the same tests myself, but it was going to take way too long, so I aborted.
              Keep in mind that all the PTS tests are run multiple times, etc. But yes, compared to other Btrfs tests I've done, the performance on 4.13 is definitely weirder than normal, unless it has something to do with how Btrfs handles NVMe RAID conditions, as this is the first time I have done NVMe RAID testing.
              Michael Larabel
              https://www.michaellarabel.com/

              • #8
                Has anyone got an explanation for how btrfs seriously trails the bunch in almost all tests, but then is the clear winner in Linux compilation?

                • #9
                  This test could have used M.2 SSD heatsinks; otherwise some modules start to throttle. Of course the article doesn't mention this common scenario with M.2 drives. The heatsinks cost $10 per piece. Another issue is the working set size versus the drive's on-chip DRAM cache. Third, it would have been interesting to see a baseline for kernel compilation in tmpfs.
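
                  For reference, a rough way to check both points, assuming the nvme-cli tool is installed; the device name, mount point and tmpfs size are illustrative:

                      # Read the drive's temperature and thermal-throttle counters from SMART
                      nvme smart-log /dev/nvme0 | grep -i -e temperature -e thermal

                      # Mount a RAM-backed tmpfs to get a kernel-compile baseline without disk I/O
                      mkdir -p /mnt/ramdisk
                      mount -t tmpfs -o size=16G tmpfs /mnt/ramdisk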
                  Last edited by caligula; 16 September 2017, 05:10 PM.

                  • #10
                    Except for Blogbench, F2FS appears to be the best file system on NVMe, which is to be expected because it is optimized for NAND flash memory. It should also be the best when testing a flash memory drive with a USB interface.

                    I'm waiting for F2FS as an alternative to EXT4 when installing a Linux operating system onto NAND flash memory.
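
                    A minimal sketch of trying F2FS on a removable flash drive, where the device name is a placeholder (this erases the target):

                        # Format a USB flash drive with F2FS and mount it
                        mkfs.f2fs -l flashdata /dev/sdX1    # replace sdX1 with the real device
                        mount -t f2fs /dev/sdX1 /mnt/flash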
                    Last edited by Azrael5; 16 September 2017, 05:11 PM.
