Linux RAID Performance On NVMe M.2 SSDs With EXT4, Btrfs, F2FS

Written by Michael Larabel in Storage on 16 September 2017 at 12:15 PM EDT.

For boosting the I/O performance of the AMD EPYC 7601 Tyan server, I decided to play around with a Linux RAID setup this weekend using two NVMe M.2 SSDs. This is our first time running Linux RAID benchmarks on NVMe M.2 SSDs, and for this comparison there are tests of EXT4 and F2FS atop MDADM soft RAID as well as Btrfs using its built-in native RAID capabilities for some interesting weekend benchmarks.

The Tyan Transport SX TN70A-B8026 2U server arrived with a Samsung 850 PRO SATA 3.0 SSD, but for pushing things further, I picked up two additional Corsair MP500 NVMe SSDs. I have put a half dozen or so Corsair MP500 NVMe drives into different benchmarking systems over the past few months, and when it comes to budget-friendly but performant NVMe drives, the MP500 has been among the best at this point. The 120GB version can be found for about $100 USD, which is sufficient capacity for these benchmark systems while delivering a nice boost in I/O performance over SATA SSDs.

The TN70A-B8026 does offer 24 x 2.5-inch NVMe hot-swap bays for really maximizing the I/O ability, which is made possible by the 128 PCI-E 3.0 lanes of EPYC. Unfortunately I'm still working on acquiring some enterprise-grade 2.5-inch NVMe drives (or a number of M.2 to 2.5-inch adapters), but for the time being the Corsair MP500 drives in RAID should offer a nice I/O performance boost. The CSSD-F120GBMP500 is rated for maximum sequential reads up to 3,000 MB/s, maximum sequential writes up to 2,400 MB/s, random reads up to 150K IOPS, random writes up to 90K IOPS, and 175TBW endurance.

With the two Corsair Force MP500 drives installed in this AMD EPYC server running Ubuntu x86_64 with the Linux 4.13 kernel, I ran a variety of different benchmarks for reference purposes (a rough sketch of the array setup commands follows the list):

- Samsung 850 256GB EXT4
- Force MP500 120GB EXT4
- Force MP500 120GB F2FS
- Force MP500 120GB Btrfs
- Force MP500 120GB Btrfs + LZO
- 2 x 120GB Force MP500 EXT4 RAID0
- 2 x 120GB Force MP500 EXT4 RAID1
- 2 x 120GB Force MP500 F2FS RAID0
- 2 x 120GB Force MP500 F2FS RAID1
- 2 x 120GB Force MP500 Btrfs RAID1

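For those wanting to replicate a similar setup, below is a minimal sketch of how the MDADM arrays can be assembled and formatted. The device names (/dev/nvme0n1, /dev/nvme1n1), the /dev/md0 target, and the /mnt/raid mount point are assumptions for illustration rather than the exact commands used for this article.

```
# Assemble the two NVMe drives into a striped RAID0 array (device names assumed).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Or build a mirrored RAID1 array instead:
# sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Format the resulting md device with EXT4 (or F2FS) and mount it for testing.
sudo mkfs.ext4 /dev/md0        # or: sudo mkfs.f2fs /dev/md0
sudo mount /dev/md0 /mnt/raid
```
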
A few notes about the selection: I didn't run any OpenZFS benchmarks for this testing since ZFS is an out-of-tree Linux file-system and will be saved for its own article, with this round just sticking to the interesting file-systems available in Linux 4.13. You will also notice there are no RAID0 tests for Btrfs. Btrfs on Linux 4.13 with RAID0 ended up being very unstable and would result in the system crashing. This came as a surprise since Btrfs RAID0/RAID1 has usually been very solid (normally it's Btrfs RAID5/RAID6 where things can get risky), at least with SATA SSDs, but that was not the case today. I also did a single-drive Btrfs NVMe run for reference with its native LZO compression enabled.
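
For the Btrfs RAID1 configuration, the file-system's built-in multi-device support is used rather than MDADM. A rough sketch, again with assumed device names and mount point:

```
# Create a Btrfs file-system spanning both NVMe drives with mirrored data and metadata.
sudo mkfs.btrfs -f -d raid1 -m raid1 /dev/nvme0n1 /dev/nvme1n1

# Mount via either member device; the compressed run adds the compress=lzo mount option.
sudo mount /dev/nvme0n1 /mnt/btrfs
# sudo mount -o compress=lzo /dev/nvme0n1 /mnt/btrfs
```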

Each of these file-systems was tested out-of-the-box with its stock mount options on Linux 4.13 except where otherwise noted (e.g. LZO). All of these Linux I/O benchmarks were carried out in a fully-automated and reproducible manner using the Phoronix Test Suite.
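
As an example of how such runs can be reproduced, the Phoronix Test Suite can be installed from the Ubuntu repositories and pointed at a disk test profile; pts/fio below is just an illustrative pick and not necessarily the exact set of test profiles used for this article.

```
# Install the Phoronix Test Suite and run a disk benchmark (illustrative profile choice).
sudo apt-get install phoronix-test-suite
phoronix-test-suite benchmark pts/fio
```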

