Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results

  • Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results

    Phoronix: Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results

    With the Linux 4.17 kernel soon to be released, I've been running some fresh file-system and I/O scheduler tests -- among other benchmarks -- of this late stage kernel code. For your viewing pleasure today are tests of a high performance Intel Optane 900p NVMe SSD with different I/O scheduler options available with Linux 4.17.

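    For readers who want to check or switch schedulers on their own hardware before comparing numbers, here is a minimal sketch (the nvme0n1 device name and the scheduler written at the end are assumptions; adjust for your system) that reads and sets the active I/O scheduler through the standard blk-mq sysfs interface:

    # Minimal sketch: query and switch the active I/O scheduler via sysfs.
    # The device name is an assumption; run `lsblk` to find yours, and note
    # that changing the scheduler requires root.
    from pathlib import Path

    DEV = "nvme0n1"  # assumed device name
    SCHED = Path(f"/sys/block/{DEV}/queue/scheduler")

    def available_schedulers() -> str:
        # Lists every scheduler built for this queue; the active one is
        # shown in brackets, e.g. "[none] mq-deadline kyber bfq".
        return SCHED.read_text().strip()

    def set_scheduler(name: str) -> None:
        # Writing a scheduler name makes it active for this device.
        SCHED.write_text(name)

    print("before:", available_schedulers())
    set_scheduler("kyber")  # or "bfq", "mq-deadline", "none"
    print("after: ", available_schedulers())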

  • #2
    Intel Optane is a really niche NVMe drive, though. Maybe tests with a recent Samsung 970 Evo or WD Black NVMe would be more relevant for most users.

    • #3
      And the queue depths were?

      • #4
        How do you enable the low_latency property of BFQ? A module parameter? sysfs? sysctl? Where?
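
        For what it's worth, BFQ exposes low_latency as a per-device sysfs attribute (not a module parameter or sysctl), and the attribute is only present while bfq is the active scheduler for the device. A minimal sketch, assuming an nvme0n1 device already running bfq:

        # Sketch: toggle BFQ's low_latency knob through sysfs (root required
        # to write; the device name below is an assumption).
        from pathlib import Path

        LOW_LATENCY = Path("/sys/block/nvme0n1/queue/iosched/low_latency")

        def low_latency_enabled() -> bool:
            # The attribute holds "1" (enabled, the default) or "0".
            return LOW_LATENCY.read_text().strip() == "1"

        def set_low_latency(enabled: bool) -> None:
            LOW_LATENCY.write_text("1" if enabled else "0")

        print("low_latency enabled:", low_latency_enabled())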

        • #5
          What happened to CFQ? Last time I checked that was the default for HDDs and one of the most common schedulers out there. It's very strange to see it omitted from the comparison.

          • #6
            What was the block size used on the Optane? If you published it, I didn't see it (sorry).

            • #7
              Originally posted by edwaleni View Post
              What was the block size used on the Optane? If you published it, I didn't see it (sorry).
              If you are interested, I did several benchmarks of the Optane with different sector sizes: http://www.linuxsystems.it/2018/05/o...t4-benchmarks/

              • #8
                Originally posted by Shnatsel View Post
                What happened to CFQ? Last time I checked that was the default for HDDs and one of the most common schedulers out there. It's very strange to see it omitted from the comparison.
                CFQ is essentially irrelevant to NVMe drives. CFQ isn't one of the multi-queue schedulers, and NVMe is optimized to work best with multiple IO queues, generally one queue per CPU. That means a CPU does not have to hand its IO to another CPU for queuing; it simply submits it to the NVMe device.

                BFQ is the multi-queue replacement for CFQ and is good for almost all devices. Kyber keeps NVMe responsive under heavy IO load by keeping the queue depth manageable so high-priority IO doesn't have to wait too long. That matters because NVMe queues can, in theory (there are usually hardware limits), grow to 65535 entries under the "none" scheduler.
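
                To see the multi-queue layout on a given drive, here is a small sketch (device name assumed) that reports how many hardware submission queues blk-mq created and the per-queue request depth the block layer allows:

                # Sketch: inspect blk-mq details for an NVMe device via sysfs.
                # The device name is an assumption; the paths are the standard
                # /sys/block/<dev>/queue and /sys/block/<dev>/mq locations.
                from pathlib import Path

                DEV = "nvme0n1"
                QUEUE = Path(f"/sys/block/{DEV}/queue")
                MQ = Path(f"/sys/block/{DEV}/mq")

                # One subdirectory per hardware submission queue (typically
                # one queue per CPU for NVMe).
                print("hardware queues:", sum(1 for d in MQ.iterdir() if d.is_dir()))
                print("nr_requests:", (QUEUE / "nr_requests").read_text().strip())
                print("scheduler:  ", (QUEUE / "scheduler").read_text().strip())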

                • #9
                  Originally posted by pegasus View Post
                  And the queue depths were?
                  Defaults
                  Michael Larabel
                  https://www.michaellarabel.com/

                  • #10
                    Originally posted by Shnatsel View Post
                    What happened to CFQ? Last time I checked that was the default for HDDs and one of the most common schedulers out there. It's very strange to see it omitted from the comparison.
                    It was just an oversight; I forgot to add it to the test queue. It will be in the SSD/HDD tests coming up.
                    Michael Larabel
                    https://www.michaellarabel.com/
