Linux 5.6 I/O Scheduler Benchmarks: None, Kyber, BFQ, MQ-Deadline

  • Linux 5.6 I/O Scheduler Benchmarks: None, Kyber, BFQ, MQ-Deadline

    Phoronix: Linux 5.6 I/O Scheduler Benchmarks: None, Kyber, BFQ, MQ-Deadline

    While some Linux distributions are still using MQ-Deadline or Kyber by default for NVMe SSD storage, using no I/O scheduler still tends to perform the best overall for this speedy storage medium.

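    For anyone wanting to check or change this on their own system, the active scheduler is exposed per-device through sysfs. A minimal sketch, assuming a hypothetical device name nvme0n1 (substitute your own; writing requires root):

    ```python
    #!/usr/bin/env python3
    # Show and optionally set the I/O scheduler for a block device via
    # /sys/block/<dev>/queue/scheduler. The device name is an assumption.
    import sys
    from pathlib import Path

    def sched_file(dev: str) -> Path:
        return Path("/sys/block") / dev / "queue" / "scheduler"

    def current(dev: str) -> str:
        # The file reads like "[none] mq-deadline kyber bfq";
        # brackets mark the active scheduler.
        text = sched_file(dev).read_text()
        return text[text.index("[") + 1 : text.index("]")]

    def set_scheduler(dev: str, name: str) -> None:
        # Takes effect immediately; does not persist across reboots.
        sched_file(dev).write_text(name)

    if __name__ == "__main__":
        dev = sys.argv[1] if len(sys.argv) > 1 else "nvme0n1"
        print(f"{dev}: {current(dev)}")
    ```

    For a choice that survives reboots, a udev rule is the usual route rather than a script like this.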

  • #2
    Wow, now this was interesting ... I was a fan of BFQ but it looks like I should switch to none ...

    Let's see the next test with HDDs ...

    • #3
      Well, for many it's not just raw performance that is important but also responsiveness and multitasking performance.

      For me on a SATA SSD, the raw file copy performance is best with noop or deadline, but the "system" performance is much better with BFQ.

      • #4
        Good article, and I'd also like to see if Optane has similar responses to cutting out the I/O scheduler.

        It can also depend quite a bit on the workload, since DBs like Postgres are already doing their own I/O optimizations inside the software, so kernel-level schedulers mostly just get in the way for those workloads. However, for other workloads that aren't as optimized, some type of kernel scheduler might still be helpful. Obviously there's a huge gap between NVMe and old spinning storage, but there may be a need for an NVMe-specific scheduler, if one would even be helpful on these devices.
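        As a rough illustration of that point (the file name is hypothetical, and this is not how Postgres specifically does it): an application can open a file with O_DIRECT to bypass the kernel page cache and manage its own buffering, which is why kernel-level heuristics add little for such software. Linux-only, and O_DIRECT requires block-aligned buffers and sizes:

        ```python
        import mmap
        import os

        BLOCK = 4096  # assumed logical block size / alignment

        # O_DIRECT transfers go straight to the device, skipping the page
        # cache -- the application decides what to cache, not the kernel.
        fd = os.open("somefile.dat", os.O_RDONLY | os.O_DIRECT)

        # os.read() may hand back an unaligned buffer, which O_DIRECT
        # rejects; an anonymous mmap is page-aligned, so readv() into it works.
        buf = mmap.mmap(-1, BLOCK)
        n = os.readv(fd, [buf])
        os.close(fd)
        print(f"read {n} bytes directly from the device")
        ```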

        • #5
          I'd also like to see 900p Optane results

          • #6
            Originally posted by haplo602 View Post
            Let's see the next test with HDDs ...
            Yes, that would be interesting. HDDs still offer far more economical bulk storage.

            • #7
              "none" did surprisingly well. Is it bad for my SSD if I switch to no scheduler at all?

              • #8
                I've been running with none (no built-in I/O scheduler) for over 7 years now on my workstations with low QD. Bloat.

                • #9
                  You have to remember that NVMe at the hardware level is "kind of" parallel, so doing no scheduling and submitting requests straight away is simply fastest, and you don't have to worry about those problems. However, for AHCI SATA drives and also IDE drives the story might be different. And there's still the question of SSD vs HDD. NVMe is by far the best scenario for "none".
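                  You can actually see that parallelism from userspace: with blk-mq, every hardware dispatch queue shows up as a numbered directory under /sys/block/<dev>/mq/, with the CPUs it serves as cpuN subdirectories. A small sketch (the device name is an assumption):

                  ```python
                  from pathlib import Path

                  # NVMe typically exposes one hardware queue per CPU, which is
                  # the parallelism that lets "none" skip scheduling entirely.
                  dev = "nvme0n1"  # assumption: replace with your device
                  for q in sorted((Path("/sys/block") / dev / "mq").iterdir(),
                                  key=lambda p: int(p.name)):
                      cpus = sorted(int(c.name[3:]) for c in q.glob("cpu[0-9]*"))
                      print(f"queue {q.name}: CPUs {cpus}")
                  ```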

                  • #10
                    Originally posted by piotrj3 View Post
                    You have to remember that NVMe at the hardware level is "kind of" parallel, so doing no scheduling and submitting requests straight away is simply fastest, and you don't have to worry about those problems. However, for AHCI SATA drives and also IDE drives the story might be different. And there's still the question of SSD vs HDD. NVMe is by far the best scenario for "none".
                    Does NVMe not suffer the responsiveness issues with `none` that SATA disks do? This benchmark shows it's great for throughput, but gives no indication of the impact on responsiveness under load when trying to do anything else via the GUI.
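                    That would be worth measuring. A crude way to approximate it is to time small paced reads while a background writer hammers the same device, then repeat per scheduler. A rough sketch of the idea -- the file names are placeholders, and page-cache hits will flatter the numbers, so a real test would use something like fio with direct I/O:

                    ```python
                    import os
                    import random
                    import threading
                    import time

                    PROBE = "probe.bin"  # placeholder: existing large file on the device
                    stop = threading.Event()

                    def hog():
                        # Background load: big synchronous writes keep the queue busy.
                        chunk = os.urandom(1 << 20)
                        with open("hog.bin", "wb") as f:
                            while not stop.is_set():
                                f.write(chunk)
                                os.fsync(f.fileno())

                    def probe(samples=200):
                        # Paced 4 KiB reads at random offsets, like an interactive task.
                        size = os.path.getsize(PROBE)
                        lats = []
                        with open(PROBE, "rb") as f:
                            for _ in range(samples):
                                f.seek(random.randrange(0, size - 4096))
                                t0 = time.perf_counter()
                                f.read(4096)
                                lats.append(time.perf_counter() - t0)
                                time.sleep(0.01)
                        lats.sort()
                        return lats[len(lats) // 2], lats[int(len(lats) * 0.99)]

                    threading.Thread(target=hog, daemon=True).start()
                    p50, p99 = probe()
                    stop.set()
                    print(f"p50={p50 * 1e3:.2f} ms  p99={p99 * 1e3:.2f} ms")
                    ```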
