
Kingston HyperX Predator SSD On Linux: Still Not Making Par


  • Kingston HyperX Predator SSD On Linux: Still Not Making Par

    Phoronix: Kingston HyperX Predator SSD On Linux: Still Not Making Par

    Last week I posted some initial Kingston HyperX Predator M.2 SSD Linux benchmarks. Since those results, which were rather disappointing when factoring in the cost of this solid-state storage, I've run some more tests. While the performance has improved with a newer Skylake Linux system, the results are still not as great as advertised and I'm just returning the darn drive.

  • #2
    I've been pretty impressed with the Samsung SM951 so far:
    # dd if=/dev/sda of=/dev/null bs=131072
    1953586+1 records in
    1953586+1 records out
    256060514304 bytes (256 GB) copied, 102.796 s, 2.5 GB/s

    Comment


    • #3
      Originally posted by BillBroadley View Post
      I've been pretty impressed with the Samsung SM951 so far:
      # dd if=/dev/sda of=/dev/null bs=131072
      1953586+1 records in
      1953586+1 records out
      256060514304 bytes (256 GB) copied, 102.796 s, 2.5 GB/s
      Err.. is your distro remapping the device node to sda? It's a NVMe drive.
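
A quick way to settle whether the kernel is driving a disk through NVMe or through AHCI/SATA is to look at the device nodes and the PCI class. A generic sketch; exact output varies by distro, and the `-d ::class` filter needs a reasonably recent pciutils:

```shell
# NVMe drives get /dev/nvmeXnY device nodes; AHCI/SATA-protocol devices
# (including AHCI-mode PCIe SSDs) appear as /dev/sdX.
lsblk -d -o NAME,TRAN,MODEL 2>/dev/null

# List NVMe controllers on the PCI bus (PCI class 0108 = NVM Express):
lspci -d ::0108 2>/dev/null

# Check for kernel-registered NVMe device nodes:
ls /dev/nvme* 2>/dev/null || echo "no NVMe device nodes"
```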

      Comment


      • #4
        Originally posted by phoronix View Post
        Phoronix: Kingston HyperX Predator SSD On Linux: Still Not Making Par

        Last week I posted some initial Kingston HyperX Predator M.2 SSD Linux benchmarks. Since those results, which were rather disappointing when factoring in the cost of this solid-state storage, I've run some more tests. While the performance has improved with a newer Skylake Linux system, the results are still not as great as advertised and I'm just returning the darn drive.

        http://www.phoronix.com/vr.php?view=22217
        Those numbers still look like a device in SATA mode... not NVMe. I don't think it's broken, unless the kernel identifies it as NVMe...

        Comment


        • #5
          Originally posted by milkylainen View Post
          Err.. is your distro remapping the device node to sda? It's a NVMe drive.
          No, it's not.
          No idea what's wrong, though. The kernel log might say how it's set up exactly, although I'm not sure you'd see anything there (there's always the possibility it's not configured as PCIe 2.0 x4 for some reason, or a bug in power management, which can also drop PCIe lanes, but that all seems pretty unlikely).
          That said, for PCIe SSDs to be faster than SATA ones you need the right workloads anyway - they are essentially faster at large file reads but not much else (the NVMe ones are additionally much faster with small blocks, but only at large queue depths). So they don't offer much in the value department, unfortunately (although my belief is that there's no real technical reason why they are more expensive).
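
On the dropped-lanes theory, the negotiated PCIe link can be checked directly against the card's capability. A sketch; the PCI address is discovered at runtime, and `lspci -vv` may need root to show the full capability list:

```shell
# Grab the first NVMe controller's PCI address, if any (class 0108 =
# NVM Express), then compare the negotiated link (LnkSta) with the
# advertised capability (LnkCap). A link trained at "2.5GT/s, Width x2"
# instead of "5GT/s, Width x4" would quarter the achievable bandwidth.
ADDR=$(lspci -d ::0108 2>/dev/null | awk 'NR==1 {print $1}')
if [ -n "$ADDR" ]; then
    lspci -vv -s "$ADDR" | grep -E 'LnkCap:|LnkSta:'
else
    echo "no NVMe controller found"
fi
```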

          Comment


          • #6
            @debianxfce: If it's a V300, which firmware version?

            Kingston mem-gate: http://www.anandtech.com/show/7763/a...er-micron-nand

            Comment


            • #7
              For dd you need to at least add iflag=direct to get reproducible results (without the FS cache).

              Comment


              • #8
                Originally posted by marvin42 View Post
                For dd you need to at least add iflag=direct to get reproducible results (without the FS cache).
                While true, when dd'ing over an entire SSD (typically an order of magnitude or more larger than RAM), the page cache isn't likely to make all that much of a difference.
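
The cache effect is easy to demonstrate on a file that does fit in RAM. A sketch; the paths are illustrative, and O_DIRECT requires filesystem support (it fails on tmpfs, for example):

```shell
# Create a 64 MiB test file on disk, then compare a cached re-read
# against a direct (cache-bypassing) read.
TESTFILE=$(mktemp /var/tmp/ddtest.XXXXXX)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>/dev/null

dd if="$TESTFILE" of=/dev/null bs=128K   # first read populates the page cache
dd if="$TESTFILE" of=/dev/null bs=128K   # second read is served from RAM
dd if="$TESTFILE" of=/dev/null bs=128K iflag=direct \
    || echo "O_DIRECT not supported on this filesystem"

rm -f "$TESTFILE"
```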

                Comment


                • #9
                  Originally posted by milkylainen View Post

                  Err.. is your distro remapping the device node to sda? It's a NVMe drive.

                  The SM951 is available in AHCI and NVMe versions. In my case it's the AHCI one, and it's the only storage connected, so I'm quite sure it's the SM951 I'm reading from and writing to.

                  Comment


                  • #10
                    Originally posted by marvin42 View Post
                    for dd you need at least add iflag=direct to get reproducible results (without fs cache).
                    I wouldn't expect much difference since I'm testing a 256GB SSD on a system with 32GB of RAM, but it turns out it does return somewhat lower numbers:

                    # dd if=/dev/sda of=/dev/null bs=131072 iflag=direct
                    1953586+1 records in
                    1953586+1 records out
                    256060514304 bytes (256 GB) copied, 136.214 s, 1.9 GB/s

                    Just to retest:
                    # uptime;dd if=/dev/sda of=/dev/null bs=131072 iflag=direct
                    1953586+1 records in
                    1953586+1 records out
                    256060514304 bytes (256 GB) copied, 136.501 s, 1.9 GB/s

                    Not sure all of that drop is real, though. With direct I/O, dd might not overlap I/O as much, waiting on each block before issuing the next. As a test I'll rerun with twice the block size:
                    # uptime;dd if=/dev/sda of=/dev/null bs=262144 iflag=direct
                    976793+1 records in
                    976793+1 records out
                    256060514304 bytes (256 GB) copied, 126.692 s, 2.0 GB/s

                    If I'm right that will keep improving:
                    # uptime;dd if=/dev/sda of=/dev/null bs=524288 iflag=direct
                    488396+1 records in
                    488396+1 records out
                    256060514304 bytes (256 GB) copied, 121.121 s, 2.1 GB/s
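
The block-size sweep can be scripted rather than rerun by hand. A sketch; DEV is a placeholder for the device (or a large file) under test, the reads are read-only, and opening a raw device typically needs root:

```shell
# Read 1024 blocks from DEV at increasing block sizes with direct I/O and
# print just dd's throughput line for each. Note the total bytes read grows
# with the block size; fix count*bs if you want a constant-sized run.
DEV=/dev/sda    # placeholder: point at the device or file to test
for BS in 128K 256K 512K 1M 4M; do
    printf 'bs=%s: ' "$BS"
    dd if="$DEV" of=/dev/null bs="$BS" count=1024 iflag=direct 2>&1 | tail -n 1
done
```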

                    Comment
