
6-Disk ZFS On Linux RAID Server Benchmarks


  • 6-Disk ZFS On Linux RAID Server Benchmarks

    Phoronix: 6-Disk ZFS On Linux RAID Server Benchmarks

    With the recent big update to ZFS On Linux I've begun running some new ZFS Linux file-system tests. Today I have just some preliminary numbers from running ZOL 0.6.4 with various RAID levels across six 300GB H106030SDSUN300G 10K RPM SAS drives...


  • #2
    Please do a self-healing / crash test



    • #3
      What is the difference between raidz and raidz1? I thought those were exactly the same scheme.

      RAIDZ (or RAIDZ1) = similar to RAID 5
      RAIDZ2 = similar to RAID 6
      RAIDZ3 = even more parity than RAIDZ2
      mirrored = RAID 1
      striped = RAID 0

      So is this a mistake in the article, or what was actually meant by raidz vs. raidz1?
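      For what it's worth, in the zpool command raidz is accepted as an alias for raidz1, so both names create the same single-parity vdev type. A sketch that would show this on a system with ZFS installed (pool and device names are illustrative):

```shell
# "raidz" and "raidz1" build the same single-parity vdev;
# zpool status reports the vdev as "raidz1-0" either way.
zpool create tank1 raidz  /dev/sda /dev/sdb /dev/sdc
zpool create tank2 raidz1 /dev/sdd /dev/sde /dev/sdf
zpool status tank1 tank2
```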



      • #4
        Well, something that would be useful is:

        * the base performance of each disk
        * whether or not the disks are behind a RAID controller; if so, which controller, and how the disks are exposed to Linux (i.e. JBOD or a single-disk RAID0 volume per drive on the controller)
        * whether the pool uses compression, and if so, which algorithm
        * how the pool was created, with which options, etc.

        Same thing for the btrfs benchmarks
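        Most of those details could be captured with a few standard commands on the benchmark host (a sketch; the pool name tank and the disk name are assumptions):

```shell
# Vdev layout, i.e. how the disks are arranged and exposed to the pool:
zpool status -v tank
# Pool-wide properties, including creation-time options such as ashift:
zpool get all tank
# Compression setting/algorithm and recordsize for the dataset:
zfs get compression,recordsize tank
# Active I/O scheduler for one of the member disks (name illustrative):
cat /sys/block/sda/queue/scheduler
```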



        • #5
          Originally posted by fahrenheit View Post
          Well, something that would be useful is:

          * the base performance of each disk
          * whether or not the disks are behind a RAID controller; if so, which controller, and how the disks are exposed to Linux (i.e. JBOD or a single-disk RAID0 volume per drive on the controller)
          * whether the pool uses compression, and if so, which algorithm
          * how the pool was created, with which options, etc.

          Same thing for the btrfs benchmarks
          also:

          Whether noop was used as the disk scheduler.
          Whether an ARC limit was set.
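          Both of those are easy to set on ZFS On Linux; a sketch, with an illustrative disk name and a 4 GiB ARC cap as an example value:

```shell
# Select the noop elevator for one member disk (runtime setting):
echo noop > /sys/block/sda/queue/scheduler

# Cap the ARC at 4 GiB (value in bytes) via the ZoL module parameter:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# To make the ARC limit persistent across reboots:
#   echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
```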



          • #6
            Originally posted by energyman View Post
            also:

            Whether noop was used as the disk scheduler.
            Whether an ARC limit was set.
            He always uses system defaults for the benchmarks.



            • #7
              Originally posted by vadix View Post
              He always uses system defaults for the benchmarks.
              which means screwing over every fs that is not the default...



              • #8
                Do you have an older benchmark with the same set up, but on Linux 3.x, that we can compare with?



                • #9
                  normally i'd be thinking two raidz arrays of 3 disks striped together would give ok performance, reliability, etc. for 10k sas drives, although i'd feel more comfortable with a hot spare; and those drives are likely to be old, and raidz2 would give more performance if ram requirements aren't high and random access isn't high. (most of the working set should fit in ram)

                  the alternative is 3 pairs of mirrors (it looks like the test could be a 6-way mirror, with every disk mirrored to each other?)

                  software-wise, for zfs tuning: zfs set redundant_metadata=most, zfs set compression=lz4

                  if not needing much ram you could increase the arc cache size too, as by default it'll only use half of memory.

                  if wanting higher performance you could swap the 2 system disks out for ssds, and have system/zil/l2arc share the ssds; although the sas controller may need upgrading if you want good sata3 speeds.
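                  The layouts and tuning settings above look like this in zpool/zfs terms (a sketch; the pool name and device names are illustrative):

```shell
# Two 3-disk raidz vdevs; the pool stripes writes across both vdevs:
zpool create tank \
    raidz /dev/sda /dev/sdb /dev/sdc \
    raidz /dev/sdd /dev/sde /dev/sdf

# Alternative layout, three mirrored pairs:
#   zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

# The tuning settings mentioned above:
zfs set redundant_metadata=most tank
zfs set compression=lz4 tank
```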



                  • #10
                    Originally posted by mercutio View Post
                    normally i'd be thinking two raidz arrays of 3 disks striped together would give ok performance, reliability, etc. for 10k sas drives, although i'd feel more comfortable with a hot spare; and those drives are likely to be old, and raidz2 would give more performance if ram requirements aren't high and random access isn't high. (most of the working set should fit in ram)

                    the alternative is 3 pairs of mirrors (it looks like the test could be a 6-way mirror, with every disk mirrored to each other?)

                    software-wise, for zfs tuning: zfs set redundant_metadata=most, zfs set compression=lz4

                    if not needing much ram you could increase the arc cache size too, as by default it'll only use half of memory.

                    if wanting higher performance you could swap the 2 system disks out for ssds, and have system/zil/l2arc share the ssds; although the sas controller may need upgrading if you want good sata3 speeds.
                    didn't know about the redundant_metadata setting, or that ZFS offered that much redundancy - excellent!


                    having a larger ARC, beyond half of memory, doesn't necessarily mean higher performance:

                    The problem I reported on several pull requests (#3190 (comment) and #3216 (comment)) was a system hang when doing a 100% read pattern with 16 threads. After some digging, I found the problem is lo...
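                    For anyone wanting to watch this on their own box: on a Linux system with ZoL loaded, the current ARC size and its ceiling can be read from the kstats (a sketch; the path is the standard ZoL location):

```shell
# "size" is the current ARC size and "c_max" its ceiling, both in bytes:
awk '$1 == "size" || $1 == "c_max" { print $1, $3 }' /proc/spl/kstat/zfs/arcstats
```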

