Some Quick Tests With ZFS, F2FS, Btrfs & Friends On Linux 4.4

    Phoronix: Some Quick Tests With ZFS, F2FS, Btrfs & Friends On Linux 4.4

    Our latest benchmarking fun from the freshly minted Linux 4.4 kernel is testing all of the popular built-in Linux file-systems plus the recently updated ZFS On Linux. File-systems tested for this comparison were Btrfs, XFS, EXT4, F2FS, ReiserFS, NTFS, and ZFS.


  • #2
    Good benches. If you're looking at trying something different, a couple in particular would be interesting: tux3 and reiser4, plus maybe something that no one knows about. Then just throw in ext4 to show a 'baseline' for performance.

    • #3
      Originally posted by profoundWHALE:
      Good benches. If you're looking at trying something different, a couple in particular would be interesting: tux3 and reiser4, plus maybe something that no one knows about. Then just throw in ext4 to show a 'baseline' for performance.
      I did Reiser4 and Tux3 tests the last time there were major updates to them. AFAIK, nothing major has landed in either of them since my last round of tests... The only use now would be to see how they compare against the latest mainline Linux file-systems, as opposed to expecting any boosts out of Reiser4 or Tux3.
      Michael Larabel
      https://www.michaellarabel.com/

      • #4
        It would be good to see those tests for USB2 + USB3 flash drives as well, including exfat and vfat, because I never know what's best to use on those.
        For internal SSDs it's only F2FS for me, because it works well, and even after a crash my data was still there.

        • #5
          First, I don't get this: the last fs benches were from December, or maybe November? So what did you expect to have happened since then? Surely not much, of course.

          So the only interesting part of these benches would be the relative numbers: which fs got regressions and which gained the most speed. Blending in the last numbers too would be nice; right now everybody has to compare them manually if they want to learn anything from these numbers, except maybe people who are new to Phoronix.

          • #6
            What about testing *efficiency* with a dual-core Intel and a rotational disk instead of the usual high-end datacenter uber-server?

            • #7
              Originally posted by blackiwid:
              First, I don't get this: the last fs benches were from December, or maybe November? So what did you expect to have happened since then? Surely not much, of course.

              So the only interesting part of these benches would be the relative numbers: which fs got regressions and which gained the most speed. Blending in the last numbers too would be nice; right now everybody has to compare them manually if they want to learn anything from these numbers, except maybe people who are new to Phoronix.
              Yes, if the same tests were done on the same system, the deltas would be really nice. It should be quite easy to draw the old and new data on the same diagram (something like the sketch at the end of this post).

              Also, regarding switches, it would make sense to test each file system with various switches on/off and pick the best/worst/avg for the main test against the other file systems. Right now we can't really know what the potential of each system is.

              Another complaint is HDD vs SSD. I think the situation for home users is 50/50 on which one to pick nowadays. You could also test something like Linux bcache with a hybrid disk setup. These shouldn't be too exotic: there are lots of real people using an SSD & HDD hybrid, and both SSDs and HDDs as the master disk on desktops. On servers you want raidX or raidzX comparisons, and a comparison of resilvering / scrub speed.
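
              Something along these lines would do it; a minimal matplotlib sketch, where the file-system list and all the numbers are made-up placeholders standing in for the two Phoronix runs:

              ```python
              # Sketch: plot old-kernel vs. new-kernel results side by side
              # for each file-system. The numbers are placeholders, not real
              # Phoronix results.
              import matplotlib.pyplot as plt
              import numpy as np

              filesystems = ["Btrfs", "XFS", "EXT4", "F2FS", "ReiserFS", "NTFS", "ZFS"]
              old_run = [210, 250, 240, 260, 150, 90, 180]  # e.g. Linux 4.3 run, MB/s
              new_run = [220, 248, 245, 270, 148, 95, 185]  # e.g. Linux 4.4 run, MB/s

              x = np.arange(len(filesystems))
              width = 0.38

              fig, ax = plt.subplots(figsize=(8, 4))
              ax.bar(x - width / 2, old_run, width, label="Linux 4.3")
              ax.bar(x + width / 2, new_run, width, label="Linux 4.4")
              ax.set_xticks(x)
              ax.set_xticklabels(filesystems)
              ax.set_ylabel("Throughput (MB/s)")
              ax.set_title("Same test, same hardware, old vs. new kernel")
              ax.legend()
              plt.tight_layout()
              plt.show()
              ```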

              • #8
                Originally posted by caligula:

                Yes, if the same tests were done on the same system, the deltas would be really nice. It should be quite easy to draw the old and new data on the same diagram.

                Also, regarding switches, it would make sense to test each file system with various switches on/off and pick the best/worst/avg for the main test against the other file systems. Right now we can't really know what the potential of each system is.
                Anyone know of a concise wiki/guide or basic optimization script for picking the best mount options for each file-system? If I just use the switches I'm familiar with, I already know from past experience that people will complain about why I enabled ABC and not XYZ, etc.
                Michael Larabel
                https://www.michaellarabel.com/
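
                Failing a canonical guide, one way around the 'why ABC and not XYZ' complaints would be to sweep a small matrix of option sets per file-system and publish all of them. A minimal sketch; the option sets below are just commonly discussed examples, not a vetted tuning guide:

                ```python
                # Sketch: emit one mount command per (file-system, option set)
                # pair so each combination can be benchmarked in turn.
                # The option sets are illustrative examples only.
                CANDIDATE_OPTIONS = {
                    "ext4":  ["defaults", "noatime", "noatime,data=writeback"],
                    "xfs":   ["defaults", "noatime", "noatime,logbsize=256k"],
                    "btrfs": ["defaults", "noatime,ssd", "noatime,compress=lzo"],
                    "f2fs":  ["defaults", "noatime", "noatime,discard"],
                }

                def mount_commands(device, mountpoint):
                    """Yield a mount command for every candidate option set."""
                    for fs, option_sets in CANDIDATE_OPTIONS.items():
                        for opts in option_sets:
                            yield f"mount -t {fs} -o {opts} {device} {mountpoint}"

                for cmd in mount_commands("/dev/sdb1", "/mnt/bench"):
                    print(cmd)
                ```

                The benchmark harness would then reformat, mount with the next set, and rerun the same test profile before moving on.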

                • #9
                  It would be interesting to see HFS+ benchmarks, just to show how bad HFS+ is.

                  • #10
                    Honestly, I'm quite impressed by the FUSE NTFS performance. I remember when it was new, NTFS performance was a painful 100x slower than ext3.


                    A word of warning with flash drives: they are often optimised to work with their pre-formatted fs, in other words vFAT or ExFAT.

                    You can get rock-solid performance using ext4, but you likely have to profile the flash unit, custom-configure the partition offset, and set up striping for optimal performance.
                    One example is the Samsung EVO+ 32GB microSD card I have: running a 4k random write bench on the pre-formatted vFAT I got about 3.1MB/s, on a straight re-format to ext4 I got about 0.6MB/s, and after using flashbench to determine the flash cell sizes and then aligning the partition and configuring striping correctly on ext4, I got 3.1MB/s 4k random write again.
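
                    The arithmetic behind that alignment step is simple enough to script. A minimal sketch; the erase-block and page sizes below are assumptions for illustration, the real values come from flashbench on your own card:

                    ```python
                    # Sketch: derive partition alignment and ext4 stripe geometry
                    # from flash geometry. The two flash sizes are assumed values;
                    # flashbench on the actual card gives the real ones.
                    ERASE_BLOCK = 4 * 1024 * 1024  # assumed erase block: 4 MiB
                    FLASH_PAGE = 16 * 1024         # assumed flash page: 16 KiB
                    FS_BLOCK = 4096                # ext4 block size
                    SECTOR = 512                   # sector size used by fdisk

                    # Start the partition on an erase-block boundary.
                    start_sector = ERASE_BLOCK // SECTOR
                    print(f"partition start: sector {start_sector}")

                    # stride = flash page in fs blocks,
                    # stripe-width = erase block in fs blocks.
                    stride = FLASH_PAGE // FS_BLOCK
                    stripe_width = ERASE_BLOCK // FS_BLOCK
                    print(f"mkfs.ext4 -b {FS_BLOCK} -E stride={stride},"
                          f"stripe-width={stripe_width} /dev/sdX1")
                    ```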

                    The controllers often also have oddities, such as performing much better in the first 64MB where the FAT tables sit, with performance dropping severely after that.

                    There's lots of useful info from the RPi community on how to configure flash for better performance.
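
                    If anyone wants to check whether their stick really is faster in that first region, a crude probe is to time O_DIRECT writes at a few offsets. A minimal, destructive sketch: it overwrites the device, needs root, and /dev/sdX is a placeholder:

                    ```python
                    # Sketch: crude write-throughput probe at several offsets on
                    # a raw flash device. O_DIRECT bypasses the page cache; mmap
                    # provides the page-aligned buffer O_DIRECT requires.
                    # WARNING: destructive - overwrites data at those offsets.
                    import mmap
                    import os
                    import time

                    DEV = "/dev/sdX"   # placeholder: the flash drive under test
                    CHUNK = 1 << 20    # 1 MiB per write
                    WRITES = 16        # sample 16 MiB per region

                    buf = mmap.mmap(-1, CHUNK)  # anonymous, page-aligned buffer
                    buf.write(os.urandom(CHUNK))

                    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
                    try:
                        for offset_mib in (0, 64, 256, 1024):
                            os.lseek(fd, offset_mib << 20, os.SEEK_SET)
                            start = time.perf_counter()
                            for _ in range(WRITES):
                                os.write(fd, buf)
                            elapsed = time.perf_counter() - start
                            rate = WRITES * CHUNK / elapsed / 1e6
                            print(f"{offset_mib:>5} MiB offset: {rate:.1f} MB/s")
                    finally:
                        os.close(fd)
                    ```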
