EXT4, Btrfs, XFS & NILFS2 HDD File-System Tests On Linux 4.8

  • Phoronix: EXT4, Btrfs, XFS & NILFS2 HDD File-System Tests On Linux 4.8

    Up until running the tests for today's article, I can't remember the last time I touched a hard drive... It's been many months, at least. Nearly all of our tests at Phoronix are run from solid-state storage, but I decided to pick up a new HDD for running some Linux file-system tests on a conventional hard drive for those without an SSD.


  • #2
    Thanks for these, Michael. They are still useful for my four hard drives. Nothing here convinces me to switch off Btrfs, with its compression and snapshots, but still useful.



    • #3
      I appreciate the presence of conclusions.

      That said, I had never heard of NILFS2.
      Wikipedia says: "Using a copy-on-write technique known as "nothing in life is free", NILFS records all data in a continuous log-like format that is only appended to, never overwritten, an approach that is designed to reduce seek times, as well as minimize the kind of data loss that occurs after a crash with conventional file systems."

      It appears to be a log-structured CoW filesystem with little else (apart from the usual stuff).
      Neat, though maybe a bit barebones.
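      To give a sense of what that append-only log buys you, here is a hedged sketch of the nilfs-utils workflow; the device, mountpoints, and checkpoint number below are placeholders, and this needs a real spare block device to try:

```shell
# NILFS2 turns its append-only log into continuous checkpoints.
# /dev/sdb1, /mnt/test, /mnt/snap and cp=2 are illustrative only.
mkfs -t nilfs2 /dev/sdb1
mount -t nilfs2 /dev/sdb1 /mnt/test
lscp                           # list checkpoints created as data is written
mkcp -s                        # promote the latest checkpoint to a snapshot
mount -t nilfs2 -r -o cp=2 /dev/sdb1 /mnt/snap   # mount an old state read-only
```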



      • #4
        I still use hard discs for storage. SSDs may be fast, but there's no overall storage capacity to match. I'm limited by my network speed anyway (1 Gb), and I don't need massive IO to watch a movie. I also stopped running games across the network once SSDs became feasible (size-wise) on a local machine (I used to have a monster fileserver with striped arrays for that, but those were the old days).

        It's good to see different outcomes for different filesystems, so you can tune each partition/disc for your needs, which is what I do: EXT4 for standard stuff, XFS for the movies and what-not. I don't bother with Btrfs compression because most of my capacity is media, which doesn't compress and would burn out my fileserver's wee CPU anyway.

        Certainly a different case for work machines.

        NILFS looks like it could play a few interesting roles, such as scratch partitions for media editing, a games partition for quick load times, or maybe even a temporary load partition: situations where your important data is safely tucked away on other machines and can be quickly restored.



        • #5
          Regarding PgSQL and SQLite and Michael using the out-of-the-box settings:
          CoW filesystems are bad performers for huge files that constantly receive random writes (like VM images and databases).
          Where these files have their own integrity mechanism (the file system running inside the VM, or the journal/log used by the database), it is redundant to also use CoW on them; use the integrity/incremental-replication mechanisms offered by the VM or the DB themselves instead.

          (ZFS reportedly has a mechanism to detect such cases automatically.)
          On Btrfs, one should "chattr +C" the files in such cases.
          (I don't know the proper mechanism in NILFS, F2FS or UDF, though I'm quite sure the last two lack any such mechanism.)

          Actually, in these circumstances, I'm quite impressed that Btrfs didn't do too badly in the PgSQL benchmark, and that NILFS performed that well on SQLite.
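          For reference, a minimal sketch of the chattr trick; the path is hypothetical, and note that the No_COW flag only takes effect for files created after it is set, so apply it to an empty directory that the VM images or DB files will be created in:

```shell
# Disable CoW for a directory that will hold VM images / database files.
# /srv/vm is a placeholder path on a btrfs filesystem.
mkdir -p /srv/vm
chattr +C /srv/vm       # newly created files inside inherit No_COW
lsattr -d /srv/vm       # the 'C' flag should now show for the directory
```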

          Originally posted by stiiixy View Post
          I don't bother with Btrfs compression because most of my capacity is media, which doesn't compress and would burn out my fileserver's wee CPU anyway.
          The main advantages of Btrfs aren't so much compression as:
          - checksums of everything, adding an additional integrity check (beyond whatever RAID1/5/6 you might be using on the underlying partition);
          - CoW, meaning a lot less risk of corruption;
          - CoW's snapshotting, which comes for free and which you can leverage for your backup strategy (i.e. btrfs snapshots instead of the classical "rsync + hardlinks" way of making historical copies).

          The same goes for any other CoW system that has a snapshotting feature (that includes the upcoming XFS, once it works; sorry, that means no for UDF).
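          As a sketch of that snapshot-based backup idea (all paths and snapshot names here are made up; it assumes /home is a btrfs subvolume and /mnt/backup is a mounted btrfs filesystem):

```shell
# Take a read-only snapshot, then send only the delta since the
# previous snapshot to a backup disc. Paths are placeholders.
btrfs subvolume snapshot -r /home /home/.snapshots/home-2016-10-01
btrfs send -p /home/.snapshots/home-2016-09-24 \
    /home/.snapshots/home-2016-10-01 | btrfs receive /mnt/backup
```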

          I use it on my Debian file server (an old 2-core low-power AMD64). So unless you have an even slower Atom, Btrfs is usable in your case.
          (As long as you stick to features of Btrfs that are considered production-ready. Don't attempt btrfs' built-in RAID5/6.)

          If you want to use Btrfs on your server, you'll need to:
          - scrub the drive in a cron job (remember, the whole argument was to use those checksums);
          - run a filtered rebalance from time to time (handling free space is complicated).
          If you're not comfortable rolling your own tools, pick a distro that does it for you (openSUSE provides nice tools).
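          A minimal sketch of that maintenance as a cron fragment; the mountpoint, schedule, and usage thresholds are assumptions, a common starting point rather than gospel:

```shell
# /etc/cron.d/btrfs-maintenance  (example; /data is a placeholder mountpoint)
# Weekly scrub to exercise the checksums:
30 3 * * 0  root  btrfs scrub start -Bq /data
# Monthly filtered rebalance to consolidate mostly-empty chunks:
45 4 1 * *  root  btrfs balance start -dusage=50 -musage=50 /data
```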



          • #6

            The Btrfs fio sequential read result looks suspect; it conflicts with previously published results.

            [chart: the same benchmark on the 4.4 kernel, where Btrfs was OK]

            [chart: the same benchmark across the 4.4 to 4.7 kernels, reporting IOPS, where Btrfs was OK]



            • #7
              Originally posted by DrYak View Post
              CoW systems are bad performers for huge files that get constantly randomly written into (like VMs and databases). [...]
              Hah, late reply, but yeah, I've pretty much come across most of your pointers. I had literally set up a nice PowerEdge with Btrfs RAID6 as these issues started to arise. The first thing to note was the abysmal write speeds. Then corruption issues cropped up, and it was pretty much 'bye-bye' RAID6 mode. A shame really, as I wanted Btrfs to be the counterpart to ZFS. I simply reverted to RAID10. It's still a read/write slog, but for now it's running smoothly, and I will review it again along with Ceph on a Red Hat-based (CentOS) deployment and openSUSE, simply because they both seem the most scalable and I have a spare PowerEdge to play with.

              As for my personal wee server, it WAS running for a long time on a simple Asus C-60M-I (I told you it was wee!) because where I live power is expensive and the temperature on average is HOT, day and night. I was working lots and needed something to just work and smash out a few super-high-def episodes of *insert HBO programme here* when I hit the piss after work. It fit the bill, as I also needed low dBs, considering my apartment had five-metre-high ceilings and sounded like a choral chamber. It's changed up now to something more modern =D

              Anyway, I intend on smashing out a whole bunch of PTS disc benchmarks because, for some reason, I like disc benchies. Will have to sling a few cabbages Michael's way for his effort. Wonder if he'd care for remote access...



              • #8
                For posterity:
                Originally posted by stiiixy View Post
                I had literally set up a nice PowerEdge with Btrfs RAID6 as these issues started to arise.
                As of today, RAID6 still isn't considered production-ready on Btrfs.
                Under perfect conditions (no removing/adding of discs, no crashes) it now more or less works.
                Under crash conditions (the system losing power in the middle of a write, leaving the parity out of date with the actual data, a.k.a. the "write hole"), RAID5/6 isn't safe in Btrfs: the parity isn't checksummed (for obvious technical reasons), and the current recovery routine cannot leverage checksums to detect that the parity is wrong.

                mdadm is what you currently need for stable RAID6.
                (It doesn't have checksums, so it cannot handle the write hole either, but at least it's rock solid.)
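                For reference, a minimal mdadm RAID6 sketch; the device names and disc count are placeholders:

```shell
# Classic mdadm RAID6 over four discs, with a plain filesystem on top.
# /dev/md0 and /dev/sd[b-e]1 are placeholder device names.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]1
mkfs.ext4 /dev/md0
```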


                Before using btrfs' RAID6, one needs to wait until:
                - the write hole is correctly handled (when all disks are present but disagree, btrfs should be able to leverage checksums to know which subset of disks is okay and which contains the corrupted information);
                - the RAID5/6 code has been tested a lot, so the bugs get weeded out;
                - real-world power-loss situations have been tested.
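                The write hole itself is easy to demonstrate with single-byte "discs" and XOR parity. This toy shell sketch (values are arbitrary) shows how stale parity silently corrupts a rebuild:

```shell
# Toy RAID5 write-hole demo: three one-byte data "discs" plus XOR parity.
d0=0x41; d1=0x42; d2=0x43
p=$(( d0 ^ d1 ^ d2 ))                # parity written alongside the data

# While parity matches the data, a lost disc rebuilds correctly:
echo $(( (d1 ^ d2 ^ p) == d0 ))      # prints 1

# Power loss mid-write: d0 reached the platter, the parity update did not.
d0=0x58
# Later disc 1 dies; the rebuild silently returns garbage:
echo $(( (d0 ^ d2 ^ p) == d1 ))      # prints 0
```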

