F2FS With Linux 6.2 Lands Atomic Replace, Per-Block Age-Based Extent Cache

  • F2FS With Linux 6.2 Lands Atomic Replace, Per-Block Age-Based Extent Cache

    Phoronix: F2FS With Linux 6.2 Lands Atomic Replace, Per-Block Age-Based Extent Cache

    Jaegeuk Kim has ushered in the Flash Friendly File-System (F2FS) updates for the in-development Linux 6.2 kernel, which is headlined by two new features for this file-system...


  • #2
    Staying away from F2FS after killing 3 SD cards after just a couple of hours of use each. With the first 2 I thought I was just unlucky, but after the 3rd failure I was sure it couldn't be a coincidence.

    If your SanDisk Extreme card fails and enters read-only mode while using F2FS, please don't replace the card and reuse the same FS. The new card will meet the same fate.

    There are more reports of this on the internet, but it's a bit hard to search for, since you get lots of false positives.

    I can confirm the issue is still present on the latest LTS kernel. Maybe it is just a bad combination of SanDisk cards and the Raspberry Pi SD controller. It's still weird that after all these years it has gone completely unnoticed. If I use ext4, the cards don't fail.
    Last edited by amxfonseca; 15 December 2022, 08:41 AM.



    • #3
      Originally posted by amxfonseca View Post
      If your SanDisk Extreme card fails and enters read-only mode while using F2FS, please don't replace the card and reuse the same FS. The new card will meet the same fate.
      F2FS is designed to spread its writes evenly over the whole volume to get the maximum life out of flash without wear leveling. I can't imagine this having a negative effect on any other device.
      Maybe there was another problem (too much swapping?). Still, the best place for this is a bug report to the F2FS devs. If there is no sensitive data on the SDs, maybe sending them to the devs could help in analysing the problem.
      If I use ext4, the cards don't fail.
      Strange, maybe there was some misalignment with F2FS that led to doubled writes on small files? But for an SD card to fail in a few months it would still need a massive amount of writes. (A quick alignment check is sketched below.)

      It's also possible (but highly unlikely) that you just got 3 bad SD cards in a row.
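
      If someone wants to rule out the alignment theory, it's easy to check. A rough sketch, assuming the card shows up as /dev/mmcblk0 (F2FS allocates in 2 MiB segments):

      ```
      # ask parted whether partition 1 starts on an optimal boundary
      sudo parted /dev/mmcblk0 align-check optimal 1

      # or read the raw start sector; for 2 MiB alignment it should be a
      # multiple of 4096 (4096 sectors * 512 bytes = 2 MiB)
      cat /sys/block/mmcblk0/mmcblk0p1/start
      ```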



      • #4
        Originally posted by Anux View Post
        Strange, maybe there was some misalignment with F2FS that led to doubled writes on small files? But for an SD card to fail in a few months it would still need a massive amount of writes.

        It's also possible (but highly unlikely) that you just got 3 bad SD cards in a row.
        It is really weird. I just install Arch Linux (by bootstrapping a new rootfs from a different machine, because offering a pre-made image is too complicated for Arch Linux), and after running a few Pacman installs/updates the card goes into a read-only mode. The card still accepts write commands, but just ignores them. So the system seems to be running fine (thanks to the page cache), until it starts to get unbearably slow and then locks up, with F2FS printing thousands of kernel errors about filesystem inconsistencies.

        This happened on 32GB/64GB cards with probably less than 2GB or 3GB of total data written to them. Since all 3 cards failed in the same way, after less than a day of light use, I stopped using F2FS. I've never had a failure since.

        The report I read on a Gentoo forum describes the same behaviour, with almost all the cards failing with F2FS and not with a different FS. This must be a super specific combination of F2FS, Raspberry Pi, and SanDisk Extreme cards, even though I had the same issue across different kernel versions and different Pi models (although maybe the Pi 3 and the Zero 2 W use the same chipset). So the probability of it just being a bad card is quite low.

        I don't have any more cards to spare, but maybe I'll get a Samsung card later and test it for science. Since they fail in less than 48h of light use, it is not hard to verify. I am now using a new 128GB SanDisk Extreme, but I won't risk putting F2FS on it, since it is impossible to recover from that failure.

        I still have one of the broken cards, since I didn't bother to ship it back to SanDisk for a replacement. I doubt anyone could collect any useful information about the issue from it. The card is basically a snapshot of an almost clean Arch installation, and everything you write to it just gets discarded. It's a nice write-once, read-many device; too bad you can't choose the moment it enters that mode.



        • #5
          SanDisk uses some non-standard extensions to increase read/write speed in conjunction with their own card readers, but then it shouldn't be dependent on the FS in use.

          I've had a SanDisk Extreme (256GB?) in my camera for 2 years, and every RAW picture is 50 MB, so I write a lot of data to it. It shouldn't be possible to write one to death in 1 day, whatever your FS or OS is messing up there.

          Have you tried overwriting the whole device (not just a partition) with `dd if=/dev/zero` to get it back into a working state?
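
          Something like this, as a rough sketch (destructive; /dev/sdX is a placeholder, double-check the device name with lsblk first):

          ```
          # overwrite the entire card, then force everything out to the device
          sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress conv=fsync
          sync
          ```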



          • #6
            Originally posted by amxfonseca View Post

            It is really weird. I just install Arch Linux (by bootstrapping a new rootfs from a different machine, because offering a pre-made image is too complicated for Arch Linux), and after running a few Pacman installs/updates the card goes into a read-only mode. The card still accepts write commands, but just ignores them. So the system seems to be running fine (thanks to the page cache), until it starts to get unbearably slow and then locks up, with F2FS printing thousands of kernel errors about filesystem inconsistencies.

            This happened on 32GB/64GB cards with probably less than 2GB or 3GB of total data written to them. Since all 3 cards failed in the same way, after less than a day of light use, I stopped using F2FS. I've never had a failure since.

            The report I read on a Gentoo forum describes the same behaviour, with almost all the cards failing with F2FS and not with a different FS. This must be a super specific combination of F2FS, Raspberry Pi, and SanDisk Extreme cards, even though I had the same issue across different kernel versions and different Pi models (although maybe the Pi 3 and the Zero 2 W use the same chipset). So the probability of it just being a bad card is quite low.

            I don't have any more cards to spare, but maybe I'll get a Samsung card later and test it for science. Since they fail in less than 48h of light use, it is not hard to verify. I am now using a new 128GB SanDisk Extreme, but I won't risk putting F2FS on it, since it is impossible to recover from that failure.

            I still have one of the broken cards, since I didn't bother to ship it back to SanDisk for a replacement. I doubt anyone could collect any useful information about the issue from it. The card is basically a snapshot of an almost clean Arch installation, and everything you write to it just gets discarded. It's a nice write-once, read-many device; too bad you can't choose the moment it enters that mode.
            Did you use any non-default options with mkfs.f2fs or in your fstab?

            Another issue with F2FS is that it's (IMO) poorly configured by default.
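
            As a sketch of what I mean (whether these exact features and mount options are available depends on your f2fs-tools and kernel versions):

            ```
            # format with extra attributes and checksums instead of bare defaults
            mkfs.f2fs -f -O extra_attr,inode_checksum,sb_checksum /dev/sdX1

            # example fstab entry; atgc and gc_merge need a fairly recent kernel
            /dev/sdX1  /  f2fs  defaults,lazytime,atgc,gc_merge  0  0
            ```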



            • #7
              Originally posted by Anux View Post
              SanDisk uses some non-standard extensions to increase read/write speed in conjunction with their own card readers, but then it shouldn't be dependent on the FS in use.

              True, though I've never used their readers. I mean, I've always used Extremes in my Switch and other SBCs, always super reliable.

              This issue just seems to be specific to F2FS (or possibly some Arch Linux issue), but since this already happened on different kernel versions (and even different kernel architectures: armv7 and aarch64), I don't think it is distro specific; I saw the same report, with the same kernel error messages, on the Gentoo forums.

              I think it could even be caused by some weird bug in the SanDisk controller itself that only occurs with a specific write pattern F2FS is happy to trigger. SD cards have a primitive wear-leveling algorithm, so maybe the log-structured nature of F2FS breaks some of the controller's assumptions.
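
              If I ever dig into it, capturing the write pattern the card actually sees would be the way to test that theory. A rough sketch, assuming the card is /dev/mmcblk0:

              ```
              # record 60 seconds of block-level I/O on the card, then pretty-print it
              sudo blktrace -d /dev/mmcblk0 -w 60 -o f2fs_trace
              blkparse -i f2fs_trace | less
              ```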

              I've tried zeroing the card (using dd) with no success. Even clearing the partition table is useless: as soon as I write the new table and remount it, the original partitions and data just pop back up.
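
              The silent-discard behaviour itself is easy to demonstrate. A sketch, with /dev/sdX standing in for the broken card (the direct flags bypass the page cache so you read what the card actually stored):

              ```
              # write a random 1 MiB block straight to the device, then read it back
              dd if=/dev/urandom of=/tmp/probe.bin bs=1M count=1
              sudo dd if=/tmp/probe.bin of=/dev/sdX bs=1M count=1 oflag=direct conv=fsync
              sudo dd if=/dev/sdX of=/tmp/readback.bin bs=1M count=1 iflag=direct
              cmp /tmp/probe.bin /tmp/readback.bin || echo "card silently discarded the write"
              ```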

              Originally posted by brucethemoose View Post

              Did you use any non-default options with mkfs.f2fs or in your fstab?

              Another issue with F2FS is that it's (IMO) poorly configured by default.
              Initially I just went with the defaults. But on the latest victim I used the `-O extra_attr,inode_checksum,sb_checksum` recommendation from the Arch wiki, and also mounted it with `lazytime`; before that it was pure defaults.

              I really need to try a Samsung card to find out where the issue lies. It can't just be bad luck, but at the same time I think this kind of issue would already have been caught if it were that common. So maybe it is just a big batch of SanDisk cards with some nasty firmware bug.



              • #8
                I would be happy to hear any news about your problem, because I wanted to adopt F2FS for my Arch Linux Pi myself (on a SanDisk SD). Now I'm a little bit unsure...



                • #9
                  Originally posted by amxfonseca View Post
                  Staying away from F2FS after killing 3 SD cards after just a couple of hours of use each. With the first 2 I thought I was just unlucky, but after the 3rd failure I was sure it couldn't be a coincidence.

                  If your SanDisk Extreme card fails and enters read-only mode while using F2FS, please don't replace the card and reuse the same FS. The new card will meet the same fate.

                  There are more reports of this on the internet, but it's a bit hard to search for, since you get lots of false positives.

                  I can confirm the issue is still present on the latest LTS kernel. Maybe it is just a bad combination of SanDisk cards and the Raspberry Pi SD controller. It's still weird that after all these years it has gone completely unnoticed. If I use ext4, the cards don't fail.
                  Sounds like a Linux issue where it's put the SSDs into a frozen state, and not something specific to F2FS. I had the same problem with cheap SanDisk SATA SSDs, but I was using ext4, not F2FS.

                  This happened on Manjaro, which is an Arch derivative.
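
                  For what it's worth, on SATA drives the frozen security state can be checked with hdparm (an ATA feature, so it doesn't apply to SD cards; /dev/sda is a placeholder):

                  ```
                  # look for "frozen" / "not frozen" in the Security section
                  sudo hdparm -I /dev/sda | grep -i frozen
                  ```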



                  • #10
                    I've used F2FS on many SD cards (including SanDisk) and on SSDs for around 4 years; they work fine.

