
Thread: SUSE Enterprise Considers Btrfs Production Ready

  1. #1
    Join Date
    Jan 2007
    Posts
    13,403

    Default SUSE Enterprise Considers Btrfs Production Ready

    Phoronix: SUSE Enterprise Considers Btrfs Production Ready

    SUSE is now comfortable officially supporting the Btrfs file-system and considering it a "production ready" Linux file-system...

    http://www.phoronix.com/vr.php?view=MTI0Nzc

  2. #2
    Join Date
    Sep 2008
    Posts
    89

    Default

    Meanwhile, if you visit Oracle's btrfs page, you're greeted with this bolded information:

    The Btrfs disk format is not yet finalized, and it currently does not handle disk full conditions at all. Things are under heavy development, and Btrfs is not suitable for any uses other than benchmarking and review.
    I suppose that's outdated, but that's bad enough.

  3. #3
    Join Date
    Sep 2011
    Posts
    40

    Default

    On the same page:
    The Btrfs pages have moved to http://btrfs.wiki.kernel.org, please visit the new site to find documentation, downloads and more details on Btrfs.

  4. #4
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,394

    Default

    I suppose it's about time. The btrfsck tool is still a bit hit-and-miss as to whether it introduces new issues while fixing existing ones, but then you are not likely to hit any real issues unless you have a faulty RAM module (which I had, unfortunately, but Btrfs remained bootable and all my personal data stayed intact even under those circumstances). So while it's not the default option, having it as an alternative is good enough now.

  5. #5
    Join Date
    Feb 2011
    Posts
    39

    Default

    I've been using btrfs for a few years now on all my systems (server, desktop, laptop); I use it on dm-crypted RAID 5, and I use it as my root FS. And I've never had any data loss (whereas ext4, on an rc kernel but already considered stable, ate some of my data). Not even when it was really experimental and really didn't have any out-of-disk handling - I hit that, of course :-)
    So yeah, it's not statistically valid, but for me it is production ready.

  6. #6
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,394

    Default

    Yea, same here. I've used it since the very first version of openSUSE that allowed selecting it at install time, and I've had only minor difficulties that don't depend on the FS to begin with. Though I really wish YaST had LVM-like support for subvolumes; right now they are underutilised...
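    For anyone who hasn't tried them, subvolumes can already be managed by hand with the btrfs-progs tools even without YaST support; a minimal sketch (the mount point and names here are made up):

    ```shell
    # Create a subvolume inside a mounted btrfs filesystem
    btrfs subvolume create /mnt/data

    # Take a cheap copy-on-write snapshot of it (-r makes it read-only)
    btrfs subvolume snapshot -r /mnt/data /mnt/data-snap

    # List all subvolumes and snapshots under the mount
    btrfs subvolume list /mnt
    ```

    Snapshots are one of the main reasons subvolumes feel underutilised: they make pre-update rollback points nearly free.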

  7. #7
    Join Date
    Dec 2012
    Posts
    4

    Cool

    Quote Originally Posted by mazumoto View Post
    I've been using btrfs for a few years now on all my systems (server, desktop, laptop); I use it on dm-crypted RAID 5, and I use it as my root FS. And I've never had any data loss (whereas ext4, on an rc kernel but already considered stable, ate some of my data). Not even when it was really experimental and really didn't have any out-of-disk handling - I hit that, of course :-)
    So yeah, it's not statistically valid, but for me it is production ready.
    Anecdotal experiences can be deceiving, that's true. That being said, when I tested btrfs out a while ago, I experienced catastrophic data loss. A combination of LVM and LUKS killed it, or at least that was my hypothesis at the time. It was perhaps too soon to expect it to survive such scenarios.

    I was running tests and the data was there to be shredded, so no actual loss. I plan to do another round of testing sometime in the 3.7 - 3.8 period... Is it ready to be used in production? I couldn't tell right now. Some distros seem to think it is; I'm not that convinced. With good, frequently made, and actually restorable backups... and with enough time to do the restore... why not. I'm just afraid that people will start using it without backups. Then again, fsck and related tools could probably use more suck... ehm... testers.

  8. #8
    Join Date
    Nov 2011
    Posts
    347

    Default

    If MS pulled s*** like this, the fanboys would've jumped on them like a rabid pack of wolves. But hey, we are already releasing alpha-grade turdballs like openSUSE, so we might as well have an alpha-grade filesystem to match. The fact that Ubuntu already has a huge server market share proves my point that you don't need a rock-solid distro to power a LAMP stack.

  9. #9
    Join Date
    Feb 2009
    Posts
    360

    Default

    Stop it. Learn to compose a solid argument.

    Btrfs has worked fine for me as well, despite what a lot of people on the internet (who've never used it) like to say. Outside of the limitations listed on the kernel site, it's a solid FS already. Probably not -quite- production ready, but it's stable enough to be. The most important thing is having an up-to-date kernel, so it's not recommended on your average Ubuntu spin unless you update it manually.

    I've tested it extensively on multiple servers, two personal workstations, and my home machine. It's still slow for certain types of file operations, but otherwise solid. One of the things I experimented with was the btrfs "RAID 1" implementation, which is not exactly RAID 1. I used a medium-sized array of failing hard disks - all years-old disks with dozens of bad sectors. My main conclusion is that the file-aware stripe duplication works amazingly well. The built-in scrub functionality is also really damn amazing, and pretty much negates the need for an fsck.

    One feature that it doesn't have yet, and that would keep me from using it in production, is automatically removing failing devices from an array. I believe this is on the todo list, but nobody has tackled it yet. When a hard disk hits some unrecoverable sectors, btrfs works amazingly well at recovering from duplicate stripes, and it does this seamlessly. The big issue is that if a disk is REALLY failing - like literally self-destructing - then btrfs will keep hitting bad sector after bad sector, recovering as it goes. This works great when just a few sectors die, but not when half a disk does. This had the effect of slowing my server down to a crawl, making it essentially unusable. It would be much better to have the disk dropped from the array at that point.
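    For reference, the scrub I mentioned and the manual workaround for a dying disk look roughly like this with btrfs-progs (the device names and mount point are assumptions):

    ```shell
    # Walk all data and metadata, verify checksums, and repair any bad
    # blocks from the good mirror copy on the other device
    btrfs scrub start /mnt
    btrfs scrub status /mnt

    # Until automatic eviction exists, a failing disk has to be swapped
    # out by hand; replace copies from the old device where readable and
    # rebuilds the rest from the remaining mirror
    btrfs replace start /dev/sdb /dev/sdd /mnt
    btrfs replace status /mnt
    ```

    The replace has to be kicked off by an admin, which is exactly the gap: nothing stops the array from grinding through a self-destructing disk in the meantime.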

  10. #10
    Join Date
    Dec 2008
    Posts
    145

    Default

    Quote Originally Posted by benmoran View Post
    Stop it. Learn to compose a solid argument.
    Right back at you.

    For something that has a relatively low failure rate, like say between 0.1 and 10%, anecdotes about it working are almost useless, since by definition, it works for the vast majority of cases. In contrast, anecdotes about it NOT working can be somewhat useful, since at least you can hear about some possible failure modes, and if there are enough failure reports then you may be able to conclude that the failure rate is higher than previously hypothesized.

    I think that btrfs definitely falls into the latter category. For a stable filesystem, I'd like to see the annual failure rate well below 0.1%. But from the reports that I have seen, and my guess at how many systems are using btrfs regularly, I think the failure rate is above 0.1%.
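    As a back-of-the-envelope check on that threshold, the "rule of three" says that zero failures observed in n independent systems puts the 95% upper confidence bound on the per-system failure rate at about 3/n. A quick sketch (the sample size is made up):

    ```shell
    # Rule of three: with 0 failures in n system-years, the 95% upper
    # bound on the annual failure rate is roughly 3/n. So to be
    # confident the rate is below 0.1% (0.001), you'd need about
    # 3/0.001 = 3000 failure-free system-years of reports.
    n=3000
    awk -v n="$n" 'BEGIN { printf "95%% upper bound with 0 failures in %d systems: %.3f%%\n", n, 300/n }'
    ```

    Which is why scattered anecdotes, positive or negative, can't settle whether btrfs clears a 0.1% bar.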
