
Thread: ZFS File-System Tests On The Linux 3.10 Kernel

  1. #21
    Join Date
    Oct 2012
    Posts
    299

    Default

    Quote Originally Posted by smitty3268 View Post
    You mean like this ZFS on Linux test? I guess we should let you know now.
    Reading comprehension is a rare gift these days...

  2. #22
    Join Date
    Feb 2009
    Posts
    377

    Default

    Quote Originally Posted by ZeroPointEnergy View Post
    Doesn't a UUID reference a partition? If that disk is gone, you have to edit your configuration and mount the other device in the RAID....
    No, the UUID is per file system. Any disk in the array can be mounted via the same UUID.
    For example, here is a 2-disk RAID1 btrfs system (a single partition on each disk). It would work just the same with 20 disks:

    Code:
    $ blkid /dev/sdb1
    /dev/sdb1: UUID="04bf1179-a858-4ac9-935b-9279722f6b4a" UUID_SUB="f0983cb3-eb7e-45c3-b086-e58baa798d45" TYPE="btrfs"
    $ blkid /dev/sda1
    /dev/sda1: UUID="04bf1179-a858-4ac9-935b-9279722f6b4a" UUID_SUB="5062890f-f8fe-46da-b90c-68c2bf096403" TYPE="btrfs"
    And the fstab, with separate subvolumes for root and home:
    Code:
    UUID=04bf1179-a858-4ac9-935b-9279722f6b4a	    /       btrfs   defaults,compress=lzo,autodefrag,subvol=@       	0 0
    UUID=04bf1179-a858-4ac9-935b-9279722f6b4a	    /home   btrfs   defaults,compress=lzo,autodefrag,subvol=@home	0 0
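    The same setup can also be mounted by hand using the shared filesystem UUID. A rough sketch (device names and UUID taken from the blkid output above; the mount point /mnt is just an example):

    Code:
    # Userspace scan so the kernel learns which devices belong to
    # this multi-device btrfs filesystem (normally done by the initramfs/udev)
    btrfs device scan

    # Mount the root subvolume by the shared filesystem UUID.
    # Either member disk (sda1 or sdb1) satisfies this lookup,
    # since both carry the same filesystem UUID.
    mount -U 04bf1179-a858-4ac9-935b-9279722f6b4a -o subvol=@ /mnt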

  3. #23

    Default

    Quote Originally Posted by ZeroPointEnergy View Post
    Doesn't a UUID reference a partition? If that disk is gone, you have to edit your configuration and mount the other device in the RAID. With ZFS I can simply reference the pool by name, and I don't have to care which disks are involved or available, as long as there are enough to assemble the RAID.
    That's what UUIDs and labels are for, so it works the way benmoran has already explained. The one caveat is that btrfs needs a device scan from userspace (e.g. in the initramfs), or you must list all devices of the RAID in the mount options: https://btrfs.wiki.kernel.org/index.....2Fetc.2Ffstab
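    The fallback mentioned above (listing every member device instead of relying on a userspace scan) looks roughly like this in fstab. The UUID and device names are taken from benmoran's example earlier in the thread; treat it as a sketch, not a verified config:

    Code:
    # /etc/fstab -- multi-device btrfs without a userspace device scan:
    # name every member of the array explicitly via device= options
    UUID=04bf1179-a858-4ac9-935b-9279722f6b4a  /  btrfs  defaults,device=/dev/sda1,device=/dev/sdb1,subvol=@  0 0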

  4. #24

    Default

    Quote Originally Posted by xterminator View Post
    I love BTRFS. I've used both: btrfs on Fedora and ZFS on FreeBSD.

    ZFS is horrible (slow, difficult to configure, no accurate documentation), and it's even worse on FreeBSD. So much so that it seems like FreeBSD was never meant to use ZFS. Worse, the FreeBSD forum guys are real douchebags (no offense). I tried getting help from them, but all I received was "idiot", "Linux Loser", "STFU, GTFO & RTFM", etc...

    For BTRFS, it's the complete opposite. Sure, it's easier to corrupt the file system and lose data, but you have to remember that BTRFS is still in development, and even so it's doing really well. It's almost production-ready, really.
    From my experience, this applies to both filesystems. For both I had to search for proper documentation, and both I was able to corrupt easily and had to recreate, because there was no working fsck. Especially as ZFS still claims not to need one (SGI made the same claim about XFS a long time ago).

    ZFS is the filesystem of choice for Solaris and will stay that way. Btrfs seems to be the FS for Linux in the future; it is newer and suits block devices better than ZFS.
