
Thread: ZFS File-System Tests On The Linux 3.10 Kernel

  1. #11

    Quote Originally Posted by mercutio View Post
    lvm's snapshotting isn't great
    What's wrong with it? Not trolling, just curious.

  2. #12

    Btrfs may not be as polished yet, but as of the last few kernel releases it's pretty much ready to go IMO. It also has the advantage of superior handling of block devices and volumes.

  3. #13

    Quote Originally Posted by Chewi View Post
    What's wrong with it? Not trolling, just curious.
    If you have a snapshotted volume, every write to the base volume is duplicated for every snapshot. So if you have 4 snapshots of the base volume, the disk sees 5 times as many writes. You can guess what that does to performance.

    If you run out of space in the snapshot, the whole snapshot is lost forever, so if you actually want to keep it for a longer time, you need to provision at least as much space as the base volume has.

    (There were also other problems, but I recall hearing that those have been fixed.)

    Despite all these problems I still use LVM snapshots, but only for very specific tasks and under close supervision. Btrfs snapshots are "plug and play" in comparison.
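
    For illustration, this is roughly how I size them when I do use them (the volume group "vg0" and the 100G origin LV "data" are made-up names):
    Code:
    # Give the snapshot as much COW space as the origin so it cannot fill up and be invalidated
    lvcreate --snapshot --name data-snap --size 100G /dev/vg0/data

    # Watch how full the snapshot is (the Data% column); remove it before it reaches 100%
    lvs vg0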

  4. #14

    The situation isn't really ideal at the moment.

    Ext4 is a great, fast filesystem, but it is missing the data integrity checks and the self-healing mechanism of ZFS. I think that's a must-have today.
    ZFS is rock stable and brings everything you want from a modern filesystem, but it is slow as hell on Linux and development is pretty much dead.
    BTRFS brings most of the features ZFS has and some more I really like, such as the automatic reallocation of hot data. However, the last time I used it, it seemed not ready for production.

    I'm aware that which filesystem you use heavily depends on your use cases. I currently run ZFS on all my setups because stability and integrity are more important to me than performance. I would switch back to BTRFS if the problems I faced last time have been fixed by now. I'll have to run another test soon.

  5. #15

    Quote Originally Posted by ZeroPointEnergy View Post
    Ext4 is a great, fast filesystem, but it is missing the data integrity checks and the self-healing mechanism of ZFS. I think that's a must-have today.
    Self-healing as in recreating the filesystem from scratch? Or has this been fixed in ZFS now?

    ZFS is rock stable and brings everything you want from a modern filesystem, but it is slow as hell on Linux and development is pretty much dead.
    BTRFS brings most of the features ZFS has and some more I really like, such as the automatic reallocation of hot data. However, the last time I used it, it seemed not ready for production.
    A filesystem where you can't delete files when it is full is not what I would call rock stable. And this was observed on Solaris, not OpenSolaris or Linux. I'm amused that the ZFS hype is still around.

  6. #16

    Quote Originally Posted by PuckPoltergeist View Post
    Self-healing as in recreating the filesystem from scratch? Or has this been fixed in ZFS now?

    A filesystem where you can't delete files when it is full is not what I would call rock stable. And this was observed on Solaris, not OpenSolaris or Linux. I'm amused that the ZFS hype is still around.
    What are you talking about? Can you link some more information? That would be really helpful. This has nothing to do with hype; I just emphasize data integrity, and I can't possibly be aware of every last bug. And what alternative is there? The last time I tried BTRFS you could not even mount by label from GRUB, so if a disk fails in a RAID you can't even boot anymore. That's not production ready for me, and I don't even have to read a bug tracker to notice that.

  7. #17

    First: make sure the I/O scheduler is set to noop.
    Second: turn off readahead.

    The second one delivered an enormous performance boost.
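
    Roughly, that means something like this (sdb is just an example device; the prefetch knob is the ZoL module parameter as far as I remember):
    Code:
    # Switch the disks backing the pool to the noop scheduler
    echo noop > /sys/block/sdb/queue/scheduler

    # Disable block-device readahead
    blockdev --setra 0 /dev/sdb

    # ZoL also has its own prefetcher, which can be disabled via a module parameter
    echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable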

    But hey, Phoronix benchmarked ext3 with barriers off because that was the 'default' - and it was the default to look good in benchmarks...

  8. #18

    By default, fs_mark writes a bunch of zeros to a file in 16k chunks, then calls fsync followed by close, followed by a mkdir call.
    Code:
    [pid 13710] write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 16384) = 16384 <0.000020>
    [pid 13710] fsync(5)                    = 0 <0.033889>
    [pid 13710] close(5)                    = 0 <0.000005>
    [pid 13710] mkdir("./", 0777)           = -1 EEXIST (File exists) <0.000005>
    From my observations, fsync is slightly more expensive on ZFS, and this is where you see the hit in the fs_mark benchmarks. With "sane/real-world" amounts of fsync calls and possibly a few other tweaks, ZoL is an extremely fast, stable, feature-rich, production-ready file system. Many people are using it with Linux and having great success.
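
    You can get a feel for that cost outside of fs_mark with a crude loop of small, individually fsync'd writes (the directory and counts below are made up):
    Code:
    # Approximate fs_mark's per-file write+fsync pattern; conv=fsync makes dd
    # call fsync() on the output file before it exits
    cd /tank/fsmark-test
    for i in $(seq 1 100); do
        dd if=/dev/zero of=file$i bs=16k count=1 conv=fsync 2>/dev/null
    done
    Run the same loop without conv=fsync and the difference is basically the fsync overhead.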

    A 5-disk raidz1 pool:

    Read Sample:
    Code:
    dd if=sample1.mkv of=/dev/null  bs=1M
    4085+1 records in
    4085+1 records out
    4283797121 bytes (4.3 GB) copied, 15.3844 s, 278 MB/s
    Write Sample (read from SSD):
    Code:
    time dd if=/root/sample2.mkv of=test bs=1M; time sync;
    9428+1 records in
    9428+1 records out
    9886602935 bytes (9.9 GB) copied, 35.6332 s, 277 MB/s
    
    real	0m35.635s
    user	0m0.010s
    sys	0m2.666s
    
    real	0m2.665s
    user	0m0.000s
    sys	0m0.077s

  9. #19

    Quote Originally Posted by ZeroPointEnergy View Post
    What are you talking about? Can you link some more information? That would be really helpful.
    As for recreating the filesystem, that's from the OpenSolaris forum. There were enough postings about damaged filesystems where three, four or five suggestions were made, and the last one was to recreate the filesystem and restore from backup. For me, that's not production ready. Okay, it was OpenSolaris in these cases.
    As for not being able to delete files on a full filesystem, that was an in-house problem.

    This has nothing to do with hype; I just emphasize data integrity, and I can't possibly be aware of every last bug. And what alternative is there? The last time I tried BTRFS you could not even mount by label from GRUB, so if a disk fails in a RAID you can't even boot anymore. That's not production ready for me, and I don't even have to read a bug tracker to notice that.
    I didn't try by label, but UUID worked for me. Label should work too. If it doesn't, it's a bug that needs to be reported.
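
    For reference, both are just the usual mount syntax (the label and the UUID below are made up):
    Code:
    # Mount a btrfs filesystem by label or by UUID (values are examples)
    mount -t btrfs LABEL=data /mnt
    mount -t btrfs UUID=01234567-89ab-cdef-0123-456789abcdef /mnt

    # With a multi-device filesystem, you may need to register the members first
    btrfs device scan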

  10. #20

    Quote Originally Posted by PuckPoltergeist View Post
    I didn't try by label, but UUID worked for me. Label should work too. If it doesn't, it's a bug that needs to be reported.
    Doesn't a UUID reference a partition? If that disk is gone, you have to edit your configuration and mount the other device in the RAID. With ZFS I can simply reference the pool by name, and I don't have to care which disks are involved or available, as long as there are enough to assemble the RAID. So far I have not found out how to achieve this with BTRFS, and the wiki doesn't help much here. I don't consider this some strange use case; it's pretty much the first thing you want to do if you use a RAID.
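
    For example (the pool name "tank" is made up), a ZFS pool imports and mounts by name no matter which member disks happen to be present:
    Code:
    # Import the pool by name; ZFS finds whichever member disks are available
    zpool import tank

    # See which disks are present, missing or degraded
    zpool status tank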
