
Thread: ZFS File-System For Linux Is Still Around

  1. #11
    Join Date
    Jan 2011
    Posts
    14

    Default

    Quote Originally Posted by kraftman View Post
    Don't you think this reduces interoperability?
    Yes. But even if Btrfs doesn't die, Btrfs and its GPL licence put other *nix systems in exactly the same position ZFS is in on Linux: they can use it, but they can't distribute it with their kernel. That isn't any better for interoperability.



    What stops them from releasing ZFS under the BSD license? The GPL is there for good reasons, and it's the best license when you want to compete with other projects. However, its restrictions have nothing to do with this thread or with Oracle's strange behavior.
    One of the disadvantages of the BSD license for users is that it doesn't grant you a patent licence on the code, as e.g. ZFS's CDDL does.

  2. #12

    Default

    Quote Originally Posted by dacha View Post
    Yes. But even if Btrfs doesn't die, Btrfs and its GPL licence put other *nix systems in exactly the same position ZFS is in on Linux: they can use it, but they can't distribute it with their kernel. That isn't any better for interoperability.
    What you said is true, but now that ZFS is closed source, the chance of Btrfs being killed off is much lower.

  3. #13
    Join Date
    Sep 2006
    Posts
    714

    Default

    If you want to use ZFS just use BSD.

    The only reason the Lawrence Livermore people want to use ZFS on Linux is that they hit the limits of Ext4-based storage for their Lustre backends.

  4. #14
    Join Date
    Sep 2006
    Posts
    714

    Default

    Also, nowadays, because of the move back to closed source, ZFS has two major forks:
    1. Solaris ZFS
    2. Open Source ZFS

    Solaris ZFS can read and import Open Source ZFS pools, but the reverse is not true. ZFS is backward compatible but not forward compatible, and Solaris ZFS has new features that Open Source ZFS does not have.
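A sketch of what that pool-version asymmetry looks like in practice (device names are placeholders and the exact version numbers and error text are illustrative, not from the poster):

```shell
# On Solaris 11: a newly created pool uses the newest on-disk version
zpool create tank c0t0d0

# On an Open Source ZFS system, whose support stops at an older version:
zpool import tank
# -> "cannot import 'tank': pool is formatted using an unsupported ZFS version"

# Creating the pool pinned to the last shared version keeps it portable:
zpool create -o version=28 tank c0t0d0
```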

    Yes. But even if Btrfs doesn't die, Btrfs and its GPL licence puts other *nix systems in exactly the same position as ZFS is on Linux: they can use it, but they can't distribute it with their kernel. This isn't any better for interoperability.
    Well.
    A) Oracle can't close-source Btrfs like they did with ZFS. The reason is that while they originally sponsored the development and hosted the project websites, they do not own the copyrights.

    B) It's a Linux file system that shares code extensively with the Linux VFS, so it's a derivative work and has to be licensed under the GPL.

    C) Portable file systems were never much of a priority for anybody. Unix systems, including the BSDs, used UFS/FFS fairly universally; even OS X supported it. However, each tended to introduce subtle changes and assumptions, so that even though they shared a common code base, portability was undermined.

  5. #15
    Join Date
    Sep 2011
    Posts
    2

    Default

    Quote Originally Posted by drag View Post
    Also nowadays, because of going back to closed source, ZFS has two major forks:
    1. Solaris ZFS
    2. Open Source ZFS

    Solaris ZFS can read and import Open Source ZFS, but the reverse is not true. ZFS is backwards compatible, but not forward compatible and the Solaris ZFS has new features that the Open Source ZFS does not have.
    It also works the other way. Please see Fork Yeah! The Rise and Development of illumos and ZFS Feature Flags. The first video in particular mentions (around the 44-minute mark) that key ZFS developers quit Oracle and are now contributing to the open source version.

  6. #16
    Join Date
    Dec 2007
    Posts
    146

    Default

    Quote Originally Posted by joffe View Post
    However, with FreeBSD, OpenIndiana, and Solaris 11 around - do you really want to?
    I've got a 5-point-something terabyte raidz array in my HTPC. I've had it running for well over a year now with few issues. The data integrity has been perfect; however, there is an annoying bug that causes the system to hang from time to time when performing small write operations on one of the files over Samba.

    In theory, for reliable software RAID 5, raidz can't be beat. I think practice is still catching up to theory, but I hope they keep working on it.
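For reference, a minimal raidz setup plus the scrub routine behind those integrity guarantees (device names are examples, not the poster's hardware):

```shell
# A 4-disk raidz pool: single parity, analogous to RAID 5
zpool create tank raidz sda sdb sdc sdd

# A scrub walks every block, verifies its checksum, and repairs
# from parity when a read comes back corrupted:
zpool scrub tank
zpool status tank   # shows scrub progress and any checksum errors
```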

  7. #17
    Join Date
    Dec 2007
    Posts
    146

    Default

    Quote Originally Posted by timofonic View Post
    If ZFS wants to survive outside Solaris and its open source forks, some form of dual licensing must happen so it can be included in the vanilla kernel.

    Other than that, this is like the fanatics wanting to get Reiser4 into Linux 3.x. It's just a pipe dream that's going nowhere.
    I disagree. ReiserFS was never that great a filesystem implementation to begin with. In the 13 years I've been running Linux on various machines, the only time I've lost data to filesystem corruption was when using Reiser, and it happened twice within a year.

  8. #18
    Join Date
    Jan 2009
    Posts
    1,439

    Default snapraid

    Quote Originally Posted by timofonic View Post
    If ZFS wants to survive outside Solaris and its open source forks, some form of dual licensing must happen so it can be included in the vanilla kernel.

    Other than that, this is like the fanatics wanting to get Reiser4 into Linux 3.x. It's just a pipe dream that's going nowhere.

    Please be more realistic; even HAMMER2 is a more viable option once Matt Dillon agrees on some kind of interoperability between non-BSD UNIX-like systems.

    But what about Btrfs?

    What about distributed file systems? CRFS, FhGFS, Tahoe-LAFS, Ceph, Lustre, MooseFS, GFS2, OCFS, OneFS, XtreemFS, GlusterFS, HAMMER2 (seems it will have those features), pNFS, AFS.

    What I mean is not just a new, scalable, proper and efficient network filesystem, but one with RAID-like capabilities too. That could let commodity hardware provide cheap RAID solutions and avoid data loss.



    So what about an article about that instead of beating a dead horse like ZFS? Despite all the past hype, it reduces interoperability between all UNIXes (and non-UNIXes).
    The best I've found is snapraid. It is similar to unRAID except it doesn't do realtime RAID. Using snapraid with mhddfs lets you "JBOD" a bunch of disks together and designate another disk or two for parity, BUT it doesn't have the array limitations of RAID, so there's no risk of losing all your data, and it does integrity checks, so no bit rot. The developer is working on adding a third parity disk, based on ZFS code, but it's not there yet.
    It's really a fantastic and well engineered project.
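For anyone curious, a minimal sketch of such a setup (all paths and disk names here are examples, not from the poster's system):

```shell
# /etc/snapraid.conf
# parity  - file on a dedicated disk holding the parity data
# content - list of checksummed files; keep copies on several disks
# data    - the disks being protected
#
#   parity /mnt/parity1/snapraid.parity
#   content /var/snapraid/snapraid.content
#   content /mnt/disk1/snapraid.content
#   data d1 /mnt/disk1
#   data d2 /mnt/disk2

# Then, on a schedule (snapraid is snapshot-based, not realtime):
snapraid sync    # bring parity up to date with the current data
snapraid scrub   # re-verify checksums to catch bit rot
```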

  9. #19

    Default

    Quote Originally Posted by timofonic View Post
    So what about an article about that instead of beating a dead horse like ZFS? Despite all the past hype, it reduces interoperability between all UNIXes (and non-UNIXes).
    If anyone's reducing interoperability between all UNIXes, it's Linux with its restrictive GPL; the BSDs and illumos do not have these issues.

    Dead horse? Please ... ZFS thrives, both on illumos and the BSDs. You can even use Boot Environments on FreeBSD now: http://forums.freebsd.org/showthread.php?t=31662
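For the curious, FreeBSD boot environments are managed with the beadm tool (the BE name below is an example):

```shell
beadm list                  # show existing boot environments
beadm create pre-upgrade    # clone the current ZFS root as a new BE
beadm activate pre-upgrade  # boot into it on the next reboot
# If an upgrade goes wrong, activate the previous BE and reboot to roll back.
```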

    Quote Originally Posted by linuxoid View Post
    The other thing I've heard about ZFS is that it's a RAM hog. You need at least 2GB just to get going with it. And if I play games or just want a good desktop experience, isn't low latency essential?
    It depends what you want to do with ZFS. It can work with 512 MB RAM (I've used that size many times in virtual machines) and also with 512 GB RAM (for serious SAN work).

    ZFS can use all your memory if you allow it to, but you can limit the ARC to whatever size you want and ZFS will stop there. For example, you can 'sacrifice' 256 MB for the ZFS ARC (cache) and it will not take more.
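A sketch of how that cap is set, assuming ZFS on Linux's module parameter and FreeBSD's loader tunable (values shown limit the ARC to 256 MiB):

```shell
# ZFS on Linux: set the cap at runtime (256 MiB = 268435456 bytes)
echo 268435456 > /sys/module/zfs/parameters/zfs_arc_max

# Or persistently, in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_arc_max=268435456

# FreeBSD equivalent, in /boot/loader.conf:
#   vfs.zfs.arc_max="256M"
```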

    Deduplication is another matter: you need about 2-3 GB of RAM for every 1 TB of data. But if you have 40 TB, for example, you do not need 120 GB of RAM; you can successfully use ZFS with about 40 GB of RAM plus an 80 GB SSD for L2ARC. You can also run a 40 TB pool with, say, 4 GB of RAM, but then reading all the hashes directly from disk will be dead slow; the RAM in deduplication is needed to hold the hash table for the deduplicated blocks.
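A rough sketch of that arithmetic (the ~320 bytes per dedup-table entry and the 128K average block size are assumptions for illustration; real figures vary with the pool's data):

```python
def ddt_ram_bytes(pool_bytes, avg_block=128 * 1024, entry_bytes=320):
    """Rough dedup-table RAM estimate: one in-core entry per unique block."""
    return (pool_bytes // avg_block) * entry_bytes

TIB = 2 ** 40
print(ddt_ram_bytes(1 * TIB) / 2 ** 30)    # ~2.5 GiB per TiB of pool data
print(ddt_ram_bytes(40 * TIB) / 2 ** 30)   # ~100 GiB for a 40 TiB pool
```

This lands in the same ballpark as the 2-3 GB per TB rule of thumb above, and shows why a large L2ARC SSD is the practical way to hold the table for big pools.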

  10. #20
    Join Date
    Aug 2010
    Posts
    59

    Default

    Quote Originally Posted by mgmartin View Post
    SMP works just fine. The restriction is on the preemption model: you can't use CONFIG_PREEMPT (low-latency desktop); you need either CONFIG_PREEMPT_VOLUNTARY or CONFIG_PREEMPT_NONE.

    Linux xxxxxx 3.3.4+ #8 SMP Fri Apr 27 15:30:00 MDT 2012 x86_64 x86_64 x86_64 GNU/Linux
    Sorry, that's what I meant. I tried it, and it was very bad for a desktop multitasking situation.
