
Thread: ZFS On Linux Is Now Set For "Wide Scale Deployment"

  1. #51
    Join Date
    Jan 2012
    Posts
    59

    Default

    Quote Originally Posted by mo0n_sniper View Post
    It would be good if the Linux installers included some sort of ZFS support.
    You install the ZFS package in the running install media, then you are able to partition and install to ZFS. That way ZFS is not shipped on the install media and no patent issues arise.

    I was trying to do this with a Fedora 18 live install, but it's not very straightforward.
    It's not a particularly simple problem to solve, for a number of reasons. One is that the whole installer image needs to fit in memory, so you add a lot to the resource requirements if you need to compile from source (i.e., the way the kernel modules are currently installed via DKMS). The only way to avoid that would be to precompile binaries for all the installers you want to support, then serve them via HTTP or something for the installer to download. The problem with that is: what happens when you don't have a net connection at install time? It's also pretty inefficient, since you need to serve the binaries every time an install occurs, rather than just once when the installer is downloaded.
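
    For context, the DKMS route inside a live session looks roughly like this (a sketch only - the package names are placeholders and vary by distro/repo):

        # placeholders - actual package names differ between distros and repos
        yum install kernel-devel gcc make   # toolchain and headers, all held in RAM on a live image
        yum install zfs                     # the ZoL repo packages build the kernel modules via DKMS
        modprobe zfs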

    An alternative to serving via HTTP might be to offer some sort of ability to use supplementary media, like a USB stick, to load the binaries but you've still got to compile those binaries to match every installer version you want to support (and add those media hooks to all the relevant installers). That's a fairly significant (and ongoing) task, particularly when installation media may get point releases that require new bins.

    If you have the memory, a live CD should already give you a moderately simple path to an initial install onto ZFS: boot into the live environment, install the ZFS packages as normal, prepare the disks as ZFS, perform the install onto them, then chroot into the install and do any final configuration (there's a HOWTO floating around for installing Ubuntu on a ZFS root - haven't read it in a while, but from memory that was the gist of it).
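
    Very roughly, and with the pool/dataset/device names as pure placeholders (exact steps depend on the distro and how it installs its base system):

        # from the live environment, after installing the ZoL packages
        zpool create -o ashift=12 -R /mnt rpool /dev/disk/by-id/ata-EXAMPLEDISK-part2
        zfs create -o mountpoint=/ rpool/ROOT
        # install the base system into /mnt (debootstrap, unsquashfs, distro installer, ...)
        mount --rbind /dev  /mnt/dev
        mount --rbind /proc /mnt/proc
        mount --rbind /sys  /mnt/sys
        chroot /mnt /bin/bash   # final configuration: ZFS-aware initramfs, bootloader, fstab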

  2. #52
    Join Date
    Dec 2011
    Posts
    2,105

    Default

    Quote Originally Posted by pdffs View Post
    It's not a particularly simple problem to solve, for a number of reasons. One is that the whole installer image needs to fit in memory, so you add a lot to resource requirements if you need to compile from source (ie, the way the kernel mods are currently installed via DKMS).
    ZFS is unsuitable for low-memory systems anyway.
    ZFS is intended for 64-bit systems with 4 GB RAM or more.

  3. #53
    Join Date
    Jan 2009
    Location
    Vienna, Austria; Germany; hello world :)
    Posts
    640

    Default

    Quote Originally Posted by ryao View Post
    CPU utilization will likely increase by a few percentage points. However, I cannot speculate on what the effect will be on battery life. ARC could improve battery life while the periodic transaction commit (every 5 seconds) could harm battery life. If I were you, I would put everything on ZFS, rather than just /home.

    With that said, you should consider LZ4 compression. It has a few properties that make it more appealing. One is that LZ4's throughput is an order of magnitude greater than gzip's, which saves CPU time. Even more appealing is that LZ4 is extremely quick at identifying incompressible data. This property saves even more CPU time. As far as mobile use is concerned, these properties should translate into power savings compared to using gzip. If you are interested in reading about LZ4, the following links are rather informative:

    https://extrememoderate.wordpress.com/2011/08/
    http://denisy.dyndns.org/lzo_vs_lzjb/
    http://wiki.illumos.org/display/illumos/LZ4+Compression
    https://code.google.com/p/lz4/

    There are a few things to keep in mind when looking at those links:
    1. The first link involves LZ4 r11. The version that ZFS imported is r67 (+/- 2, I forget the exact revision that was imported) and LZ4 has seen plenty of improvements since r11. Of particular interest is the time spent detecting incompressible data.
    2. The second link compares LZO with LZ4. LZO is considered to be quick, but LZ4 beats it in every metric.
    3. The third link is Illumos' writeup on LZ4, which compares it to ZFS' lzjb. lzjb was invented to obtain fair compression, have high throughput and detect incompressible data quickly. Those metrics are considered desirable for use in filesystems. LZ4 does so much better than lzjb in all of them that the Illumos developers initially thought it was too good to be true when it was first discussed on their mailing list.
    4. The fourth link is the LZ4 project page, which has a chart comparing the throughput and compression ratio of LZ4 to other compression algorithms. Of particular interest is that it shows LZ4 has an average compression ratio of 2.1 while gzip -1 has an average compression ratio of 2.7, so you do not really pay much in terms of space for the benefits of LZ4.


    thanks a lot, ryao!

    that's a more in-depth answer than I had anticipated


    I'm currently trying out ZFS/ZoL with the LZ4 algorithm on one of my backup disks and it looks good
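
    for reference, that's just a pool feature flag plus a dataset property (pool/dataset names here are placeholders):

        zpool set feature@lz4_compress=enabled backuppool   # only needed if the pool was created without the feature
        zfs set compression=lz4 backuppool/data
        zfs get compressratio backuppool/data               # check how much it actually saves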

    I see one issue which keeps me from using LZ4 on my laptop / on the root partition: there are no live CDs with ZoL available that support the LZ4 algorithm

    in case things go wrong I would have no access to my data at all if all partitions were using LZ4 compression - am I correct?

  4. #54
    Join Date
    Jan 2009
    Location
    Vienna, Austria; Germany; hello world :)
    Posts
    640

    Default

    not exactly sure what's going on here:

    I created the pool with child datasets/volumes, already copied some data to it (around 800 GB), and exported it

    booted into Windows


    now I came back to Linux and wanted to import it:

    zpool import WD30EFRX
    cannot import 'WD30EFRX': one or more devices is currently unavailable

    zpool import
       pool: WD30EFRX
         id: 6937543019016739157
      state: FAULTED
     status: One or more devices contains corrupted data.
     action: The pool cannot be imported due to damaged devices or data.
        see: http://zfsonlinux.org/msg/ZFS-8000-5E
     config:

        WD30EFRX       FAULTED   corrupted data
          WDred_zfs    UNAVAIL   corrupted data


    these are pools with compression=lz4 set

    I reproducibly got the same error when creating a new pool, exporting it, and then trying to re-import it


    any ideas?



    this is on a GPT partition table -> partition

    might that be the reason?
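
    I guess the next thing to try is an import via the stable by-id device links, in case the reboot into Windows shuffled the device names around:

        zpool import -d /dev/disk/by-id            # list pools visible via the by-id links
        zpool import -d /dev/disk/by-id WD30EFRX   # then import by name from that directory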


    edit:


    the same happens when using lzjb
    Last edited by kernelOfTruth; 04-20-2013 at 10:25 PM.

  5. #55
    Join Date
    Jan 2009
    Location
    Vienna, Austria; Germany; hello world :)
    Posts
    640

    Default

    Quote Originally Posted by kernelOfTruth View Post
    not exactly sure what's going on here: I created the pool with child datasets/volumes, already copied around 800 GB to it, exported it, booted into Windows, and now the import back on Linux fails with "one or more devices is currently unavailable" / FAULTED, corrupted data. These are pools with compression=lz4 set, and I reproducibly got the same error when creating a new pool, exporting it, and trying to re-import it. This is on a GPT partition table -> partition; might that be the reason? edit: the same happens when using lzjb.



    found the reason:

    I was testing the live ebuild and some recent changes might have caused this

    now using the "stable" 0.6.1 release and everything's fine
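
    (quick way to double-check which module is actually loaded, in case anyone wants to verify the same thing:)

        cat /sys/module/zfs/version   # version string of the loaded ZFS module
        dmesg | grep -i zfs           # the module also logs its version when it loads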



    so a heads-up - something could be broken / a regression in the current development code

  6. #56
    Join Date
    Jan 2009
    Location
    Vienna, Austria; Germany; hello world :)
    Posts
    640

    Default

    ryao, do you by any chance know whether suspend-to-RAM - i.e. freezing the filesystem's contents - works with ZFS? (I doubt it, but it would be a pleasant surprise)

  7. #57
    Join Date
    Sep 2011
    Posts
    161

    Default

    Quote Originally Posted by finalzone View Post
    That was a deliberate decision from Sun when it was losing market share to Linux systems, especially those from IBM and Red Hat.
    As a result, binary ZFS on Linux cannot legally be included out of the box nor integrated into the Linux kernel. If Oracle ever decides to change the ZFS license for GPL compatibility (unlikely), then it can be. For now, ZFS is a legal minefield that outweighs its technical merit.

    Amen!

    Quote Originally Posted by Sergio View Post
    I would love to see Linux adopting ZFS as 'standard'; I thought free/open source was all about meritocracy...
    Not going to happen until ZFS is GPL, which may be never. The CDDL was made as a fuck-you to the Linux community, so I say fuck you to the CDDL and ZFS.

  8. #58
    Join Date
    Nov 2008
    Posts
    418

    Default ZFS is open source

    Those of you who say that ZFS is closed source: it is not. ZFS has been forked, and Oracle's ZFS is closed source, yes. But Illumos (the open Solaris kernel) has forked ZFS, and that fork is completely open source under the CDDL. Several OSes use ZFS today: FreeBSD, Mac OS X (Z-410), OpenSolaris distributions, etc.

    Both of the head architects of ZFS have quit Sun, and one of them has joined Joyent, the company that also created Node.js. All of the DTrace creators have joined Joyent too. They work on Illumos, and Joyent has the strongest Solaris kernel hackers outside Oracle. Illumos has several new ZFS features that even Oracle Solaris does not have. Some believe that Illumos ZFS will surpass Oracle Solaris. Also, a FreeBSD hacker has coded up the LZ4 compression algorithm, which is very clever. So there is a lot of momentum in open-source ZFS outside Oracle Solaris.

    BTW, the well-known compression algorithm lzjb (LZJB) is named after Jeff Bonwick, one of the two head architects of ZFS; Matt Ahrens at Joyent is the other.

  9. #59
    Join Date
    Dec 2011
    Posts
    2,105

    Default

    Quote Originally Posted by kebabbert View Post
    Those of you who say that ZFS is closed source: it is not. [...] There is a lot of momentum in open-source ZFS outside Oracle Solaris.
    Interesting post.
    It is too bad ZFS is covered by patents, and that it is not available under the BSD license or GPL.

  10. #60
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by uid313 View Post
    Interesting post.
    It is too bad ZFS is covered by patents, and that it is not available under the BSD license or GPL.
    True, ZFS is covered by patents.

    But it is still open source under the CDDL, and several OSes use it. FreeBSD can use it, so why can't Linux? Mac OS X uses it. All OpenSolaris distros use it. Linux uses it too. Here are all the OSes that use it; it is quite a list:
    http://en.wikipedia.org/wiki/ZFS#Platforms
