Thread: EnhanceIO: New Solid State Drive Caching For Linux

  1. #1
    Join Date
    Jan 2007
    Posts
    15,389

    Default EnhanceIO: New Solid State Drive Caching For Linux

    Phoronix: EnhanceIO: New Solid State Drive Caching For Linux

    A commercial company has opened up their Linux driver that is based upon their SSD (Solid-State Drive) caching software product. This code is designed to use SSDs as cache devices for traditional rotating hard drives. This new SSD caching driver is based upon Facebook's Flashcache...

    http://www.phoronix.com/vr.php?view=MTI3Mzc

  2. #2
    Join Date
    Sep 2011
    Posts
    40

    Default

    Good news; I had zero trust towards Facebook getting it included in the Linux kernel.

    There's another external module doing what flashcache does, I wonder what it became...

  3. #3
    Join Date
    Oct 2012
    Posts
    148

    Default

    Nice! I hope they will push to get it included in mainline.

  4. #4
    Join Date
    Aug 2010
    Posts
    28

    Default

    This is good news. I've been using the capabilities of the zfs/zfsonlinux l2arc, but was just testing out flashcache again yesterday on non-zfs filesystems. Bcache is the only other current product that also provides a cache solution.

    I may be wrong, but I couldn't see a way to install Bcache into a pre-existing Linux kernel--the install method is to download a full-blown, pre-patched kernel--too much work and too intrusive. Let me know if there is a simpler way.

    The latest flashcache was a small git clone and compile--it took just a few minutes, and there were no problems with my existing 3.7 kernel source. I did have an issue with flashcache not working, which I need to look into more: flashcache caused sector read errors when used on an md raid device--a raid0 device built from two different-sized raid1 devices. Flashcache worked fine on a simple md device.
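    For reference, the clone-and-build workflow described above looks roughly like this. This is a sketch: the device names (/dev/ssd, /dev/md0) and the kernel source path are placeholders for your own system, and flashcache_create's flags should be checked against your version's documentation.

    ```shell
    # Build the flashcache modules against an installed kernel source tree
    git clone https://github.com/facebook/flashcache.git
    cd flashcache
    make KERNEL_TREE=/usr/src/linux-3.7
    sudo make install

    # Pair an SSD with an md array as a write-back cache named "cachedev"
    # (-p back = write-back; -p thru and -p around are the other policies)
    sudo flashcache_create -p back cachedev /dev/ssd /dev/md0

    # flashcache exposes the cached device through device-mapper,
    # so it is mounted via /dev/mapper rather than the original /dev entry
    sudo mount /dev/mapper/cachedev /mnt/data
    ```

    Note the device-mapper indirection in the last step; this is the extra mapping layer that a later post contrasts with EnhanceIO's transparent approach.
    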

  5. #5

    Default

    Quote Originally Posted by mgmartin View Post
    This is good news. I've been using the capabilities of the zfs/zfsonlinux l2arc, but was just testing out flashcache again yesterday on non-zfs filesystems. Bcache is the only other current product that also provides a cache solution.

    I may be wrong, but I couldn't see a way to install Bcache into a pre-existing Linux kernel--the install method is to download a full-blown, pre-patched kernel--too much work and too intrusive. Let me know if there is a simpler way.

    The latest flashcache was a small git clone and compile--it took just a few minutes, and there were no problems with my existing 3.7 kernel source. I did have an issue with flashcache not working, which I need to look into more: flashcache caused sector read errors when used on an md raid device--a raid0 device built from two different-sized raid1 devices. Flashcache worked fine on a simple md device.
    You can use L2ARC and SLOG devices with non-ZFS filesystems. Just put them on a zvol.
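    The zvol approach suggested here could look something like the following sketch. Pool and device names (tank, /dev/sdb etc.) are placeholders, and the zvol size is arbitrary.

    ```shell
    # Create a pool, then attach an SSD as L2ARC and another as SLOG
    zpool create tank /dev/sdb
    zpool add tank cache /dev/sdc    # L2ARC read cache
    zpool add tank log /dev/sdd      # SLOG for synchronous writes

    # Expose a block device (zvol) from the pool and put a non-ZFS
    # filesystem on it; it still benefits from the pool's cache devices
    zfs create -V 100G tank/vol
    mkfs.ext4 /dev/zvol/tank/vol
    mount /dev/zvol/tank/vol /mnt/ext4
    ```

    As the next reply points out, this still pulls in the zfs/spl kernel modules, which is exactly the dependency the poster was trying to avoid.
    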

  6. #6
    Join Date
    Aug 2010
    Posts
    28

    Default

    Understood. The goal was to test a cache device without the zfs/spl dependencies.

  7. #7
    Join Date
    Aug 2010
    Posts
    28

    Default

    A follow-up to the flashcache issue I referred to--it seems flashcache has problems with my 3TB 4K-sector drive.

  8. #8
    Join Date
    Oct 2009
    Posts
    2,137

    Default

    Good news; I only use obsolete magnetic disks for bulk data storage, where caching is irrelevant.

    I wouldn't waste an SSD on this.

  9. #9
    Join Date
    Aug 2010
    Posts
    28

    Default

    My initial impressions with EnhanceIO are very positive.

    The installation was simple: copy the source directory into a Linux kernel source tree, run a patch to hook the source into the kernel's make system, then compile the kernel modules (I just did a full make to rebuild my entire kernel). It's a little more work than flashcache, which builds its modules entirely outside the kernel source tree, but the directions and process were clear enough. The code also looks fairly small, which hopefully means it will be easy to maintain.

    A few things I really like with EnhanceIO:

    1. The cache is completely transparent. This means you can add a cache device to, and remove one from, a mounted disk or individual partition. I added a cache device to a mounted, in-use partition, then removed the device with no issues. It also means no separate dm mapping to the physical device, so you continue to mount and access the cached device through its default /dev entry.

    2. Along with being transparent, the SSD cache device can fail and reads/writes to the underlying device will continue. To prevent loss of data in a write-back configuration, you can mirror the SSD devices.

    3. Everything is done through the /proc interface. There is one python script used to create and manage the cache devices. Lots of stats are available through the /proc interface.

    4. Different cache replacement policies: random, FIFO, and LRU.
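    The management workflow described in the list above might look like this with the eio_cli script. Treat this as a sketch: the flag names and the /proc paths follow the EnhanceIO documentation as I understand it, so verify them against eio_cli --help for your version; device and cache names are placeholders.

    ```shell
    # Create an LRU, write-back cache pairing SSD /dev/sdc with disk /dev/sdb
    sudo eio_cli create -d /dev/sdb -s /dev/sdc -p lru -m wb -c my_cache

    # No dm mapping is created: /dev/sdb stays mounted and accessible as-is,
    # and statistics are read straight from /proc
    cat /proc/enhanceio/my_cache/stats

    # The cache can be removed again without unmounting the cached device
    sudo eio_cli delete -c my_cache
    ```
    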

    My favorite feature, and the one I think sets EnhanceIO apart from other cache solutions (outside of ZFS), is the transparency. I'm used to adding cache devices to running ZFS filesystems, so it felt strange at first, when setting up flashcache, to have to create the cache before mounting and accessing the underlying, cached device.

    What we need now is a feature matrix comparing the available cache solutions as they continue to mature and prepare for inclusion in the kernel along with some performance benchmarks.
