
Thread: UFS vs. ZFS File-System Performance On FreeBSD 9.0

  1. #1

    UFS vs. ZFS File-System Performance On FreeBSD 9.0

    Phoronix: UFS vs. ZFS File-System Performance On FreeBSD 9.0

    Back in January I posted some ZFS, HAMMER, and Btrfs file-system benchmarks, and in July of last year some FreeBSD ZFS benchmarks, but for those wanting a new look at the ZFS file-system under FreeBSD 9.0, here are some updated numbers...

  2. #2

    Yet another irrelevant comparison

    ZFS is made for systems with lots of disks, ideally spread over many controllers.

    It is like saying Puppy Linux is faster (which most people read as better) than Ubuntu after testing both on a system with 256 MB of RAM.

    And again, default settings are irrelevant in a performance test. Defaults are made to work everywhere, not to be fast. The fastest system is the one that makes the best use of its hardware, e.g., after someone skilled at doing so has tuned its settings for performance. This is like comparing two submarines to see which one dives deepest, but not noticing that one only seems more user-friendly because its hatch is open by default.
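    To make the point concrete: on FreeBSD, ZFS tuning usually starts with loader tunables rather than the shipped defaults. A purely illustrative fragment (the values here are assumptions, not recommendations for any particular machine):

    ```shell
    # /boot/loader.conf -- illustrative values only; tune for your own workload
    vfs.zfs.arc_max="4G"          # cap the ARC instead of letting it claim most of RAM
    vfs.zfs.prefetch_disable=1    # prefetching can hurt on memory-constrained systems
    ```

    A benchmark run at defaults measures the out-of-the-box experience, not what either file system can do once settings like these are adjusted.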

    Don't get me wrong: I love what Phoronix is doing (otherwise I would not be reading the articles), but often the execution is wrong at a conceptual level.

    The design objectives must be considered. In this particular case:
      • ZFS is designed for systems with lots of disks, and it trades CPU for reliability.
      • It is designed for environments where customers are willing to spend money on extra hardware to make it run well; ZFS wants lots of RAM.
      • ZFS delivers features that provide ease of management at the cost of underlying complexity. If you don't use these features, they represent bloat!
      • ZFS delivers features not seen anywhere else. One in particular: if multiple copies of the data exist (a volume-management-layer function) and a read is detected to be invalid (at the file-system layer), the file system will a) retry the read from another copy and, if successful, b) repair the errant data. Other file systems simply throw an exception when the lower layers return invalid data, as they generally don't know how many copies of the data exist.
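    The self-healing read described above can be sketched in a few lines of Python. This is a toy model with hypothetical names (`MirroredBlock`, `read`), not ZFS's actual code: the checksum is kept apart from the data (as in ZFS block pointers), and a failed verification triggers a retry from another copy plus a repair of the errant one.

    ```python
    import hashlib

    def checksum(data: bytes) -> str:
        """Content hash stored separately from the data itself."""
        return hashlib.sha256(data).hexdigest()

    class MirroredBlock:
        """Toy model of a checksummed, mirrored block (illustrative only)."""

        def __init__(self, data: bytes, copies: int = 2):
            self.expected = checksum(data)
            self.copies = [bytearray(data) for _ in range(copies)]

        def read(self) -> bytes:
            bad = []
            for i, copy in enumerate(self.copies):
                if checksum(bytes(copy)) == self.expected:
                    # a) a good copy was found; b) repair every copy that
                    # failed verification before returning the data
                    for j in bad:
                        self.copies[j][:] = copy
                    return bytes(copy)
                bad.append(i)
            raise IOError("all copies failed checksum verification")
    ```

    A file system without the separate checksum (or without knowledge of the other copies) can only do the `raise` branch, which is the "throw an exception" behavior mentioned above.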

    Despite all of this and more counting against it, ZFS still performs acceptably on systems with limited resources!

    Please don't pull a Tom's Hardware on us.

  3. #3


    Interesting, but it would be nice to see more benchmarks.

  4. #4
    I just wanted to mention that this self-healing behavior is present in several other places (at minimum): Btrfs, GlusterFS, and Linux software RAID.
