Thread: Can DragonFly's HAMMER Compete With Btrfs, ZFS?

  1. #1
    Join Date
    Jan 2007
    Posts
    15,636

    Default Can DragonFly's HAMMER Compete With Btrfs, ZFS?

    Phoronix: Can DragonFly's HAMMER Compete With Btrfs, ZFS?

    The most common Linux file-systems we talk about at Phoronix are of course Btrfs and EXT4, while the ZFS file-system, which is available on Linux as a FUSE (user-space) module or via a recent kernel module port, gets mentioned a fair amount too. When it comes to the FreeBSD and PC-BSD operating systems, ZFS is looked upon as the superior, next-generation option available to BSD users. However, with the DragonFlyBSD operating system there is another option: HAMMER. In this article we see how the performance of this original creation of the DragonFlyBSD project compares with ZFS, UFS, EXT3, EXT4, and Btrfs.

    http://www.phoronix.com/vr.php?view=15605

  2. #2

    Default Some reactions on FBSD performance mailing list


  3. #3
    Join Date
    Jan 2011
    Posts
    8

    Default

    There is something seriously wrong with the Threaded I/O tester results. There is simply no way possible that the ZFS writes got faster when switching from linear to random writes, especially considering that all but HAMMER and btrfs went down by an order of magnitude. Are you sure the result wasn't supposed to be 4.96?

    I feel that this makes all of your results quite suspect.

  4. #4
    Join Date
    Jan 2011
    Posts
    8

    Default

    Quote Originally Posted by thesjg View Post
    I feel that this makes all of your results quite suspect.
    Also, the blogbench results indicate that you ran the benchmark once and separated the read and write results to create the independent graphs. You should point this out in your article. For heavy concurrent read/write workloads where read performance is important, DragonFly would recommend the use of the fairq disk scheduler.
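
    For reference, a blogbench-style workload drives reads and writes against the same files at the same time, so the read and write scores come out of one run rather than two separate tests. A minimal sketch of that idea in Python (paths, sizes, and thread counts here are arbitrary assumptions, not blogbench's actual parameters):

    Code:
    import os, random, threading, time

    # Sketch of a blogbench-style concurrent workload: writers and readers
    # hit the same directory at once, and the read and write counters come
    # out of the *same* run. Sizes and paths are placeholders.
    DIR = "bench_dir"
    RUNTIME = 5                    # seconds
    BLOCK = 8192                   # bytes per write
    stop = threading.Event()
    lock = threading.Lock()
    reads = writes = 0

    def writer(i):
        global writes
        data = os.urandom(BLOCK)
        with open(os.path.join(DIR, f"file{i}"), "wb") as f:
            while not stop.is_set():
                if f.tell() > 64 * 1024 * 1024:   # keep the file bounded
                    f.seek(0)
                f.write(data)
                f.flush()
                with lock:
                    writes += 1

    def reader():
        global reads
        while not stop.is_set():
            names = os.listdir(DIR)
            if not names:
                continue
            try:
                with open(os.path.join(DIR, random.choice(names)), "rb") as f:
                    while f.read(BLOCK):
                        pass
            except OSError:
                continue
            with lock:
                reads += 1

    os.makedirs(DIR, exist_ok=True)
    threads = [threading.Thread(target=writer, args=(i,)) for i in range(2)]
    threads += [threading.Thread(target=reader) for _ in range(4)]
    for t in threads:
        t.start()
    time.sleep(RUNTIME)
    stop.set()
    for t in threads:
        t.join()

    # Both numbers describe one concurrent run, which is why splitting them
    # into separate graphs without saying so can be misleading.
    print(f"writes: {writes}  reads: {reads}")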

  5. #5

    Default

    Quote Originally Posted by thesjg View Post
    Are you sure the result wasn't supposed to be 4.96?

    I feel that this makes all of your results quite suspect.
    Everything is automated and reproducible from test installation to graph generation.

  6. #6
    Join Date
    Jan 2011
    Posts
    8

    Default

    Quote Originally Posted by Michael View Post
    Everything is automated and reproducible from test installation to graph generation.
    That's cute, but the results don't jibe. Unless you can explain why ZFS was miraculously faster when all of the other file systems were slower, I have to assume there is something wrong with your test.

  7. #7
    Join Date
    Jul 2010
    Posts
    69

    Default

    Quote Originally Posted by thesjg View Post
    There is something seriously wrong with the Threaded I/O tester results. There is simply no way possible that the ZFS writes got faster when switching from linear to random writes, especially considering that all but HAMMER and btrfs went down by an order of magnitude. Are you sure the result wasn't supposed to be 4.96?

    I feel that this makes all of your results quite suspect.

    It is of course possible. You just need to know how ZFS works, and what kind of workloads it was designed for.

  8. #8
    Join Date
    Jul 2010
    Posts
    69

    Default

    It is quite easy to explain (log structured writes, more scalable techniques, and probably some bugs in btrfs), but I just do not have time

    PS. Stupid 1 minute limit.
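
    To expand the one-liner: in a copy-on-write / log-structured design, a "random" write at the application level does not overwrite the old block in place; the new version is appended at the head of the log and only the block map is updated, so the disk still sees a mostly sequential stream. A toy Python model of that allocation policy (an assumed simplification, not actual ZFS or HAMMER code):

    Code:
    import random

    # Toy model of why random logical writes can still land sequentially on
    # disk in a copy-on-write / log-structured design: every write, whatever
    # its logical offset, is appended at the current head of the log and the
    # block map is updated to point at the new location.
    class CowLog:
        def __init__(self):
            self.block_map = {}        # logical block -> physical block
            self.next_free = 0         # head of the on-disk log
            self.physical_trace = []   # order of physical blocks written

        def write(self, logical_block):
            # Never overwrite the old location in place.
            self.block_map[logical_block] = self.next_free
            self.physical_trace.append(self.next_free)
            self.next_free += 1

    fs = CowLog()
    for lb in random.sample(range(1000), 1000):   # "random" write pattern
        fs.write(lb)

    # Physical placement is strictly sequential even though the logical
    # offsets were random, so the disk sees a streaming write pattern.
    print(fs.physical_trace[:5])                           # [0, 1, 2, 3, 4]
    print(fs.physical_trace == sorted(fs.physical_trace))  # True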

  9. #9
    Join Date
    Jan 2011
    Posts
    8

    Default

    Quote Originally Posted by baryluk View Post
    It is quite easy to explain (log structured writes, more scalable techniques, and probably some bugs in btrfs), but I just do not have time

    PS. Stupid 1 minute limit.
    Fortunately, I am familiar with the internals of UFS, ZFS and HAMMER. HAMMER is "log structured" in the same fashion as ZFS. Being log structured does nothing to explain the -increase- in performance seen between the final two graphs. These benchmarks are absolutely useless unless the inconsistencies can be explained.

  10. #10
    Join Date
    Jul 2010
    Posts
    69

    Default

    Hmm, @thesjg, you are right. The last graph alone (random write) could be valid, but comparing it to the previous one (continuous write) indeed raises some questions. Maybe the data was not actually written to disk and was instead combined in the cache before the sync. One should use files a LOT bigger than available RAM (though unfortunately the random writes will then take a lot of time), and a good random generator. Needs more investigation.
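
    A sketch of the kind of check being suggested, in Python: make the working set considerably larger than installed RAM and fsync before stopping the clock, so the page cache cannot absorb the whole random-write phase. The file size, block size, and path are placeholder assumptions.

    Code:
    import os, random, time

    # Random-write timing that cannot be satisfied from the page cache alone:
    # the file is meant to be larger than installed RAM and the data is forced
    # to stable storage with fsync before the clock stops. FILE_SIZE, BLOCK,
    # and PATH are placeholders.
    PATH = "testfile"
    FILE_SIZE = 32 * 1024**3      # should comfortably exceed installed RAM
    BLOCK = 64 * 1024
    data = os.urandom(BLOCK)

    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY)
    os.ftruncate(fd, FILE_SIZE)

    offsets = list(range(0, FILE_SIZE, BLOCK))
    random.shuffle(offsets)       # random write order across the whole file

    start = time.time()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, data)
    os.fsync(fd)                  # include the flush in the measured time
    elapsed = time.time() - start
    os.close(fd)

    print(f"{FILE_SIZE / elapsed / 1024**2:.1f} MB/s random write (after fsync)")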
