
Thread: Where The Btrfs Performance Is At Today

  1. #21
    Join Date
    Jul 2007
    Posts
    59

    Default

    Quote Originally Posted by LinuxID10T View Post
    Can we please get a filesystem benchmark using a mechanical drive? Everything here is tested using SSDs which is extremely flawed.
    Big plus for this idea. The day I need an SSD is the day I only need to store 100 GiB of data.

  2. #22
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by voyager_biel View Post
    ... guess you mean data integrity in case of power off.
    I didn't understand that sentence. Could you please rephrase?

  3. #23

    Default

    Quote Originally Posted by devius View Post
    So it's probably a good idea to stay away from BTRFS for web/email/database servers. Everything else seems fine.
    If you want your data stored in RAM, then stay away from it. Btw, the Phoronix Apache benchmark doesn't measure real Apache server performance.

  4. #24

    Default

    Quote Originally Posted by kebabbert View Post
    I would much rather focus on data integrity. Is your data safe with BTRFS? ReiserFS, JFS, XFS, ext3, etc. are not.
    http://www.zdnet.com/blog/storage/ho...ta-at-risk/169

    But researchers show in another research paper that ZFS is safe.
    The guy you linked to is a damn troll. He compares Linux file systems (which are superior to what he compares them to) to Apple's old, messed-up HFS+. He praises Apple's Time Machine as if it will resolve some of the problems. In summary, the article is pro-Apple, anti-Linux, and a little anti-Windows. He mentions ZFS just because it will be available in OS X. Since you didn't link to a paper, as you probably never have, but to some idiot, stop spreading FUD.

  5. #25
    Join Date
    Jan 2010
    Location
    Portugal
    Posts
    945

    Post

    Quote Originally Posted by LinuxID10T View Post
    Can we please get a filesystem benchmark using a mechanical drive? Everything here is tested using SSDs which is extremely flawed.
    I already mentioned before that filesystem reviews should ALWAYS include both SSD and HDD tests, and not what is usually done - randomly using either SSDs or HDDs, thus leaving users with the other type of drive wondering whether the results apply to them as well. The response I got was that my proposal didn't make sense because it would introduce another variable into the mix. Yeah... right. And that's supposed to be a bad thing? Is having more info and a more complete review bad?

  6. #26
    Join Date
    Oct 2007
    Posts
    178

    Default

    Quote Originally Posted by mutlu_inek View Post
    I would love some file system tests which include a) cpu usage and b) LUKS encryption.
    Wouldn't we expect mainly some CPU overhead with LUKS, but little impact on disk performance?
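The intuition in the question above (encryption mostly costs CPU, not disk throughput) can be sketched in a few lines. This is a rough stand-in, not a LUKS benchmark: Python's standard library has no AES, so SHA-256 plays the role of the per-block cipher work here, and the numbers only illustrate the shape of the overhead, not real dm-crypt figures.

```python
# Compare CPU time for a plain buffer copy vs. a pass that also does
# cryptographic work on every byte. SHA-256 stands in for AES-XTS
# (an assumption for illustration only); the point is that per-block
# crypto adds CPU time while the amount of I/O stays the same.
import hashlib
import time

BLOCK = b"\x00" * (1 << 20)   # 1 MiB buffer
ROUNDS = 64                   # 64 MiB total

def copy_only():
    for _ in range(ROUNDS):
        _ = bytes(BLOCK)      # plain memory copy, no crypto

def copy_and_digest():
    for _ in range(ROUNDS):
        _ = bytes(BLOCK)
        hashlib.sha256(BLOCK).digest()  # extra CPU work per block

t0 = time.process_time(); copy_only();       t_plain = time.process_time() - t0
t0 = time.process_time(); copy_and_digest(); t_crypto = time.process_time() - t0

print(f"plain:  {t_plain:.3f}s CPU")
print(f"crypto: {t_crypto:.3f}s CPU")
```

On real hardware, AES-NI makes the gap much smaller than this hash-based stand-in suggests, which is consistent with expecting little impact on disk throughput for most workloads.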

  7. #27

    Default

    Quote Originally Posted by kebabbert View Post
    I didn't understand that sentence. Could you please rephrase?
    Data integrity in case of a power outage, power cut, or blackout... because the journal is not persisted to disk. If a write I/O corrupts your data or your filesystem, then it doesn't matter which mount options you used, because the corruption will be successfully written to the filesystem. Only a backup or an old snapshot can then restore integrity...

  8. #28
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by kraftman View Post
    The guy you linked to is a damn troll. He compares Linux file systems
    No, you got it wrong again. He is not comparing Linux file systems. On his web page he talks about some computer science researchers who compare Linux file systems in a research paper.

    I hope you don't claim that PhD theses and research papers are "damn trolls"? If it were false and lies, then that research would never have passed the PhD trials. He got his PhD title, so that research is valid. If it is not valid, then please mail his professor and point out the errors; then his PhD title will be withdrawn and he will lose his diploma. You, instead, will soon get a PhD thesis if you find errors in current research and can improve it. If you cannot point out the errors, then please be more careful before you accuse someone of trolling. As we know, you are very quick to call people trolls, yet you have admitted yourself that you have trolled earlier.

    Quote Originally Posted by kraftman View Post
    He mentions ZFS just because it will be available in OS X. Since you didn't link to a paper, as you probably never have, but to some idiot, stop spreading FUD.
    I've told you that ZFS has also been subject to data-integrity research. And ZFS detected all the artificially introduced errors, whereas the Linux filesystems did not even detect all errors - and how can errors be fixed if they are not even detected? Impossible! ZFS would also have corrected all the errors if they had used RAID; in the research they only used ZFS on one drive, which provides no redundancy.

    Here is a research paper documenting the research on ZFS. If you see any errors, please produce a paper on how to improve the research, and quite soon you too will have a PhD thesis.
    http://www.zdnet.com/blog/storage/zf...ity-tested/811

  9. #29
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by voyager_biel View Post
    Data integrity in case of a power outage, power cut, or blackout... because the journal is not persisted to disk. If a write I/O corrupts your data or your filesystem, then it doesn't matter which mount options you used, because the corruption will be successfully written to the filesystem. Only a backup or an old snapshot can then restore integrity...
    With ZFS it doesn't matter too much if you cut power. If you edit old data, all new changes are written to disk while all the old data is left intact on disk. Lastly, the file is pointed to the new data, which is a single operation. The old data is not touched; it is still there on disk.

    This means that either ALL changes were written to disk, or no changes were written to disk. It cannot happen that only half of the changes were written to disk while the other half got lost. No corruption. The state is always correct.

    If power is cut before the pointer points to the new data, all the old data is left intact and no corruption has occurred.
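The all-or-nothing update described above can be sketched at the file level (an analogy only: real ZFS does this with block pointers inside the pool, not with file renames, and the file names here are hypothetical): write the new version somewhere else, fsync it, then flip the "pointer" with a single atomic rename. A power cut before the rename leaves the old version fully intact; after it, the new version is live.

```python
# Copy-on-write-style update sketch: new data never overwrites old
# data in place, and the switch-over is one atomic directory operation.
import os
import tempfile

def cow_update(path, new_data: bytes):
    d = os.path.dirname(os.path.abspath(path))
    # 1. Write the new version to a fresh location; old data untouched.
    fd, tmp = tempfile.mkstemp(dir=d)
    with os.fdopen(fd, "wb") as f:
        f.write(new_data)
        f.flush()
        os.fsync(f.fileno())        # make sure the new data is on disk first
    # 2. Atomically swap the "pointer" (the directory entry).
    os.replace(tmp, path)           # all-or-nothing rename on POSIX

# demo: readers see either the old version or the new one, never a mix
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "state")
    with open(p, "wb") as f:
        f.write(b"old")
    cow_update(p, b"new")
    with open(p, "rb") as f:
        print(f.read())             # b'new'
```

The same pattern is why many applications write config files as `file.tmp` + rename rather than editing in place.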

  10. #30
    Join Date
    Jan 2007
    Location
    Germany
    Posts
    2,177

    Default

    2.6.35-rc3 got a lot of Btrfs fixes, and at least one of them is a regression fix. Maybe retest the performance with the final kernel.
