
Thread: With Linux 2.6.32, Btrfs Gains As EXT4 Recedes

  1. #31
    Join Date
    Jul 2008
    Location
    Wrocław/Poland
    Posts
    37

    Default

    Quote Originally Posted by kraftman
    Definitely.
    What for? I think they're correct. Or maybe you meant to put EDIT: when editing them?
    No no, I mean please merge your posts if you're replying to more than one recipient in a row. For instance:

    Quote Originally Posted by SomeDude
    [...]
    Yes, you're right

    Quote Originally Posted by SomeOtherDude
    [...]
    No, you're wrong
    edited: It prevents one-liners from "polluting" the thread.
    Last edited by reavertm; 12-15-2009 at 01:21 PM.

  2. #32

    Default

    Quote Originally Posted by reavertm
    No no, I mean please merge your posts if you're replying to more than one recipient in a row. For instance:
    [...]
    edited: It prevents one-liners from "polluting" the thread.
    Ok, no problem

  3. #33
    Join Date
    Jun 2009
    Posts
    58

    Default

    Quote Originally Posted by next9
    Again, a strange comparison based on an Ubuntu system. Why?

    It should be noted that Ubuntu's Ext3 does not use barriers by default, in order to look much faster. But this is a big lie, putting users' data in danger!

    Typical Ext3 speed on a distribution that cares about the safety of users' data would be much slower than in these graphs.
    Ouch! This is lame at best. Editors, please put a big fat flashy red warning on every page of the article that the tests are massively misleading.
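
    For anyone who wants to check this on their own box, barriers can be enabled explicitly for ext3 in fstab; a minimal sketch, assuming the root filesystem lives on /dev/sda1 (device and mountpoint are illustrative):

        # /etc/fstab entry enabling write barriers on ext3
        # (device and mountpoint are assumptions for illustration)
        /dev/sda1  /  ext3  defaults,barrier=1  0  1

    Re-running the same benchmarks with and without barrier=1 should show how much of ext3's apparent lead comes from skipping barriers.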

  4. #34
    Join Date
    Jul 2008
    Posts
    1,720

    Default

    The first test (dbench) looks like ext3 never hits the platter - no surprise, since ext3 has barriers off by default, unlike reiserfs.

    You guys should really stop looking at ext3. It is not a filesystem meant for serious usage. Just pretty numbers.

  5. #35
    Join Date
    May 2009
    Posts
    3

    Default Please do not use SSDs in your reviews

    SSDs' access times are almost identical for all sectors, which renders the allocation optimizations of modern file systems moot.

    On rotating disks, by contrast, the impact of access time is huge.

    AFAIK, the 'nobarrier' and 'data=writeback' mount options might have a performance effect even if you don't have a journal (Theo says that the 'nojournal' feature only disables the journal writes to disk, not the journal logic).
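
    A quick sketch of how one might test that, assuming a spare partition at /dev/sdb1 (device and mountpoint are illustrative, not from the article):

        # Mount an ext4 filesystem with the options discussed above
        mount -o nobarrier,data=writeback /dev/sdb1 /mnt/test

        # For comparison: recreate the filesystem without a journal at all
        umount /mnt/test
        mkfs.ext4 -O ^has_journal /dev/sdb1
        mount /dev/sdb1 /mnt/test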
    Last edited by tmo71; 12-15-2009 at 06:21 PM.

  6. #36

    Default CFQ change in 2.6.33

    I'm surprised it wasn't mentioned that the CFQ scheduler in 2.6.33 underwent a change that increases system responsiveness at the expense of throughput. This would significantly affect the benchmarking results if you are comparing them to results on previous kernels. In addition, CFQ reorders requests to minimize seek time on rotational media, which is entirely unnecessary on solid-state drives and just adds overhead. As I said earlier, it would be better to use a different I/O scheduler, such as deadline or noop, when testing SSDs. This would eliminate the extra variable of an I/O scheduler that changes with each kernel release, and would likely yield better performance as well.
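
    For what it's worth, switching the scheduler per device is a one-liner through sysfs; a sketch assuming the SSD shows up as sda:

        # Show the available schedulers (the active one is in brackets),
        # then switch the assumed SSD at /dev/sda to deadline
        cat /sys/block/sda/queue/scheduler
        echo deadline > /sys/block/sda/queue/scheduler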

  7. #37
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by lordmozilla
    I'm tired of these tests where default options are used everywhere. What's the point? Show us the potential of these filesystems, not just the fact that no configuration = crap performance.
    I think the point of default options is that most users do not know, or are not able to use, the correct switches to get the greatest performance. Only a few people know which switches to use. So why not bench with the default options that everyone will actually use?

    Another thing is that if you aggressively tailor for performance you often lose other functionality, for instance reliability. Who wants to use a very fast file system where your data is unsafe? I prefer a slow filesystem where my data is safe, and not subject to the silent corruption and bit rot that all file systems are (except ZFS). Of course, if it is fast, so much the better. But the point of a file system is that your data is safe. Better slow and safe than fast and unsafe?
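
    To make the trade-off concrete, here is a sketch of a default mount next to an aggressively tuned one (device, mountpoint, and option mix are illustrative):

        # Defaults: whatever the filesystem and distro have chosen
        mount /dev/sda2 /mnt/data

        # Tuned for speed, at the cost of safety guarantees
        mount -o noatime,nobarrier,data=writeback /dev/sda2 /mnt/data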

  8. #38
    Join Date
    Dec 2009
    Posts
    2

    Default

    Quote Originally Posted by kebabbert
    Another thing is that if you aggressively tailor for performance you often lose other functionality, for instance reliability. Who wants to use a very fast file system where your data is unsafe? I prefer a slow filesystem where my data is safe, and not subject to the silent corruption and bit rot that all file systems are (except ZFS). Of course, if it is fast, so much the better. But the point of a file system is that your data is safe. Better slow and safe than fast and unsafe?
    Well, take 3D video driver benchmarks, for instance: some comparison articles try not only to measure frame rate but also to judge quality, providing side-by-side screenshots and such. When benchmarking HD video playback, they not only provide the raw numbers but also try to talk subjectively about the quality of the presented video.

    A filesystem benchmark suite, in my opinion, isn't complete unless it also attempts to index reliability, or at least mentions it subjectively as a caveat. If number 1 and number 2 are separated by microseconds, but number 1 increases your likelihood of data loss by a non-trivial amount... well, you get it.

    I do like these Phoronix benchmarks, though, since they indicate slowdowns or speed bumps as these filesystems evolve.

  9. #39
    Join Date
    Jul 2008
    Posts
    1,720

    Default

    kebabbert, then the distros should set some good defaults in fstab.

    But the current situation is a mess. extX is unsafe by design and unsafe by default, tuned for benchmarks, and people are like 'OMG EXTX IS SO ÜBER', while other filesystems put file safety first - and lose in stupid benchmarks like the ones Phoronix runs.

    Turn on barriers for ext3 and watch it lose badly. Or make ext4 stop doing its stupid-but-fast allocations and watch it completely break down.
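
    Trying this doesn't even require editing fstab; a sketch assuming ext3 on the root filesystem:

        # Remount the root filesystem with write barriers enabled at runtime
        # (mountpoint is an assumption for illustration)
        mount -o remount,barrier=1 /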

  10. #40
    Join Date
    Nov 2008
    Posts
    418

    Default

    Quote Originally Posted by energyman
    But the current situation is a mess. extX is unsafe by design and unsafe by default, tuned for benchmarks, and people are like 'OMG EXTX IS SO ÜBER', while other filesystems put file safety first - and lose in stupid benchmarks like the ones Phoronix runs.
    Yes, but try to tell them that EXT is neither that reliable nor that fast, and they start to call you names. Even though EXT developers have admitted it is unreliable, they would not accept it when you tried to link to the interview.

    Anyway, I think that a tailored benchmark is misleading. It is like benching an overclocked CPU with special functions enabled: it will not help normal users, so the benches would be misleading. The same goes for specially tailored filesystem benches. Of course, sometimes it is really bad the other way, for instance benches where OpenSolaris used gcc 3.2 in 32-bit mode vs Linux with gcc 4.3 in 64-bit mode, but such is life. If the Solaris people could tailor the compiler just as the Linux people do, it would be fairer. But not many people have that expertise.
