Thread: Large HDD/SSD Linux 2.6.38 File-System Comparison

  1. #41
    Join Date
    Feb 2009
    Posts
    22

    Default

    Quote Originally Posted by locovaca View Post
    I wouldn't want a benchmark for your particular scenario because I'll never enter a situation like that, and I would venture that a majority of users would not either; any such data would skew opinions of file systems unnecessarily. I don't care how well a Corolla tows a 3 ton camper, just tell me how well it drives in basic conditions (city, highway) and I'll go from there.
    Sure but you're begging the question of what is "normal conditions". Are you always going to fill a file system to 10% of capacity, and then reformat it, and then fill it to 10% again? That's what many benchmarkers actually end up testing. And so a file system that depends on the garbage collector for correct long-term operation, but which never has to garbage collect, will look really good. But does that correspond to how you will use the file system?

    What is "basic conditions", anyway? That's fundamentally what I'm pointing out here. And is performance really all people should care about? Where does safety factor into all of this? And to be completely fair to btrfs, it has cool features --- which is cool, if you end up using those features. If you don't then you might be paying for something that you don't need. And can you turn off the features you don't need, and do you get the performance back?

    For example, at $WORK we run ext4 with journalling disabled and barriers disabled. That's because we keep replicated copies of everything at the cluster file system level. If I were to pull a Hans Reiser and ship ext4 with the journal and barriers disabled by default, it would be faster than ext2, ext3, and most of the other file systems in the Phoronix file system comparison. But that would be bad for ext4's desktop users, and that to me is more important than winning a benchmark demolition derby.
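
    For anyone curious, that kind of setup (the device and mount point below are just examples, and you should only do this when losing the local copy of the data is genuinely acceptable) looks roughly like:

        # on a cleanly unmounted file system: drop the journal
        tune2fs -O ^has_journal /dev/sdb1
        # then mount with write barriers turned off
        mount -o barrier=0 /dev/sdb1 /srv/data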

    -- Ted

  2. #42
    Join Date
    May 2009
    Posts
    3

    Default weird results

    Hi! I am looking at http://www.phoronix.com/data/img/res...38_large/2.png
    with the results of SQLite...
    Is ext3 really 2 times slower on the SSD? How can this be? Is this the effect of a lack of garbage collection, or of not trimming?

    Thanks for info!
    Adrian

  3. #43
    Join Date
    Jul 2009
    Posts
    416

    Default

    Michael, for the graphs could you put a larger separator between the HDD and the SSD results? I see there's a little hash mark, and the colors repeat, but at first glance it was kind of hard to tell where one ends and the other begins.

  4. #44
    Join Date
    Feb 2009
    Posts
    22

    Default

    Quote Originally Posted by adrian_sev View Post
    Hi! I am looking at http://www.phoronix.com/data/img/res...38_large/2.png
    with the results of SQLite...
    Is ext3 really 2 times slower on the SSD? How can this be? Is this the effect of a lack of garbage collection, or of not trimming?
    I'm pretty sure that ext3 is winning very big on the SQLite benchmark because the benchmark does a large number of random writes to the same blocks --- and since ext3 has barriers off by default, on the hard drive the disk collapses the writes together and most of the writes don't actually hit the disk platter. Good luck to your data if you have a power hit, but that's why ext3 wins really big on an HDD.

    On an SSD, at least the OCZ, the drive isn't merging the writes, and so the random writes result in flash write blocks getting written, which is why ext3 appears to be much worse on the OCZ SSD. Other SSDs might be able to do a better job of merging writes to the same block if they have a larger write buffer. This would be very SSD-specific.

    I suspect that JFS didn't run into this problem, even though it also doesn't use barriers, because its write patterns happened to fit within the OCZ's write cache, so it was able to collapse the writes. Personally I don't think it really matters, since running a database like SQLite which is trying to provide ACID properties without barriers enabled is obviously going to (a) result in a failure of the ACID guarantees, and (b) result in very confusing and misleading benchmark results.
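
    To make that write pattern concrete, here is a rough illustration (not the actual Phoronix test profile; the table and row count are made up) of the kind of workload SQLite produces when it is honouring durability: lots of tiny transactions, each committed and therefore flushed on its own, hammering the same database pages over and over.

        import sqlite3

        # Rough illustration only: many tiny transactions, each committed (and
        # therefore flushed) individually, so the same database pages and
        # journal blocks are rewritten again and again.
        conn = sqlite3.connect("bench.db")
        conn.execute("PRAGMA synchronous = FULL")  # keep the durability guarantee
        conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload TEXT)")

        for i in range(10000):                     # row count is arbitrary
            conn.execute("INSERT INTO t (payload) VALUES (?)", ("x" * 100,))
            conn.commit()                          # one small synchronous write per row

        conn.close()

    Whether those repeated small writes get merged before they reach the medium is exactly what barriers and the drive's write cache determine, which is why the HDD and OCZ numbers diverge so much here.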

  5. #45
    Join Date
    Jan 2011
    Posts
    427

    Default

    Someone said it before, but I don't get the point of benchmarking ext4 on an SSD without the discard option (and maybe noatime).

    An SSD benchmark would in fact be a good place to tell people they should use discard, for the few who wouldn't know it already.
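
    For anyone wondering what that looks like in practice, an /etc/fstab entry roughly like this (the device, mount point, and extra options are just placeholders) enables both:

        /dev/sda1   /   ext4   discard,noatime,errors=remount-ro   0   1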

  6. #46
    Join Date
    Nov 2010
    Location
    Stockholm, Sweden
    Posts
    429

    Default

    Quote Originally Posted by squirrl View Post
    Reiser3 is still the best all around choice.
    * Fault Tolerant
    * Efficient
    * Static
    But sadly it degenerates and fragments like a motherfokker. After a year and a half it's at 20% of the speed it started at. And there's no known way of defragmenting it, except copying all the files off the filesystem and back onto it again.

  7. #47
    Join Date
    Apr 2010
    Posts
    271

    Default

    Quote Originally Posted by tytso View Post
    Sure but you're begging the question of what is "normal conditions". Are you always going to fill a file system to 10% of capacity, and then reformat it, and then fill it to 10% again? That's what many benchmarkers actually end up testing. And so a file system that depends on the garbage collector for correct long-term operation, but which never has to garbage collect, will look really good. But does that correspond to how you will use the file system?

    What is "basic conditions", anyway? That's fundamentally what I'm pointing out here. And is performance really all people should care about? Where does safety factor into all of this? And to be completely fair to btrfs, it has cool features --- which is cool, if you end up using those features. If you don't then you might be paying for something that you don't need. And can you turn off the features you don't need, and do you get the performance back?

    For example, at $WORK we run ext4 with journalling disabled and barriers disabled. That's because we keep replicated copies of everything at the cluster file system level. If I were to pull a Hans Reiser and ship ext4 with the journal and barriers disabled by default, it would be faster than ext2, ext3, and most of the other file systems in the Phoronix file system comparison. But that would be bad for ext4's desktop users, and that to me is more important than winning a benchmark demolition derby.

    -- Ted
    Well, since the distribution is the end-user version of Ubuntu, which is marketed to a more casual user, I would expect the file system to receive a modest load of files (installation), then see mainly small reads and writes over the course of its lifetime (logs, home folder), with some occasional larger writes (software installation, maybe a CD rip). I believe Ubuntu's default partitioning scheme is one big file system plus a swap partition, so this is the configuration I'd expect to see with this test. So yes, assuming a 10% full file system is probably OK given this set of assumptions.

    Quote Originally Posted by TonsOfPeople
    Wah, the default didn't set xxx, that's horrible
    If the defaults of the file system are not OK, then link to the bug report; otherwise it's not really an issue.

  8. #48
    Join Date
    Jul 2008
    Posts
    65

    Default

    Quote Originally Posted by stqn View Post
    Someone said it before, but I don't get the point of benchmarking ext4 on an SSD without the discard option (and maybe noatime).

    An SSD benchmark would in fact be a good place to tell people they should use discard, for the few who wouldn't know it already.
    Here are my results with noatime and discard on OCZ Vertex 2:
    http://openbenchmarking.org/result/1...SKEE-110309125

  9. #49
    Join Date
    Feb 2009
    Posts
    22

    Default

    Quote Originally Posted by locovaca View Post
    Well, since the distribution is the end-user version of Ubuntu, which is marketed to a more casual user, I would expect the file system to receive a modest load of files (installation), then see mainly small reads and writes over the course of its lifetime (logs, home folder), with some occasional larger writes (software installation, maybe a CD rip). I believe Ubuntu's default partitioning scheme is one big file system plus a swap partition, so this is the configuration I'd expect to see with this test. So yes, assuming a 10% full file system is probably OK given this set of assumptions.
    Yes, but you're not constantly reformatting the file system (i.e., reinstalling the distribution) over and over again. That is, the file system is allowed to age. So a month later, with a copy-on-write file system, the free space will all have been written to and will potentially be quite fragmented. But the benchmarks don't take this into account. They use a freshly formatted file system each time --- which is good for reproducibility, but it doesn't model what you will see in real life a month or three months later.

    The right answer would be to use something like the Impressions file system aging tool to "age" the file system before doing the timed benchmark part of the test (see: http://www.usenix.org/events/fast09/...es/agrawal.pdf).
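
    A much cruder way to approximate aging (nowhere near as careful as Impressions; the directory, file count, and size range below are arbitrary) is to interleave file creation and deletion so the free space gets chopped up before the timed run, for example:

        import os, random

        # Very crude file system "aging": create files of random sizes and
        # delete a fraction of them along the way, so free space ends up
        # fragmented before the real benchmark starts.
        target = "/mnt/test/aging"                 # placeholder mount point
        os.makedirs(target, exist_ok=True)

        files = []
        for i in range(5000):                      # file count is arbitrary
            path = os.path.join(target, "f%05d" % i)
            size = random.randint(4 * 1024, 4 * 1024 * 1024)   # 4 KiB .. 4 MiB
            with open(path, "wb") as f:
                f.write(os.urandom(size))
            files.append(path)
            if random.random() < 0.33:             # leave holes behind about a third of the time
                os.remove(files.pop(random.randrange(len(files))))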

    The fundamental question is: what are you trying to measure? Which is more important: the experience the user gets when the file system is first installed, or what they get a month later, and moving forward after that?

  10. #50
    Join Date
    Jul 2008
    Posts
    69

    Default

    Quote Originally Posted by locovaca View Post
    Well, since the distribution is the end-user version of Ubuntu, which is marketed to a more casual user, I would expect the file system to receive a modest load of files (installation), then see mainly small reads and writes over the course of its lifetime (logs, home folder), with some occasional larger writes (software installation, maybe a CD rip). I believe Ubuntu's default partitioning scheme is one big file system plus a swap partition, so this is the configuration I'd expect to see with this test. So yes, assuming a 10% full file system is probably OK given this set of assumptions.


    If the defaults of the file system are not OK, then link to the bug report; otherwise it's not really an issue.
    You do realize that you're responding to Ted Ts'o, the creator of the ext4 file system, right?
