No no, I mean please merge your posts if you're replying to more than one recipient in a row. For instance:
Originally Posted by kraftman
edited: It prevents one-liners from "polluting" the thread.
Originally Posted by SomeDude
Yes, you're right
Originally Posted by SomeOtherDude
No, you're wrong
Ouch! This is lame at best. Editors, please put a big fat flashy red warning on every page of the article saying that the tests are massively misleading.
Originally Posted by next9
The first test (dbench) looks like ext3 never hits the platter - no surprise, since ext3 has barriers off by default, unlike reiserfs.
You guys should really stop looking at ext3. It is not a filesystem meant for serious usage. Just pretty numbers.
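If anyone wants to re-run the dbench numbers with barriers actually on, it's just a mount option for ext3. A rough sketch, untested - the device and mount point are made-up placeholders, adjust for your setup:

    # remount an existing ext3 filesystem with write barriers enabled
    mount -o remount,barrier=1 /dev/sda1 /mnt/test

    # or persist it in /etc/fstab
    /dev/sda1  /mnt/test  ext3  defaults,barrier=1  0  2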
Please do not use SSDs in your reviews
SSD access times are almost identical for all sectors, which eliminates all the allocation optimizations of modern file systems.
On rotating media, the access-time impact is huge.
AFAIK, the 'nobarrier' and 'data=writeback' mount options might have a performance effect even if you don't have a journal (Theo says that the 'nojournal' feature only disables the journal writes to disk and not the journal logic)
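If someone wants to test that claim, roughly like this - assuming an ext4 filesystem on /dev/sdb1 (the device name is a placeholder), and note the filesystem has to be unmounted for tune2fs:

    # drop the journal entirely (the 'nojournal' feature)
    tune2fs -O ^has_journal /dev/sdb1
    e2fsck -f /dev/sdb1

    # the mount options in question, for comparison runs
    mount -o nobarrier,data=writeback /dev/sdb1 /mnt/test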
CFQ change in 2.6.33
I'm surprised it wasn't mentioned that the CFQ scheduler in 2.6.33 underwent a change that increases system responsiveness at the expense of throughput. This would significantly affect the benchmark results if you are comparing them to results from previous kernels. In addition, CFQ optimizes request ordering for rotational media, which is entirely unnecessary on solid state drives and just adds overhead. As I said earlier, it'd be better to use a different I/O scheduler such as deadline or noop when testing SSDs. That would eliminate the extra variable of an I/O scheduler that changes with each kernel release, and would likely yield better performance as well.
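For reference, switching the scheduler per device is trivial, so it costs nothing to pin it for SSD runs. Something like this (assuming the SSD is sda and the schedulers are compiled in):

    # show the available schedulers; the active one is in brackets
    cat /sys/block/sda/queue/scheduler

    # switch to deadline (or noop) at runtime
    echo deadline > /sys/block/sda/queue/scheduler

    # or set it for every device at boot with the kernel parameter:
    #   elevator=deadline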
I think the point of default options is that most users do not know, or are not able to use, the correct switches to get the greatest performance. Only a few people know which switches to use. So why not benchmark with the default options that everyone will actually use?
Originally Posted by lordmozilla
Another thing is that if you aggressively tailor for performance, you often lose other functionality. For instance, reliability. Who wants to use a very fast file system where your data is unsafe? I prefer a slow filesystem where my data is safe, not subject to the silent corruption and bit rot that all file systems (except ZFS) are prone to. Of course, if it is fast, all the better. But the point of a file system is that your data is safe. Better slow and safe than fast and unsafe?
Well, take 3D video driver benchmarks, for instance: some comparison articles try to not only measure frame rate but also judge quality. They provide side-by-side screenshots and such. When benchmarking HD video playback, they not only give the raw numbers but also try to assess, subjectively, the quality of the presented video.
Originally Posted by kebabbert
A filesystem benchmark suite, in my opinion, isn't complete unless it attempts to also index reliability, or at least mentions it subjectively as a caveat. If number 1 and number 2 are separated by microseconds, but number 1 increases your likelihood of data loss by a non-trivial amount, ... well... you get it.
I do like these Phoronix benchmarks, though, since they indicate slowdowns or speed-ups as these filesystems evolve.
kebabbert, then the distros should set some good defaults in fstab (rough example after this post).
But the current situation is a mess. extX is unsafe by design and unsafe by default. It is tuned for benchmarks, and people are like 'OMG EXTX IS SO ÜBER', while other filesystems put file safety first - and lose in stupid benchmarks like the ones Phoronix runs.
Turn on barriers for ext3 and watch it lose badly. Or make ext4 stop doing its stupid-but-fast allocations and watch it completely break down.
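For what it's worth, 'good defaults' could be as simple as an /etc/fstab line like this - just a sketch, the device, mount point and filesystem are made-up placeholders:

    # safety first: barriers on, ordered data mode (the ext3 default)
    /dev/sda1  /  ext3  defaults,barrier=1,data=ordered  0  1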
Yes, but try to tell them that ext is neither that reliable nor that fast, and they start to call you names. Even though ext developers have admitted that it is unreliable, they would not accept it when you tried to link to the interview.
Originally Posted by energyman
Anyway, I think that a tailored benchmark is misleading. It is like benchmarking a 'normal' CPU that is actually overclocked and has special functions enabled; that will not help normal users, so those benchmarks would be misleading. The same goes for specially tailored benchmarks. Of course, sometimes it is really bad, for instance benchmarks where OpenSolaris used gcc 3.2 in 32-bit mode vs Linux with gcc 4.3 in 64-bit mode, but such is life. If the Solaris people could tailor the compiler the way the Linux people do, it would be fairer. But not many people have that expertise.