Quote Originally Posted by mtippett
To some extent I disagree with this. The upstream developers, lead developers who push to the kernel and the distribution vendors all have a part to play in the cycle of getting things to production.

Phoronix is just reporting on the state of whatever made it into the kernel.

My view is that the question isn't so much why Phoronix highlighted these 2.6.30 results without first discussing them with the upstream developers, but rather why the maintainers weren't aware of the benefits and deficiencies before pushing into the kernel.

Most people will review the general performance, select the filesystem that performs best for their intended use, and then tune only that platform.

I appreciate the position the developers are in, but PTS is easy enough to run that developers can check for themselves before pushing downstream. It shouldn't take any real effort to test before pushing.
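For what it's worth, kicking off a PTS run really is only a couple of commands; the test profile name below (pts/sqlite) is just one example of a disk-heavy test:

```shell
# Install a test profile and its dependencies, then run it.
# pts/sqlite is one example of an I/O-bound profile; any other
# profile name can be substituted.
phoronix-test-suite install pts/sqlite
phoronix-test-suite benchmark pts/sqlite
```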


I get your point, but going just a -little- deeper to analyze the results (beyond "shocking!") would do a great service.

Imagine this instead:

The ext4, xfs, and btrfs results were much, much slower than ext3's, but that is due to the extra data-integrity measures they employ, which are critical for databases. When we spoke with the developers of these filesystems, we learned that this is largely only needed when a volatile write cache is present in the storage; when that is not the case (for example, with a battery-backed RAID cache or a reliable UPS), barriers may be safely disabled to improve speed. To verify this, we re-tested with this tuning, and here are the results:
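And that tuning is literally a mount option. A sketch of what such a re-test would use (device and mount point names here are placeholders, and disabling barriers is only safe on non-volatile or battery-backed write caches):

```shell
# Disable write barriers -- ONLY safe when the storage has no
# volatile write cache, or the cache is battery/UPS protected.
# /dev/sdb1 and /mnt/db are placeholder names.

# ext4 (barriers are on by default in 2.6.30):
mount -t ext4 -o barrier=0 /dev/sdb1 /mnt/db

# xfs (barriers also on by default):
mount -t xfs -o nobarrier /dev/sdb1 /mnt/db

# ext3 defaults to barriers OFF, which is part of why it looks
# so fast here; turn them on for an apples-to-apples comparison:
mount -t ext3 -o barrier=1 /dev/sdb1 /mnt/db
```

That last line is the flip side of the same story: comparing ext3-without-barriers against ext4-with-barriers isn't really comparing the filesystems.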
Sure, developers could run PTS, but to be honest, most fs devs I know don't think the PTS filesystem tests are worth a warm pile of spit. And I say that as nicely as I can. FS developers certainly do run relevant benchmarks before pushing.

I'm hoping to go through some of the PTS tests and make suggestions to make them more repeatable & relevant. But for example the "mp3 file encryption" tests which used to be run, as noted over and over, simply do not stress the filesystem.

The same is likely true even for the apache static page serving test; in my testing, the only significant FS activity was writing errors to the error logs.

Another oddity is the iozone 4G read test: it reports ext4 doing 220MB/s on a single SATA drive, which is almost certainly faster than the drive can actually get data off the disk, so it may be measuring in-cache performance to some degree. A baseline of the raw storage performance would go a long way toward putting results in context. Clearing caches before the read portion would show what the filesystem itself can do, and using a dataset significantly larger than the system's memory would make the test more relevant.
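Both of those fixes are one-liners. A rough sketch (needs root; /dev/sdb is a placeholder device, and dd here gives only a crude streaming-read baseline, not a proper benchmark):

```shell
# After writing the dataset, make sure it has actually hit disk.
sync

# Drop the page cache, dentries, and inodes (needs root) so the
# following read phase has to come off the platter:
echo 3 > /proc/sys/vm/drop_caches

# Crude baseline of what the raw device can stream, bypassing the
# filesystem and page cache (iflag=direct uses O_DIRECT reads):
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
```

With a cold cache and a raw-device number alongside it, a 220MB/s filesystem read result would immediately stand out as suspicious.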

You are right that the tests simply run & report, but there is some onus on the tests to be running things that matter, when the results are presented as they are.

I really would like to see Phoronix be a genuinely useful resource for developers & users alike, but I think there's a ways to go.