Is there a link missing from this article? I would expect it to explain the benchmarking approach, i.e. how many runs were performed, what efforts are taken to eliminate or account for other processes distorting results, and so on.
I'd also expect to see the standard deviation for every result.
Without the standard deviation, or some other measure of variance, and an explanation of the methodology, it is difficult (a polite way of saying practically impossible) to conclude anything.
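To make this concrete, here is a minimal sketch of what reporting variance looks like, using Python's standard library and made-up timings (the numbers are hypothetical, not from the article):

```python
import statistics

# Hypothetical wall-clock timings (seconds) from five runs of one benchmark.
timings = [41.2, 40.8, 43.5, 41.0, 41.4]

mean = statistics.mean(timings)
stdev = statistics.stdev(timings)  # sample standard deviation (divides by n - 1)

# Report both: a single number hides how noisy the runs were.
print(f"mean = {mean:.2f} s, stdev = {stdev:.2f} s")
```

With the standard deviation in hand, a reader can judge whether a difference between two results (say, 41.6 s vs. 42.1 s) is larger than the run-to-run noise or lost inside it.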
If this is all 'mumbo-jumbo' then I'd recommend looking at "Statistics Hacks" by Bruce Frey, published by O'Reilly. It helped me a lot.
You aren't alone in not providing enough information, so please don't think I am trying to pick on you. If you feel like being beaten up for it, go have a look at Zed Shaw (http://www.zedshaw.com/rants/programmer_stats.html), but I would recommend "Statistics Hacks" as less painful and more productive :-)
Over the past three years at Phoronix we have established strict standards for Linux benchmarking that involve multiple runs and careful management of background processes; between articles / rounds of testing we even reformat the hard drive(s) and do a complete reinstall of the Linux distribution and its updates on each test system.
I have commented more on our benchmarking process in other threads here on the forums and in some of our other articles. There is no single page that lists every step involved and all of the work that goes into our testing. We routinely make minor tweaks and other refinements to our testing procedures, but as the executive editor I personally oversee each and every benchmark. We also maintain a variety of scripts for managing and automating the benchmarks while ensuring accuracy. In the near future I will work on a manifest that contains all of this information.
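For readers curious what "multiple runs" with automation might look like in practice, here is a minimal sketch of such a loop. This is not one of the actual Phoronix scripts; the command, run count, and function names are placeholders for illustration:

```python
import statistics
import subprocess
import time

RUNS = 3  # placeholder; the actual number of runs used is not stated here


def time_command(cmd):
    """Run cmd once and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start


def benchmark(cmd, runs=RUNS):
    """Run cmd several times and return (mean, sample standard deviation)."""
    timings = [time_command(cmd) for _ in range(runs)]
    return statistics.mean(timings), statistics.stdev(timings)


# Placeholder workload; a real harness would invoke the benchmark binary.
mean, stdev = benchmark(["sleep", "0.1"])
print(f"mean = {mean:.3f} s, stdev = {stdev:.3f} s")
```

Publishing the mean together with the standard deviation from such a loop is exactly the kind of information the earlier post asked for.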