Quote Originally Posted by Rexilion View Post
These were the only differing benchmarks AFAIK. No point in showing benchmarks where everything is the same.
Agreed, showing benchmarks that are identical across all platforms is not very informative, as it only indicates that nothing changed in that particular area of development. But there was great variety in the game benchmark results: between the F release and both U releases, these really highlighted the steps forward in only six months of development. That partly answers another of your questions below. My argument is partly about where you would actually use a cutting-edge distro, and as a storage device I would strongly advise against using the latest release of any distribution in that role. In reality these distros are more likely to be used for gaming than RHEL or an LTS release is, so why not show the benchmarks for that target?


Quote Originally Posted by Rexilion View Post
Different compositors and different distributions are indeed a separate comparison. So yes, the outcomes can be different. What is the point?
Well, the point of benchmarking is to eliminate as many variables as possible and provide comparative results. Since gnome-shell is available on the U releases, it would be greatly beneficial to include results under gnome-shell as well, giving a more accurate cross-distribution comparison. Showing the in-game results in this F vs. U benchmark makes it easier to correlate them with the cross-compositor benchmarks at http://www.phoronix.com/scan.php?pag...desktop1&num=1
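To make the "eliminate variables" point concrete, here is a minimal sketch of how one could drive the Phoronix Test Suite so the distro/compositor is the only thing that differs between runs. The test profile (pts/xonotic) and the result names are illustrative assumptions, not from the article; DRY_RUN echoes the commands rather than executing them, since PTS may not be installed.

```shell
#!/bin/sh
# Sketch only: run the identical test profile on each distro install,
# then merge the results into one comparison. Names are hypothetical.
DRY_RUN=echo                    # set to "" to actually run the suite
RESULT=fedora-vs-ubuntu         # shared result name across both installs

# On each distro, run the same test profile with the same settings:
$DRY_RUN phoronix-test-suite benchmark pts/xonotic

# Afterwards, combine the per-install result files for a side-by-side view:
$DRY_RUN phoronix-test-suite merge-results "$RESULT-fedora" "$RESULT-ubuntu"
```

Running the same profile from the same suite version on each system is what makes the gnome-shell numbers directly comparable across distributions.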


Quote Originally Posted by Rexilion View Post
FPS figures are subjective? Please explain.
The subject, in the case of benchmarking, is, as I put it just above, whatever something is mostly used for. For current/modern distros, game performance is becoming an important metric, so why leave it out?


Quote Originally Posted by Rexilion View Post
This argument does not in any way invalidate said results. You argue about the different distributions used in this test. So what? Furthermore, Ubuntu and Fedora both have approx. 6 month development cycles.
During the life cycle of a product, many tweaks, patches and updates are applied. For server/enterprise-class tests (web perf / db perf / IO capacity) it is a bit futile to use a development distribution. For those tests it would be more appropriate to benchmark the U LTS and the current RHEL release, as they share a similar market space targeted at http/db/fs workloads.


Quote Originally Posted by Rexilion View Post
I'm not sure, packages are frozen before the final version is rolled out.
So why, if the package versions are still in flux, is it included in a benchmark designed to assess consistency? There is also the fact that betas are generally built with full debugging enabled, which impacts performance.


I'm not out looking for a fight, just seeking equality, consistency and unbiased journalism. It's pretty much a given that people only read what's in front of them, so having relevant data tucked away in another post is a bit irresponsible when the aim is openly comparative data. On that note, it would be great to see some SUSE results in the mix in addition to Debian, but I understand there are only so many hours in a day and limited resources.