I think the more interesting comparison is with the series of tests which showed Ubuntu's performance decline very sharply after 7.04 and recover a little with 8.10. The fact that Fedora 10 and Ubuntu 8.10 are in effect identical performers leads me to wonder if all desktop distributions have suffered a big performance hit after kernel 2.6.15 (the Ubuntu 7.04 kernel).
Sorry, but you are only the 100,000th person who believes that the tests Phoronix ran with Ubuntu 7.04, 7.10, etc. were correct. There is enough proof that something went wrong during those tests (e.g. my P3-1000MHz gets nearly the same numbers for Ubuntu 8.10 as the tested Core2Duo 1.87GHz, and my P-Mobile 1.7GHz gets nearly the same numbers for Ubuntu 8.10 as Ubuntu 7.04 did in the Phoronix test, etc.), so the numbers from the old tests can't be trusted.
Now that Ubuntu 8.10 and Fedora 10 are marked stable, this would be a good time to rerun the tests with Ubuntu and Fedora (7.x, 8.x, etc.) on the same hardware as the "Fedora 10 vs. Ubuntu 8.10" test.
OS vs. OS comparisons should really be done using the distro's own packages anyway, since that is how most people run them. In a hardware comparison, the same OS version should be used and the packages recompiled.
OS vs. OS comparisons can use the defaults, sure, but comparing performance across library versions is even better, so that users know they can get better performance by installing newer or older libraries.
Compilation doesn't have to be required; that's what binary packages are for, and it doesn't really have anything to do with this. The point is that you should, ideally, be able to pinpoint the cause of slowdowns to differences in the libraries; and if those are the same, you know the performance loss lies elsewhere.
You're probably right, though: most users probably don't care that much. But if installing newer libraries were easier and compilation were required less often, maybe more users would. IMO that's where the focus should be: on the actual programs that cause the differences in performance. If you don't trace the problems to where they actually are, they'll never get solved.
I was looking through the recent set of tests (vs. Mac, vs. Fedora, vs. OpenSolaris, etc.), and I'm surprised to see that there doesn't seem to be a consistent set of test results published. While you seem to use the same test suite, the results that are shown are cherry-picked.
Perhaps these are just highlights showing the interesting comparisons, but it would be good to publish links to the entire set of results; otherwise there's the chance that an unfair comparison is being made by showing only the favorable results.