Another interesting "environment" to test PTS in would be the Microsoft/Interix SUA, "Subsystem for UNIX-based Applications", which is a pretty low-level userspace subsystem that completely bypasses the Windows API and talks directly to the kernel using a POSIX-based C library. It even has an init.exe process, a bash shell, and a lot of other stuff that you can download online. All you need to run it is Vista/7 Enterprise, or any Server version of Windows since 2003 (although Vista/7 ship the latest version of SUA, and there is no way to upgrade to the latest if you use XP/2003).
If you can bring up the PHP stack on SUA, you could run some benchmarks (CPU, network, and disk only, probably, unless you can run a hardware-accelerated X server on SUA... not likely) comparing the Win32 subsystem against the POSIX subsystem on the NT kernel. This would be an interesting way to see whether the Win32 API is a bunch of slow fluff, or whether the POSIX API isn't that much of a winner. And it's a much better test bed than Cygwin, because Cygwin is implemented on top of the Windows API, while Interix SUA is not.
GNU Hurd has a really unique architecture that's going to take a long time for the conventional computer market to understand and grow around. The concepts of "applications" and "drivers" are totally different and it's going to take time for developers to wrap their heads around these new ways of doing things.
As far as the "for serious" part goes, it is ALREADY being used for "serious" thinking about what it means to have a secure operating system, and to most people that is a far more "serious" application than "does it run Firefox?".
I would like to see a list ranging from smartphones to Intel/AMD server processors, as Tom's Hardware does, but with this openbenchmark index instead of Futuremark or 3DMark.
Do you know how useless those Tom's benchmarks are? I caught them last year or the year before fudging the numbers. They don't even actually run all the tests on all the processors; they benchmark one and then say, "OK, if Intel beat AMD by 26% in this test it must hold true elsewhere, so we will just create a linear graph and interpolate the rest of the results." And their Linux benches are a joke. They compile the benchmark on one Intel machine with the Intel optimization flags and then carry that binary over to test on other processor families, even using the same binary on the AMD systems, negating any optimization that the compiler would do for the AMD CPUs.
They compile the benchmark on one Intel machine with the Intel optimization flags and then carry that binary over to test on other processor families, even using the same binary on the AMD systems, negating any optimization that the compiler would do for the AMD CPUs.
Well, if it's a non-optimized build, then that would be a valid way of doing it (using the same binary on each computer), since it removes a variable (assuming you want to test the same software on two sets of hardware). If it's an optimized build, then the test is certainly flawed.
I won't argue about those other points you made, as they're obviously bad practice (although, to be fair, it'd be nice if you had some proof to go with your claims).
Anyway, I'm looking forward to the benchmarks. It would be nice to see how well it performs, even if only under a VM. It'll be especially interesting when the Hurd starts supporting more current hardware (either through native drivers, or through Linux drivers via DDE).