Phoronix: What Shall We Benchmark Next? Let Us Know!
With Phoromatic we make it easy to build a test farm for benchmarking and automated regression management. To prove it, we have been monitoring Linux kernel performance for months now, tracking the latest kernel code daily across multiple systems. Yesterday we announced that Phoromatic has reached 1.0 status and that we have launched the Phoromatic Ubuntu Tracker, which monitors the performance of Ubuntu Linux as a whole each day by benchmarking the most recent development packages. A day has passed, so we are already thinking about what to add next to our test farm for continuous performance tracking...
Why not do some testing of popular applications on Wine on a daily basis? Wine's git HEAD is rather stable (Alexandre runs the unit tests and a compile test before committing anything), so that should work fairly well.
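A daily tracker along those lines does not need much machinery: pull git HEAD, rebuild, time a workload, log the result. Here is a minimal sketch of one iteration; the repository path, the plain git/make build steps, and the benchmark command are all placeholders, not how Phoromatic actually works:

```python
import datetime
import subprocess
import time

def timed_run(cmd):
    """Run one command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def nightly(repo_dir, benchmark_cmd):
    """One tracker iteration: update the checkout, rebuild, time the workload.

    repo_dir and benchmark_cmd are hypothetical; the git/make steps assume
    a plain make-based checkout and would be replaced by the real build.
    """
    subprocess.run(["git", "-C", repo_dir, "pull", "--ff-only"], check=True)
    subprocess.run(["make", "-C", repo_dir, "-j4"], check=True)
    elapsed = timed_run(benchmark_cmd)
    # Append one dated line per day; plot the column to spot regressions.
    print(datetime.date.today().isoformat(), f"{elapsed:.2f}s")
```

Run from cron once a day, the accumulated log is enough to spot a regression the morning after the commit that caused it.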
Why don't you track the performance of another kernel, such as FreeBSD's? It would be interesting to compare the development approaches taken by the different camps. Does FreeBSD experience performance regressions (as Linux recently did with ext4)? Does FreeBSD get faster or slower with every release?
I wouldn't expect UFS2 to change much, as it is tried and tested (although the introduction of Soft Updates + Journaling [SUJ] might change that). ZFS, on the other hand, may fluctuate significantly as fixes and improvements land. The kernel is also slowly moving away from the Giant lock (similar to the Python GIL), so that may result in performance increases (or decreases).
Another area of benchmarking that I do not recall Phoronix covering is the network stack. How do Linux, FreeBSD, and other OSes compare in TCP/IP(v6) throughput, load balancing, firewalls, etc.? How resilient are the kernels to DoS attacks and other maliciousness?
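A crude userspace view of TCP throughput can be had with nothing but the standard library: push bytes over a loopback connection and measure MB/s. This only exercises the local stack (a proper cross-OS comparison would use a dedicated tool like netperf or iperf over real hardware), and the transfer and chunk sizes below are arbitrary:

```python
import socket
import threading
import time

def loopback_throughput(total_mb=64, chunk=65536):
    """Send total_mb megabytes over a loopback TCP connection and
    return the measured throughput in MB/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # bind to any free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def drain():
        # Receiver: accept one connection and read until EOF.
        conn, _ = srv.accept()
        while conn.recv(chunk):
            pass
        conn.close()

    t = threading.Thread(target=drain)
    t.start()

    payload = b"x" * chunk
    sent = 0
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.perf_counter()
    while sent < total_mb * 1024 * 1024:
        cli.sendall(payload)
        sent += chunk
    cli.close()  # EOF tells the receiver to stop
    t.join()
    srv.close()
    return sent / (1024 * 1024) / (time.perf_counter() - start)
```

Run daily against each kernel, even a toy like this would show whether a stack change moved the needle, though real firewall and DoS-resilience testing needs far more elaborate rigs.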
With Clang maturing and FreeBSD taking a close look at adopting it, that might be another area for consideration. I would expect compilation times under Clang to increase as more features (especially C++) are implemented, yet the runtime of the compiled binaries should decrease as more optimizations land. This could also be compared against the GCC 4.x progression.
Preferably, compare the time it takes to compile a system with -Os, -O2, and -O3, then compare the disk space the builds take up, the responsiveness of the system, memory usage, etc. I'd be interested to see the differences.
Or use -O2 and compare different versions of GCC.
Or use the same version of GCC and compare CPU schedulers or kernel versions.
Or have a system that is always running the latest X.Org components and the latest drm-next kernel, doing a compile/test cycle and effectively monitoring git for regressions.
We are seeking requests for trackers that continuously monitor performance, not for one-off benchmark articles. Post those requests in another thread, as I will just be ignoring them in this one.
A Wine benchmark would be cool if you could base it on a native game that also has both an OpenGL and a Direct3D renderer for its Windows version.
(e.g. Unreal Tournament 2004)
You could then test with lower-spec cards from AMD and NVIDIA, and whatever graphics Intel has, and see how native performance compares with Wine performance under both the Direct3D and OpenGL renderers.