Phoronix: Plans For Phoronix Test Suite 3.4-Lillesand
As announced this morning on Twitter, the next major release of the Phoronix Test Suite is version 3.4 and it's codenamed Lillesand. Here are some initial details on this next version of our open-source benchmarking platform for Linux / BSD / Solaris / Windows, which is to be released in September...
Michael: I have an idea for an improvement to your services. I'm not sure where it'd fit in on the server-side, but I'm sure you could integrate the client side into PTS.
Simply put, imagine a conceptual fusion of BOINC and Phoronix Test Suite. From BOINC you would draw the following concepts:
People or organizations who want to see certain tests run on hardware that meets certain criteria could put out "test requests", which get sent out into the cloud/grid for anyone to run. There's no limit to the number of people who can run a single test, because more data points are useful, but there should be a minimum number of results needed to establish statistical significance.
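A test request could declare its minimum sample size up front. As a minimal sketch of one common way to estimate it (the normal-approximation formula for a desired margin of error; the function name and units here are hypothetical, not anything PTS provides):

```python
import math

def required_samples(z: float, stddev: float, margin: float) -> int:
    """Minimum number of results so that the mean's margin of error,
    at the confidence level implied by z, does not exceed `margin`."""
    return math.ceil((z * stddev / margin) ** 2)

# e.g. 95% confidence (z = 1.96), an observed stddev of 10 FPS,
# and a desired margin of error of 2 FPS:
n = required_samples(1.96, 10.0, 2.0)
```

The server could keep a request open until at least `n` results have been uploaded.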
Like BOINC, the client would download and install any software components that the test requester is particularly interested in. All of the relevant components would be downloaded and installed specifically for the test request.
The client should have a working GUI (can't get php-gtk2 to work here) where the user can select from existing well-known, featured projects to download test requests from.
Things that you would maintain that are already available in PTS:
The deep software knowledge of how to run, manage, and collect results from tests.
Automatically uploading results to openbenchmarking or other services for aggregation.
Sensitivity to both hardware and software configuration, to determine (1) whether a user can run a given test on their hardware, and (2) whether their software needs to be downloaded or reconfigured to run the test.
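That two-part check could be as simple as the following sketch (all field names and the dict-based profile format are hypothetical, not the actual PTS test-profile schema): a hard gate on hardware, and a soft gate on software that just queues up what's missing.

```python
def can_run(test_profile: dict, system: dict):
    """Return (runnable, missing_software) for this machine.

    Hardware shortfalls are fatal; missing software is merely
    something the client would download before running the test."""
    # (1) hardware gate: every minimum requirement must be satisfied
    for key, minimum in test_profile.get("hardware", {}).items():
        if system.get("hardware", {}).get(key, 0) < minimum:
            return False, []
    # (2) software gate: anything not installed gets queued for download
    missing = [pkg for pkg in test_profile.get("software", [])
               if pkg not in system.get("software", [])]
    return True, missing
```

A client with 4 GB of RAM and Mesa installed could run a test needing 2 GB, Mesa, and Unigine Heaven, but would first download the missing Heaven component.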
The whole thing should fit on a Live CD, with the option of storing downloaded data persistently under a user-chosen directory on a USB stick or an existing Linux-compatible HDD volume, so that users can retain their downloaded test data across reboots.
The user experience would look something like this:
1. Download and burn ISO.
2. Boot up live distro.
3. Log in to your openbenchmarking.org account.
4. Select tests from available test requests to run.
5. Run them and wait for them to download, configure, run, and report results.
The organization/enterprise experience would look something like this:
1. Compile test binaries.
2. Ship packaged binaries and a Phoronix test profile to openbenchmarking servers.
3. Using web interface, customize exactly which hardware configurations are allowed to run the test (whitelist), preventing the upload of results from hardware that doesn't meet the criteria. Of course, the criteria can be completely wide open if the test requester desires it. Or they can select something like "any ATI card", or "any card supporting OpenGL 2.1", or "at least 2GB of RAM", etc.
4. Wait for test results to come in from users with matching hardware.
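The whitelist in step 3 could be stored as a small list of criteria, where an empty list means "completely wide open". A minimal sketch, assuming a made-up (field, operator, value) representation rather than any real openbenchmarking.org format:

```python
# Example whitelist: "any ATI/AMD card, OpenGL 2.1 or better, at least
# 2 GB of RAM". An empty list accepts every hardware configuration.
WHITELIST = [
    ("gpu_vendor", "in", {"ATI", "AMD"}),
    ("opengl", ">=", 2.1),
    ("ram_gb", ">=", 2),
]

def matches(whitelist, hw: dict) -> bool:
    """True if the detected hardware satisfies every criterion."""
    for field, op, value in whitelist:
        actual = hw.get(field)
        if actual is None:
            return False  # unknown hardware can't be verified
        if op == "in" and actual not in value:
            return False
        if op == ">=" and actual < value:
            return False
    return True
```

The server would evaluate this both when advertising requests to clients and again when accepting uploads, so results from non-matching hardware are rejected rather than merely discouraged.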
Then, you could incentivize the whole process by offering small monetary rewards to people who successfully run tests and upload their results. The rewards would come from payments made by the enterprise customer who wants to have tests run for them.
You could take it a step further and offer a billing model that looks something like Nokia's Qt. For open source projects, it's 100% free to submit test requests, and the only money you'd make on that would be advertising on the openbenchmarking.org website. Open source project developers would be allowed to donate money into a reward pool that successful testers can draw from, to encourage participation, but this would be a royalty-free service (Phoronix would not retain any of the money).
For proprietary products, you would have to figure out some kind of fee schedule (flat rate? per-test-result fee?) to charge the enterprise customer, and pass a small fraction of those profits down to the end-users submitting the tests, to incentivize their participation.
Why am I recommending this? Because the #1 problem that I notice with people who download PTS is that they inevitably ask, "So what tests should I run, and why should I care about the results?" -- With test requests, both of these questions are answered:
What tests should I run: Well, either run ones that personally benefit you (pay you money for your time/computing power), or run ones for open source projects you care about. Example: I care about the development and progress of Mesa, so I'd love to satisfy test requests posted by Mesa developers. Example 2: I care about making money, so I'd love to satisfy test requests posted by Unigine developers wanting to measure performance on their Heaven engine.
Why should I care: Well, if the tests are for an open source project I care about, I can only hope that my results will help the developers improve the project, leading to a better experience for me. If the tests are for a proprietary product and I'll make money, that's an obvious reason to care. Plus, I might also make the proprietary product a better one as a result.
Once you had all that done and working, you could add an "icing on the cake" feature: an "Auto-Pilot" mode, where the client autonomously goes out and satisfies test requests for any tests whose hardware requirements it meets. The user could just set some parameters like "only tests where I might get a monetary reward" or "only open-source projects", then leave their computer running while they're on vacation for a week, and come back to a few dollars in their account or a lot of open source projects thankful for their contribution.
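The Auto-Pilot loop itself is just a filter over the available requests. A minimal sketch, with hypothetical preference flags and request fields (none of these names exist in PTS today):

```python
def autopilot(requests, prefs, run):
    """Unattended mode: run every available test request that matches
    the user's standing preferences, and tally any rewards earned."""
    earned = 0.0
    for req in requests:
        # "only tests where I might get a monetary reward"
        if prefs.get("paid_only") and req.get("reward", 0) <= 0:
            continue
        # "only open-source projects"
        if prefs.get("open_source_only") and not req.get("open_source"):
            continue
        run(req)  # download, configure, run, and upload results
        earned += req.get("reward", 0)
    return earned
```

A real client would also re-check hardware eligibility before each run and poll the server for new requests, but the filtering policy is this simple.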
Last edited by allquixotic; 07-03-2011 at 03:40 PM.