I had similar problems, which I tracked down: mplayer probably failed due to RAM usage. I first ran this on a system with 2GB of RAM and about 1GB of swap (not modern -- a Compaq SP750 with dual P3-866s; an AMD Phenom on the site is exactly 5x faster per core, 10x faster overall thanks to having twice the cores, but the SP750 sure seems snappy) with no problems. I then tried running it on my home systems, which are much newer Athlon XPs, and they essentially locked up. These have 512MB of RAM and 1.5GB of swap instead; on one where I had top running, it showed all 20 or so processes as "cc1" before the machine started thrashing too hard to even update top. I'd guess the mplayer benchmark died for you because you have somewhere in the 512MB-2GB range of RAM, so instead of swapping to death the system simply ran out of memory and killed some copies of cc1.
I found the phoronix-test-suite uses "NUM_CPU_JOBS" throughout, but the variable is never set. The builds run "make -j $NUM_CPU_JOBS foo..". Make's "-j" option sets how many jobs (usually copies of gcc) to run at once, and "-j" by itself means an unlimited number. So the mplayer build fires off something like 50 gcc processes at once and the system runs out of RAM.
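To make the expansion problem concrete, here's a small sketch (not the suite's actual script; "foo" is a stand-in target name). When the variable is unset, the word expands to nothing and make sees a bare "-j":

```shell
#!/bin/sh
# With NUM_CPU_JOBS unset, the unquoted expansion vanishes entirely,
# so make is invoked with a bare -j: unlimited parallel jobs.
unset NUM_CPU_JOBS
echo make -j $NUM_CPU_JOBS foo     # echoes: make -j foo

# With it set, make gets an actual job limit.
NUM_CPU_JOBS=2
echo make -j $NUM_CPU_JOBS foo     # echoes: make -j 2 foo
```

The echo just shows what command line make would actually receive in each case.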
I ran "export NUM_CPU_JOBS=2" before running the phoronix-test-suite and the compiling looks much more under control. (Adjust NUM_CPU_JOBS as appropriate for your number of cores etc.; I used 3 on my hyperthreaded machine.)
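If you'd rather not hard-code the number, a minimal sketch of the workaround, assuming GNU coreutils' nproc is available (fall back to 2 if it isn't):

```shell
#!/bin/sh
# Derive a job count from the detected core count; nproc is an
# assumption here -- substitute your own core count if it's missing.
NUM_CPU_JOBS=$(nproc 2>/dev/null || echo 2)
export NUM_CPU_JOBS
echo "NUM_CPU_JOBS=$NUM_CPU_JOBS"
# then launch the suite as before, e.g.: phoronix-test-suite benchmark mplayer
```

You may want a value slightly above the core count (as with 3 on a hyperthreaded machine) to keep the compiler busy during I/O waits.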
Oops, this will be fixed in git. I didn't realize I had changed the exported variable from NUM_CPU_JOBS to SYS_CPU_JOBS. I'll turn it back to NUM_CPU_JOBS so that all the scripts work properly again.