The system info at the beginning of the article is unreadably small.
Other than that, I am surprised to finally see a Gentoo benchmark happen here, even if it seems it was set up by external forces. Regardless of what compiler was used, it would be interesting to see the USE flags and CFLAGS, though I suspect they are buried in that unreadably small picture...
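For anyone wondering where those settings live: on a Gentoo box they come from /etc/portage/make.conf. A hypothetical example (these values are purely illustrative, not what the article's system used):

```shell
# /etc/portage/make.conf -- illustrative values only, not from the article
CFLAGS="-O2 -pipe -march=native"   # compiler flags applied to most packages
CXXFLAGS="${CFLAGS}"
USE="X qt4 kde -gnome"             # global USE flags
MAKEOPTS="-j4"                     # parallel build jobs
```

Running `emerge --info` prints the active CFLAGS and USE flags, which is exactly what a benchmark writeup should include.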
I guess that depends on the number of packages. I have compiled Gentoo with various package sets on a lot of x86-style arches so far. KDE + Qt on a VIA C3 1200 MHz took, IIRC, about 3 days. I have also compiled on an Athlon XP 1800+, a Geode LX 500 MHz, an AMD 4850e (2 cores, max. 2500 MHz), an AMD x645 (4x 3100 MHz) (all of those rather quick), and a VIA C7 1200 MHz. So if you have a script at hand that will emerge your stuff, you are certain there are no blocks, and you have configured everything... it takes anywhere from a few hours to a few days to get to a command line.
Though it really depends. If you have a working .config for your kernel, that will save you two hours of going through make menuconfig and checking everything in the kernel config. The same goes for make.conf, getting your fstab in shape, grub.conf, xorg.conf, and all the package.use or package.use.d stuff. Configuring things can take a lot of time; if you can copy-paste it and have done it several times before, it is much faster.
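Reusing a saved kernel config is quick. A rough sketch (the source path is hypothetical; on older kernels of this era `make oldconfig` is the relevant target, and it only prompts for options that are new since the saved config was made):

```shell
# Reuse a known-good kernel config instead of a full `make menuconfig` pass.
cd /usr/src/linux
cp /path/to/saved/.config .config   # drop in the saved config (path is an example)
make oldconfig                      # only asks about options new to this kernel
make -j4                            # build the kernel image and modules
make modules_install install        # install modules and the kernel
```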
The rest is download time for the sources (your connection plus the mirror you're downloading from) and the compile time (pure CPU stress; some RAM and HDD are also in use, but unless you are on <<512 MB RAM this is not a bottleneck). Add to that the use of ccache, or even crazy things like distributed compiling, or cross-compiling on a big box.
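Enabling ccache in Gentoo is just a couple of make.conf lines; something like the following (the directory and cache size are assumptions, pick your own):

```shell
# /etc/portage/make.conf -- let Portage build through ccache
FEATURES="ccache"                # tell Portage to wrap compiler calls in ccache
CCACHE_DIR="/var/tmp/ccache"     # where cached object files live (example path)
CCACHE_SIZE="2G"                 # cap the cache size (example value)
```

With a warm cache, recompiling packages that haven't actually changed gets dramatically faster.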
But with an automated script, no dependency problems whatsoever, a standing config, an empty ccache... let's say "emerge -D --emptytree world" or something, recompiling everything, on a system with a kernel, bash, X, LibreOffice (+ Boost libs and stuff), Mozilla browser(s) like SM or FF (plus libs), KDE 4.8/4.9 plus Qt... expect anywhere from less than a day on a really fast machine to more than a week of constant compiling.
Basically the biggest impact on the results is legacy ARM vs. Thumb-2 code generation. Gentoo is still not as progressive/experimental as Linaro/Ubuntu. That said, using the same compiler versions would definitely be a cleaner experiment for showing off the current state-of-the-art Thumb-2 performance.
The article claims that Gentoo Stable is using GCC 4.7. This is not the case. Stable ("arch") is using GCC 4.5.4, while Testing ("~arch") is using 4.6.3. GCC 4.7 is actually hard-masked and marked as experimental. You cannot install it unless you unmask it (in Gentoo, that means telling the system "I'm about to potentially shoot myself in the foot, and yes, that's what I want").
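For the record, unmasking a hard-masked GCC takes two config entries, roughly like this (the exact atom/version here is an assumption for illustration):

```shell
# /etc/portage/package.unmask -- lift the hard mask (hypothetical atom)
=sys-devel/gcc-4.7.1

# /etc/portage/package.keywords -- also accept the unstable keyword
=sys-devel/gcc-4.7.1 **
```

The point being: this is a deliberate, two-step opt-in, not something a default install ends up with by accident.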
So I'd say this benchmark is bogus. If you installed GCC 4.7 on Gentoo, you should have done the same for Linaro. Since you didn't do that, the benchmark is highly biased.
Also, if GCC was upgraded to a non-stable (and even non-testing) package version, then who knows what else was. I suspect the person who made the images for Phoronix didn't even mention this.