Phoronix: Calxeda Claims 15x Advantage Over Intel Xeon
Calxeda has put out its first benchmark of their forthcoming Calxeda ARM Server. The company is claiming a 15x performance-per-Watt advantage over a recent Intel Xeon CPU...
However, this gives me pause:
I don't think the standard TDP measurements are generally very accurate. At least, I know they aren't on the desktop. Maybe it's better for Xeon servers, but I doubt it. From the blog post:

"The Intel (Sandybridge) platform is based on published TDP values for the CPU and I/O chipset, along with an estimate for DDR memory. Unfortunately, at the time of this blog post, we didn't have a way to measure actual power consumption with the same level of fine detail."
Also, Apache is pretty much the best case for these servers, or at least that was always true for those old low-power, high-core-count servers Sun used to make; I can't remember what they were called. But since Apache is actually a decent use case, that's valid enough.
They seem to be serving static content; no, Apache is far from the top choice for that. But if they're maxing out the pipe either way...
1. They compare a measured value for the ARM system against the TDP of the Xeon, not its actual usage. The Xeon was at 15% load, and on top of that, TDP != power draw.
2. They have 4 GB of RAM in the ARM system and 16 GB in the Xeon. Using their bogus TDP figures, that extra 12 GB accounts for roughly 12 W extra. Unfair!
3. They exclude the hard drive and the power supply, which makes no sense.
4. The Xeon saturated the link at 15% load. You would never use that chip for that workload with a single NIC; you'd use a 4-port board at the very least.
5. The V2 Xeons are out, which bring performance increases and power savings.
So here's my equally bogus, made-up comparison, with assumptions in parentheses:
Xeon E3-L-V2 (17 W at 100%) with a 4-NIC board (linear scaling), 4 GB RAM (4 W), HD (7 W), and PSU (~90% efficient):
Xeon: (6950 × 4) ÷ ((4 + 6.7 + (17 × 1) + 7) × 1.1) = 728.3 req/W
Calxeda: 5500 ÷ ((5.26 + 7) × 1.1) = 407.8 req/W
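Here's the same back-of-the-envelope arithmetic as a quick Python sketch. Every input is one of my made-up assumptions from above, not a measurement; I'm reading the 6.7 W term as the I/O chipset figure, and the ×1.1 factor approximates a ~90% efficient PSU.

```python
# Bogus req/W comparison using the assumed figures above.
PSU_OVERHEAD = 1.1  # ~90% efficient PSU, so wall draw ~= DC load x 1.1

# Xeon E3-L-V2 box: 4 NICs, assuming linear scaling from the single-NIC result
xeon_reqs = 6950 * 4            # req/s across 4 saturated GbE links (assumed linear)
xeon_load = 4 + 6.7 + 17 + 7    # W: 4 GB RAM + chipset (my guess) + CPU at 100% + HD
xeon_req_per_watt = xeon_reqs / (xeon_load * PSU_OVERHEAD)

# Calxeda node: the published 5500 req/s at 5.26 W, plus the same HD
calxeda_reqs = 5500
calxeda_load = 5.26 + 7         # W: node + HD
calxeda_req_per_watt = calxeda_reqs / (calxeda_load * PSU_OVERHEAD)

print(f"Xeon:    {xeon_req_per_watt:.1f} req/W")    # ~728.3
print(f"Calxeda: {calxeda_req_per_watt:.1f} req/W") # ~407.8
```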
Well, my-oh-my! Calxeda loses! Even if some of my assumptions and figures are off, it would certainly be nowhere near a 15x advantage.
But my point was that if a single core is providing 15x more speed, it's no surprise that it would be 15x less efficient per watt.
Edit - and again I take this back, if what the above person is saying is true. (Yep, it is.) I'll just give up now, since I don't have the time to actually look into this myself:

"The Sandybridge system saturated the single 1Gb NIC with less than 15% CPU utilization."
But if the Xeon is saturating the link without maxing out the CPU, then it seems likely that the ARM CPU really does suck compared to it, performance-wise.
So my original argument is back: cut down that Intel CPU or beef up the ARM one, and things are likely to be very different. And it's just stupid to include the full TDP for one platform when you aren't even maxing it out.
Last edited by smitty3268; 06-20-2012 at 02:13 PM.
And now it's hit Engadget! Cue the misplaced fawning over the misrepresented and bogus results.
As an aside, the guy at http://www.servethehome.com/ has tested the power usage of Xeon systems at idle and under load. There's typically a 70 W delta! So at 15% load, that chip would have been drawing around a quarter of the value they put into their chart.
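A rough sanity check of that "quarter of the chart value" estimate. The ~70 W idle-to-load delta is from those servethehome measurements; the idle draw, the ~100 W chart figure, and the linear scaling with load are all my own guesses.

```python
# Rough sanity check: estimated Xeon draw at 15% load vs the charted TDP-based value.
idle_watts = 15     # assumed idle draw (my guess)
delta_watts = 70    # typical idle-to-full-load delta from servethehome's tests
chart_watts = 100   # roughly what the chart used for the Xeon platform (my guess)

# Assume power scales linearly with load between idle and full load
at_15_percent = idle_watts + 0.15 * delta_watts
print(f"~{at_15_percent} W at 15% load")                        # 25.5 W
print(f"fraction of chart: {at_15_percent / chart_watts:.2f}")  # ~0.26, about a quarter
```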