A quick bit of math:
Originally Posted by SavageX
*CPU is really 125W
*12V line is 168W
if those are true, then the VRM efficiency is 74.4%. That seems a bit low, but not so crazy that I'd dismiss it outright. Considering the temperatures the VRMs on my AM3 board run at, I wouldn't be surprised if they need to get rid of ~40 watts.
Heise.de does not state the actual voltage on the 12V line (it's not exactly 12V), nor do they specify the quality of the measuring device or how the measurement was taken. Is it +/-3%? +/-5%? +/-0.001%? What sort of voltage drop is there across the ammeter? Were they accidentally over-volting the CPU? What is the 12V ripple (get out your oscilloscope to measure that), and did they use it to calculate the RMS voltage? Anyway, there is enough possible error here that the real TDP could be 125W or so, and we have no way of knowing for sure.
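For what it's worth, the arithmetic above is easy to reproduce, including the effect of a plausible meter tolerance. The +/-5% figure below is an assumption for illustration, not anything heise specified:

```python
# Back-of-envelope check of the numbers above (assumed values):
# 125 W claimed CPU TDP, 168 W drawn at the 12 V CPU connector.
cpu_w = 125.0
input_w = 168.0

eff = cpu_w / input_w
print(f"implied VRM efficiency: {eff:.1%}")   # ~74.4%

# If the meter were only good to +/-5% on both volts and amps,
# the "168 W" reading could really be anywhere in this window:
tol = 0.05
lo = input_w * (1 - tol) ** 2   # both readings reading low
hi = input_w * (1 + tol) ** 2   # both readings reading high
print(f"true input power could be roughly {lo:.0f}-{hi:.0f} W")
```

Even a modest tolerance moves the implied efficiency several points in either direction, which is the poster's point about not trusting the 74.4% figure too much.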
In defense of heise.de
I think the other poster who mentioned the heise.de article did not provide all the information that is included in the original article.
(in German: http://www.heise.de/ct/meldung/AMD-A...i-1734298.html )
- All tests were done using Cinebench
- They wrote that the CPU used up to 168 Watts, while talking -indeed- about TDP
(but also said that the voltage (so-called buck) converters draw some power).
- They said that "125 Watt TDP ist pures Wunschdenken", meaning that 125 Watt
TDP is plain wishful thinking on AMD's side.
So, at least for me, heise says: "using the 12V CPU supply connector as the sample point on our mainboard, this particular AMD CPU used well over the 125 W TDP which AMD claims it has." They did not mention, however, what their test rig exactly is. I would very much like to know what the integration time for the power measurements was.
(And by the way: if the above is correct, I agree with the OP that AMD's CPUs suck, power-wise. :-D)
Mind you that TDP is, per (Intel's) definition, a MAXIMUM figure, as otherwise you wouldn't be able to properly design and dimension a cooling solution for a given processor. Note: power drawn might be higher than TDP for a (very) short time, as long as the average power stays <= TDP.
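The point about short excursions can be sketched numerically. All values below are hypothetical, just to show peak vs. average against a 125 W TDP:

```python
# Toy illustration: instantaneous power may exceed TDP briefly,
# as long as the average over the thermal window stays at or below it.
TDP = 125.0

# Hypothetical 1 kHz power trace: a 10 ms burst at 150 W,
# followed by 90 ms at 120 W.
trace = [150.0] * 10 + [120.0] * 90

peak = max(trace)
avg = sum(trace) / len(trace)

print(f"peak {peak:.0f} W, average {avg:.0f} W, TDP {TDP:.0f} W")
# The 150 W spike is fine for the cooler; the 123 W average is what
# the heatsink actually has to be dimensioned for.
```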
Greetings from Germany, and please forgive any bad English you encounter.
Last edited by multics; 10-23-2012 at 11:02 PM.
Again, you are probably saving power going with the AMD chip because of the lower idle power usage. Very few people run their machines at 100% and then shut them off immediately when finished. Most people are running at idle 99% of the time.
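A quick worked example of why idle draw can dominate total energy use. All wattages and the 99%/1% duty cycle are hypothetical, just following the poster's assumption:

```python
# A desktop that idles 99% of the time and is loaded 1% of the time.
idle_frac, load_frac = 0.99, 0.01

# Hypothetical system draws: chip A idles lower, chip B loads lower.
a_idle, a_load = 60.0, 200.0
b_idle, b_load = 75.0, 150.0

a_avg = a_idle * idle_frac + a_load * load_frac   # long-run average, W
b_avg = b_idle * idle_frac + b_load * load_frac

print(f"A averages {a_avg:.1f} W, B averages {b_avg:.2f} W")
# Despite its much higher load draw, A wins on average power,
# because the idle figure is weighted 99 to 1.
```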
Originally Posted by necro-lover
The reason a high TDP is bad news, is because it indicates that AMD is clocking the chip up as fast as they possibly can, meaning there may not be a lot of extra headroom easily available under this design. But as far as this chip itself, that shouldn't be a problem.
Originally Posted by cynyr
- I asked somebody from heise, in the heise forum, whether they are sure that
only the CPU is fed from the 12V CPU power connector. He says yes. He even
said that sometimes small parts of the CPU (e.g. DRAM drivers) get power from the /other/ power
supply connector, which means the real CPU power might even be higher than measured.
- The VRMs (in this case buck converters) certainly do much(!) better than 74.4%.
- "Heise.de does not state the actual voltage on the 12V line" => they use a …
for measuring. Which means they do a proper "multiply volts by amps at any given time, giving watts".
And here is how they do it:
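The description referenced above isn't reproduced in this thread, but as a rough sketch of what "multiply volts by amps at any given time" amounts to (the sample values below are made up for illustration, not heise's data):

```python
# Sketch of true-power measurement: sample voltage and current at the
# same instants, multiply per sample, then average the products.
volts = [12.1, 11.9, 12.0, 11.8, 12.2]   # V, sampled on the 12 V rail
amps  = [13.5, 14.2, 14.0, 14.4, 13.9]   # A, sampled simultaneously

# Real power = mean of instantaneous v*i. Unlike multiplying an average
# voltage by an average current, this is correct even with ripple.
watts = sum(v * i for v, i in zip(volts, amps)) / len(volts)
print(f"average power: {watts:.1f} W")
```

This is the key difference from a cheap meter: averaging v·i per sample captures ripple correctly, whereas averaging volts and amps separately before multiplying does not.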
Last edited by multics; 10-24-2012 at 05:14 AM.
These are measurements from Anandtech:
Intel gets the same job done in about the same time, but the whole system consumes about half the power. Idle power consumption is also lower.
A more modern way to do the testing
Since, with specific workloads, there's a huge difference between the two in some tests (as shown in an older article), I'd like to see all these tests, on both Intel and AMD CPUs, also done with the code compiled using -Ofast instead of -O3. I wouldn't mind seeing -O2 either: for workloads that don't involve ray tracing, computational fluid dynamics, or other applications with very large data sets whose memory access patterns cause a lot of cache misses, it seems to be faster than -O3. Even the PostgreSQL test in the mentioned article shows -O2 beating -O3, which would be significant for servers, as would some GraphicsMagick operations that use adjacent memory locations and thus incur fewer cache misses.
I was wondering why Anand's results were not brought up yet. Michael's tests are heavily multithreaded, but Anand's seem to be more balanced/representative, imho. Plus, the power consumption graphs are pretty clear-cut.
I develop CAE post-processing software for a living. There are days when I spend a LOT of time waiting for code to recompile after catching up my local source tree with our source code repository. At work I have an i7-990X CPU, which is insanely expensive. However, it's SO worth it to run "make -j10" to do parallel builds and still be able to get other background tasks done while it grinds away.
We're also working very hard to leverage those multiple cores in our products. It's not easy!
Anyway, I'd love to be able to do something similar when I work from home. There's NO WAY I can afford a high-end Intel CPU on my personal budget. I'd be VERY interested to see how the 8250 or 8350 performs running parallel builds. I have a Phenom II 1090 now, and am wondering if I'd get a significant performance boost from 8 cores. I don't overclock much, because I don't want to risk an unstable overclock causing erratic behavior in our code.
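As a rough way to estimate the gain from more cores (not a benchmark), Amdahl's law gives a feel for it. The 90% parallel fraction below is an assumption; real builds vary, since link steps and configure checks are largely serial:

```python
# Back-of-envelope estimate for "make -jN" scaling via Amdahl's law.
def speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup over one core when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

p = 0.90   # assumed parallelizable share of a build (hypothetical)
for cores in (6, 8):
    print(f"{cores} cores: {speedup(p, cores):.2f}x over one core")
```

Under that assumption, going from 6 to 8 cores is a modest step, not a leap: the serial 10% increasingly limits the build, regardless of core count.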
You're exaggerating. You can have the latest from Intel (i7-3770) for less than $300. Of course, if you're happy with an 8350, that will be cheaper.
Originally Posted by jjmcwill2003
So how would an i7-3770 compare to the AMD 8350 on something like "make -j8"?
I was referring to what I paid for the i7-990X when I had my work PC built. Is the 3770 considered a high end Intel CPU? I thought only the 3930K and 3960X fit that category?