Incidentally, Intel claimed that the i860 was a 64-bit CPU in the 90s, but I think that only applied to certain components while most remained 32-bit... and it really sucked as a desktop Unix CPU.
I also suspect that the push to make Itanium 'the next big thing' was what made Intel try to avoid going to 64-bit on x86 until AMD left them with no choice; a 64-bit P4 would probably have beaten Itanium just as AMD's 64-bit CPUs often did... if not on raw performance, then at least on price/performance.
(I'm planning on trying to build something like that with some PS3 spare parts some day xD)
But really, after a decade of development and billions of dollars in investment from HP, Intel, and their partners, Intel failed completely with Itanium. It doesn't offer any real improvement over AMD64.
Last edited by jntesteves; 09-25-2009 at 03:59 PM.
Does anyone own this i5 Lynnfield? Is it a safe buy for Linux?
In fact, now that you have a Lynnfield-based board, how about exercising your computer skills and mocking up a Hackintosh? The only reason I want to see such a platform is to study the performance of Apple's new kernel and its Grand Central Dispatch feature. I'm very curious about this advance, and it would be interesting to see performance figures versus, let's say, a two-core platform. Right at the moment, my position is that the Mac community is underplaying the importance of lots of hardware threads, and evidence supporting the value of the new multi-core architectures is in order.
Of course, even Apple is playing catch-up here with software releases that truly leverage GCD, but it is an interesting tech that needs the focus of the user community, especially now that Apple has released the source to libdispatch. If the tech is as good as we hope, I wouldn't be surprised to see the Linux community adopt GCD wholesale.
So while this reporting from Phoronix is very interesting, it doesn't mean a lot if it doesn't reflect what will happen under average or reasonable conditions, especially when it comes to multithreading and Turbo Boosted numbers. What we need here is a little honesty, like what happens to each platform in an 80-degree room with a stock cooler on the CPU. It is something the community needs to be aware of, because if you only benefit from Turbo Boost or multithreading part of the time, then you might as well run an AMD chip. Actually, I have to think that Turbo Boost was a sneaky way for Intel to deliver good benchmarks on marginal processors.
Extend these thoughts a bit to the newly introduced Clarksfield processors. A laptop is an even more restricted thermal environment. Are we going to see benchmarks where Clarksfield gets outstanding numbers on things like video encoding of short clips, and then have those numbers fall apart when somebody tries to encode a real movie? That is what happens to Clarksfield when it is put in a position where it has to perform for more than a few minutes at a time. Like it or not, we need to see things in a different light, and it becomes important that we factor other elements, like video clip length, into the benchmark numbers. That is, video benchmarks should show times to encode clips of lengths like 10, 30, and 60 minutes. This would give us an idea of what any thermal throttling looks like, which seems highly likely in a notebook.
That covers video encoding, but this needs to be extended to any sort of operation a normal user might expect to take a lot of CPU time and thus heat up the processor, possibly triggering throttling. This means 3D rendering and many of the other tests in the benchmark suite. The only point here is to discover just how sensitive these processors are to throttling in both stock and tricked-out configurations.
Well, one of the first Anandtech Core i7 articles tested a Core i7 with stock HSF with a Radeon HD 4870 in a closed case. Turbo never engaged.
Intel has made honest benchmarking that much more complicated now. Because we all know that we run our processors with premium air cooling on an open test bench in an air conditioned room. :P
One of the big promises of this generation of processors was, or is, that they can run older single-threaded code really well and at the same time offer impressive results for the more heavily threaded code of the future. It is beginning to look like this is not the case for a stock processor on an average PC board and heatsink combo.
I'm just hoping the benchmarking community clears this up. It certainly looks like skipping this series of i7 & i5 processors might be a good idea, or at the very least paying real close attention to how a manufacturer cools the processor. In a notebook, I'm really wondering if the processor would ever break out of the base speed at all, or whether it might even throttle back from there.
Frankly, it is a good thing the economy is so bad; it just means I won't rush out and buy something that is half-assed.
One thing makes honest benching really difficult: neither /proc/cpuinfo nor cpufreq-info shows multis above 21x, even if benchmark results clearly indicate that the CPU ran at 3-3.2GHz. Here I can disable temperature-based downclocking in the BIOS, but the TDP limit is fixed, so temps will still be a limit in not-so-suitable environments. It would be nice to monitor the CPU's current power consumption via lm_sensors and to get the right multis displayed in /proc/cpuinfo.

It isn't clear what a "not so suitable environment" actually is. I'm guessing summer will see a lot less (if any) Turbo action in a lot of places.
I do like the idea of Turbo; it makes logical sense. Though Intel's current implementation and software support for it still seem a bit immature (like you said, incomplete monitoring support). Supposedly AMD's next arch will have something akin to Turbo; hopefully by then both companies' implementations will support complete monitoring. Then we could see the Phoronix Test Suite monitor turbo modes and provide results that take them into account, relative to temperature, etc.