Musings on architecture
I'd really love to see some more competition to the traditional PC computing platform.
Right now Intel seems to be driving the market, after they learned from their very costly mistakes with the "NetBurst" architecture.
I think they *get it* about the need to go with massively general multicore cpus.
But I think they're vulnerable.
The Atom is an interesting processor design, going back to low power and simplicity. But based on some articles I've read, x86 can't efficiently be replicated across a die because of the vast real-estate overhead of the x86 decoders. But Intel has doggedly decided that they will do x86 multicore.
So the opportunity is: use an architecture with a very good & easily executed instruction set that allows for vastly wide replication on silicon.
I guess IBM could do this, but they really don't seem interested in going after the general consumer market. They're happy with providing cores for gaming consoles or charging an arm & a leg for high-end stuff.
Could ARM actually try to pull this off? It seems they have plans to release a "quad core", but that's not until 2009/10, and, well, quad cores aren't so interesting.
Anyways, just musing. I really want a good computer that's cheap and powerful, runs at low power without burning the house down, and allows for new, exciting paradigms that aren't limited by old legacy junk.
But you need to maintain backwards compatibility with older software, which is easy to do cheaply: stuff an x86 unit into your chip. Now, what were we trying to do again? Plain x86 could be massively parallelised on silicon in a flash: it's small, so it would take up very little silicon space. It's only if you want the entire assembly of other features a modern CPU carries that you find yourself more limited by silicon space.
On the subject of Intel having it right about massively multi-core CPUs: I suspect there is a point of diminishing returns beyond which you will not see much benefit; other factors will limit your performance. There are normally overhead costs in running any algorithm in parallel.
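A quick back-of-envelope sketch of that diminishing-returns point (the constants here are made up purely for illustration): Amdahl's law, extended with a hypothetical per-core coordination overhead term, shows the speedup curve flattening and eventually reversing as cores are added.

```python
# Amdahl's law with a hypothetical per-core coordination overhead.
# p = fraction of the work that parallelises; n = core count;
# overhead = made-up cost added per extra core (synchronisation etc.).
def speedup(p, n, overhead=0.0):
    serial = 1.0 - p
    return 1.0 / (serial + p / n + overhead * n)

# With 95% parallel work and a small per-core overhead, adding cores
# stops helping well before core counts get large.
for n in (2, 4, 8, 16, 64):
    print(n, round(speedup(0.95, n, overhead=0.001), 2))
```

With zero overhead the curve only flattens; with any nonzero per-core cost it eventually turns back down, which is the "other factors will limit your performance" point.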
AMD are still competition. Sure, their compute cores don't compete with Intel's top cores, but in the same price bracket they are a bit more competitive. I assume you are also aware that AMD gets a little bit of money for every 64-bit x86 chip Intel sells...
As much as I agree that a cleaner instruction set and hardware platform would be nice, I've come to believe that the main thing really tying us down with "old legacy junk" is software. As much of an improvement as Windows 2K/XP and Linux are over the bad old days of DOS/9x, we're still essentially using OS architectures from the 70s meant for running mainframes (Linux and Mac OS X obviously deriving from UNIX, Windows less obviously from VMS), with poorly-integrated support for anything more complicated than disk/tape drives and character terminals. (ioctl, sockets, X...). And this is just kernel/API stuff; applications collectively get away with far more crap.
It's not that no one has tried to do better; Plan 9 and Inferno show that a cleanly network-aware OS (with most applications truly network-agnostic) is entirely feasible, and L4 (among others) shows that microkernels don't have to be slow and can really enable interesting OS designs.
I guess it comes down to the vicious circle of "I must develop for this platform because it's popular" / "I must use this platform because it has the apps I want". Virtually no one wants to be the first, or the thousandth for that matter, to break the chain and move to something cleaner. The existing stuff is "good enough", and we just keep incrementally building on that. If something's ugly, just add another abstraction layer to hide it; no need to go around actually changing things...
Well, the *junk software* is the stuff tied to x86. Why should it be tied to x86?
Probably the best example of there actually being a market beyond what's available today is these little EEE PC type devices. Consumers don't seem to mind that their system doesn't run a Microsoft OS (Microsoft really cares, though).
As for the massive multi-core, people like to speculate that they can't get things to work properly. What it really comes down to is that some systems need to be available cheaply for developers to take advantage of. For the software I'm working on now, it made a huge difference to be able to get an 8-core Intel setup for just over $2k in early 2007. A similar system is now less than half what I paid then. But step up to 16 cores and the price of the CPUs goes up 4-5x. From what I've seen, there is tons of room for experimentation and for taking advantage of the hardware by thinking outside the box in userspace applications. It's just foolish to dismiss multicore stuff as useless when almost no one has access to these systems.
Last edited by bnolsen; 07-13-2008 at 09:05 AM.
Originally Posted by bnolsen
No, it's not. Compared to other, real SoCs, it is way too big (footprint of CPU + NB), consumes far too much power, and lacks lots of peripherals. The only reason it is popular is that it's a low-power x86 chip which runs Windows... Seriously, the embedded world has far more interesting and powerful products; all they lack is the marketing prowess of Intel.
Originally Posted by mlau
Of course, I made that point up above. Atom is not well suited at all to being turned into a massively multicore fabric. I really believe that a Cortex may be a better candidate for that. One of the research papers I was reading some time back envisions future processors as much simpler cores (reducing design complexity and power draw) tied together by very fast switching interconnects.
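That simple-cores-plus-fast-interconnect idea can be caricatured in userspace (purely a toy sketch: threads stand in for the cores, and queues stand in for the interconnect):

```python
import queue
import threading

# Toy model: each "core" is a thread doing one simple operation;
# the "interconnect" is a pair of queues carrying messages between them.
inbox, outbox = queue.Queue(), queue.Queue()

def core():
    while True:
        msg = inbox.get()
        if msg is None:      # shutdown signal
            break
        outbox.put(msg * msg)  # the core's single simple job

cores = [threading.Thread(target=core) for _ in range(4)]
for c in cores:
    c.start()

for n in range(8):           # scatter work over the interconnect
    inbox.put(n)
results = sorted(outbox.get() for _ in range(8))
print(results)               # squares of 0..7

for _ in cores:              # tell every core to shut down
    inbox.put(None)
for c in cores:
    c.join()
```

The point of the model: the cores themselves stay trivial, and all the interesting behavior lives in how work is routed between them, which is roughly what the paper's architecture pushes into the interconnect.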