Tim Sweeney outlines "The End of the GPU Roadmap"
Still sifting through the bounty of amazing things that came out of SIGGRAPH '09, but this one caught my eye as potentially relevant to the Linux-using population, especially here:
It's rather thought provoking, and I'm curious what other people think.
Very interesting read. I think he could be right: growing demands on software complexity (computer games are a great example) cause hardware to get more complex as well. Because CPUs rely more and more on multithreading and GPUs are used for GPGPU, software development becomes harder. This could reach (or perhaps already has reached) a point at which it isn't economical to achieve higher quality, even though the computing power is available.
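The point about multithreading making development harder can be made concrete with a minimal, hypothetical Python sketch (not from the post): even a trivial shared counter needs explicit synchronization, or concurrent increments can silently lose updates.

```python
import threading

def increment_many(counter, lock, n):
    """Add 1 to counter['value'] n times, taking the lock for each update."""
    for _ in range(n):
        with lock:
            # Without the lock, this read-modify-write can interleave with
            # other threads and drop increments.
            counter["value"] += 1

def run(num_threads=4, per_thread=100_000):
    counter = {"value": 0}
    lock = threading.Lock()
    threads = [
        threading.Thread(target=increment_many, args=(counter, lock, per_thread))
        for _ in range(num_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # With the lock, the total is exactly num_threads * per_thread.
    return counter["value"]
```

Dropping the `with lock:` line turns this into a classic lost-update race — the kind of subtle bug that makes heavily threaded code more expensive to get right.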
A combined CPU+GPU approach like Larrabee and AMD's Fusion (I don't expect Fusion to be comparable to Larrabee, though) could be the solution on the hardware side. On the software side, I think OpenCL could provide a (partial) answer. If OpenCL becomes a widely used standard, that would be great for the Linux community: because it can run on any platform, creating portable software becomes more attractive and less complex.
It could be fun to look back at these slides in 5-10 years' time and see whether Tim Sweeney was right.
That was sort of my thought too. In particular, I think it lends much more credence to the claims Intel has made about Larrabee and, to a lesser extent, dispels some of the random FUD people have been spreading about CBE (Phrack had a really good writeup on that subject recently).
It might be a bit silly and idealistic, but with Intel's open source ventures of the last couple of years, and the way they're starting to leverage them, it seems like Larrabee is almost made to get along swimmingly with Gallium. I'm hopeful that it gains some amount of mindshare, even though I rather dislike x86 (ARM is much neater; down with the CORE scum!).
In terms of programming, it's interesting to note that while it's not all there yet, a lot of what he was talking about reminded me strongly of Walter Bright's D programming language. I'd like it if that took off real hard.
Another point that GPU designers would still need to confirm is my suspicion that today's GPUs are actually fail-fast, and that's what makes mistakes so costly: they can't handle unexpected situations, they simply lock up. I'm not sure I'd want to use such a processor as a general-purpose one.
It's really the fixed-function portions that have to be set up "just right" - the general-purpose portions of the chip seem to be pretty programmer-friendly.
Last edited by bridgman; 08-16-2009 at 12:55 PM.