Thread: Tim Sweeney outlines "The End of the GPU Roadmap"

  1. #1
    Join Date
    Jan 2009
    Location
    Columbus, OH, USA
    Posts
    323

    Default Tim Sweeney outlines "The End of the GPU Roadmap"

    Still sifting through the bounty of amazing things that came out of SIGGRAPH '09, but this one caught my eye as potentially relevant to the Linux-using population, especially here:
    http://graphics.cs.williams.edu/arch...TimHPG2009.pdf

    It's rather thought-provoking, and I'm curious what other people think.

  2. #2
    Join Date
    Apr 2008
    Posts
    127

    Default

    Very interesting read. I think he could be right: growing software complexity (computer games are a great example) causes hardware to get more complex as well. Because CPUs rely more and more on multithreading and GPUs are used for GPGPU work, software development becomes harder. This could reach (or perhaps has already reached) a point at which it isn't economical to achieve higher quality even though the computing power is available.

    A combined CPU+GPU approach like Larrabee or AMD's Fusion (I don't expect Fusion to be comparable to Larrabee, though) could be the solution on the hardware side. On the software side, I think OpenCL could provide a (partial) answer. If OpenCL becomes a widely used standard, that would be great for the Linux community. Because it runs on any platform, it becomes more attractive and less complex to create portable software.
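
    For illustration, here's roughly what that portability looks like in practice: the host compiles the kernel source at run time (clCreateProgramWithSource / clBuildProgram) using whichever vendor's OpenCL driver is installed, so a kernel like the one below (a minimal, hypothetical vector-add sketch, not from the slides) should build unchanged on NVIDIA, AMD, or a CPU implementation:

        // OpenCL C kernel: element-wise vector addition.
        // The host passes this source to the driver at run time,
        // so the same file runs on any device with an OpenCL implementation.
        __kernel void vec_add(__global const float *a,
                              __global const float *b,
                              __global float *out)
        {
            size_t i = get_global_id(0);  /* one work-item per element */
            out[i] = a[i] + b[i];
        }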

    It could be fun to look back at these slides in 5-10 years' time and see whether Tim Sweeney was right.

  3. #3
    Join Date
    Jan 2009
    Location
    Columbus, OH, USA
    Posts
    323

    Default

    That was sort of my thought too. In particular, I think it gives much more credence to the claims Intel has made about Larrabee and, to a lesser extent, dispels some of the random FUD people have been spreading about CBE (Phrack had a really good writeup on that subject recently).

    It might be a bit silly and idealistic, but with Intel's open-source ventures of the last couple of years, and the way they're starting to leverage them, it seems like Larrabee is almost made to get along swimmingly with Gallium. I'm hopeful that it gains some amount of mindshare, even though I rather dislike x86 (ARM is much neater; down with the CORE scum!).

    In terms of programming, it's interesting to note that, while it's not all there yet, a lot of what he was talking about reminded me strongly of Walter Bright's D programming language. I'd really like it if that took off.

  4. #4
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,806

    Default

    Another point whose validity should still be checked with GPU designers is the suspicion that GPUs nowadays are actually fail-fast, and that's what causes you to screw up so badly: they can't handle weird situations, they simply lock up. I'm not sure I'd want to use such a processor as a general-purpose one.

  5. #5
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,574

    Default

    It's really the fixed-function portions that have to be set up "just right" - the general purpose portions of the chip seem to be pretty programmer-friendly.
    Last edited by bridgman; 08-16-2009 at 01:55 PM.

  6. #6
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,806

    Default

    Quote Originally Posted by bridgman View Post
    It's really the fixed-function portions that have to be set up "just right" - the general purpose portions of the chip seem to be pretty programmer-friendly.
    Thank God. I was already looking forward to a session with a lot of locking up when learning OpenCL with my ATi card.
