
Thread: Why More Companies Don't Contribute To X.Org

  1. #91
    Join Date
    Dec 2009
    Posts
    110

    Default

    Quote Originally Posted by deanjo View Post
    Would you want to work with a bunch of juvenile dipshits like that?
    Considering all the things Ballmer and other MS spokespersons burp out from time to time, and all the people and companies still willing to work with Microsoft, I would say yes. :-P

    I do think it was a funny joke; I just don't think it was executed well. Had it been done in a way that didn't make people question the integrity of the git repos or abuse root permissions (say, posted as a proposed patch or pull request on a mailing list), I would have had a much easier time with it.

  2. #92
    Join Date
    Oct 2008
    Posts
    3,137

    Default

    I agree that APUs will take over the entire low-end market, and probably the middle of the market as well, at least after enough years go by to make it possible. Where I'm not so sure is the high end. It's entirely possible the high-end market could die out, especially if Windows gaming is relegated entirely to console ports, but I think it's large enough to survive.

    Are CPU manufacturers really going to want to stick 50 billion extra transistors on their CPUs that are much more complicated (and likely to fail, therefore wasting the entire chip)? Or will they stick to a simpler, cheaper chip good enough for 95% of people that will give them higher yields and tell consumers to buy $600 graphics cards for those who really need the extra power? I think this is an open question, and probably something that not even AMD or Intel has figured out yet, or will even attempt to figure out for many years to come.
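    The yield argument above can be made concrete with a back-of-envelope sketch using the classic Poisson defect model (yield falls exponentially with die area). The defect density and die areas below are purely illustrative assumptions, not real process data:

```python
import math

# Poisson die-yield model: yield = exp(-area * defect_density).
# All numbers here are made-up illustrative assumptions.
DEFECTS_PER_CM2 = 0.5          # assumed defect density
CPU_AREA_CM2 = 2.0             # assumed plain-CPU die area
APU_AREA_CM2 = 3.5             # same CPU plus a large on-die GPU

def poisson_yield(area_cm2, d0=DEFECTS_PER_CM2):
    """Fraction of dies that come out with zero defects."""
    return math.exp(-area_cm2 * d0)

cpu_yield = poisson_yield(CPU_AREA_CM2)
apu_yield = poisson_yield(APU_AREA_CM2)
print(f"plain CPU yield: {cpu_yield:.1%}, big APU yield: {apu_yield:.1%}")
```

    Under these assumed numbers the bigger die loses roughly half its yield, which is the trade-off the post is pointing at.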

    Also, your assumption that you can just plug in extra APUs for more power doesn't seem like a good solution to me - look at crossfire and SLI - even with 2 GPUs it doesn't always scale very well. Stick in 4 GPUs and watch scaling go way down. Just being APUs won't fix the scaling problem, at least not all the way.
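    The scaling complaint can be framed as simple Amdahl's-law arithmetic. The serial fraction below is an assumed stand-in for per-frame work that can't be split across GPUs (synchronization, transfers, driver overhead):

```python
# Amdahl's-law sketch of why multi-GPU scaling tapers off.
# serial_fraction is an assumed figure, not a measurement.
def speedup(n_gpus, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_gpus)

for n in (1, 2, 4):
    print(f"{n} GPUs -> {speedup(n, serial_fraction=0.1):.2f}x")
```

    Even a modest 10% unsplittable share caps 4 GPUs at about 3.1x, matching the "watch scaling go way down" observation.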

  3. #93
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by smitty3268 View Post
    Are CPU manufacturers really going to want to stick 50 billion extra transistors on their CPUs that are much more complicated (and likely to fail, therefore wasting the entire chip)?
    They've had no problem with exponentially increasing transistor counts before now, so why would this be any different?

    Also keep in mind that they already deal with bad transistors on chips just fine. Many of those dual-core CPUs you can buy today are really just quad-cores with two cores turned off because they failed to pass verification. I imagine the same is done with graphics chips, where the difference between the $100 parts and the $200 parts is often just that the exact same chips are running at different frequencies with some block of their SPUs disabled due to verification failures after fab.
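    The salvage economics described above amount to simple binomial arithmetic. Assuming (purely for illustration) that each of four cores independently survives fab with probability p, only perfect dies sell as quad-cores, while dies with two or three good cores can still ship as dual-cores instead of being scrapped:

```python
import math

p = 0.85  # assumed per-core survival probability (illustrative)

def binom(n, k, p):
    """Probability that exactly k of n cores survive."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

quad = binom(4, 4, p)                        # all four cores good
salvageable = binom(4, 3, p) + binom(4, 2, p)  # sellable as dual-core
print(f"sell as quad-core:    {quad:.1%}")
print(f"salvage as dual-core: {salvageable:.1%}")
```

    With these made-up numbers, binning nearly doubles the share of sellable dies, which is why disabling failed blocks is standard practice.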

    Or will they stick to a simpler, cheaper chip good enough for 95% of people that will give them higher yields and tell consumers to buy $600 graphics cards for those who really need the extra power?
    Nobody (with a brain) buys $600 graphics cards. Or even $300 graphics cards.

    Also, your assumption that you can just plug in extra APUs for more power doesn't seem like a good solution to me - look at crossfire and SLI - even with 2 GPUs it doesn't always scale very well. Stick in 4 GPUs and watch scaling go way down. Just being APUs won't fix the scaling problem, at least not all the way.
    There are other issues to solve, certainly. Memory bandwidth is one of the big ones, and one that isn't getting solved particularly quickly. Throwing more processing into a package that is starved for data or bottlenecked in writing out data is not going to help, for sure.
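    The bandwidth-starvation point can be sketched with a simple roofline-style bound: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. The figures below are assumed round numbers, not specs for any real APU:

```python
# Roofline-style back-of-envelope. Both constants are assumptions.
PEAK_GFLOPS = 1000.0    # assumed shader throughput
BANDWIDTH_GBS = 25.0    # assumed shared DDR3-class memory bandwidth

def attainable_gflops(flops_per_byte):
    """Throughput ceiling for a kernel with the given arithmetic intensity."""
    return min(PEAK_GFLOPS, BANDWIDTH_GBS * flops_per_byte)

# A streaming, bandwidth-bound kernel barely touches the shaders:
print(attainable_gflops(1.0))     # memory-limited
print(attainable_gflops(100.0))   # compute-limited
```

    At one FLOP per byte the assumed chip delivers 2.5% of its peak; adding more shader hardware to that package buys nothing, which is the point above.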

    That said, some of the other-other issues are solved by moving those SPUs into the CPU. It's going to be a while before the heavy-duty GPUs are integrated, though, due both to the constrained memory bandwidth when sharing with the CPU and to the heat issue. (The heat issue is actually solvable, as I understand it; we just haven't started manufacturing chips using those solutions. 3D circuit layouts would let chips be more compact, and hence lose less energy to impedance, while simultaneously increasing the cooling surface via internal micro-ducts and so improving cooling efficiency. I'm not an EE/CE guy, though, so maybe I have that wrong.)

