
Thread: Intel Doing Discrete Graphics Cards!


    We heard there could be as many as 16 graphics cores packed into a single die.


    That's a lot of cores.

    How complex is a current GMA X3000 core? If you shrink the process down to CPU scale, how many could you pack into a current P4-sized, or maybe Core 2 Duo-sized, piece of silicon?

    Using the X3000 core as a basis would get you 128 programmable pipelines in a 16-way core. So that's probably wrong... (me assuming that they are going to use the X3000 design fairly directly.)
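    That 128 figure is just the per-core pipeline count multiplied out. A quick back-of-the-envelope check in Python (assuming 8 programmable execution units per X3000-class core, the commonly cited count):

    ```python
    # Rough pipeline count for a hypothetical 16-way multicore GPU.
    # Assumed figure: 8 programmable execution units per X3000-class core.
    pipelines_per_core = 8
    cores_per_die = 16
    total_pipelines = pipelines_per_core * cores_per_die
    print(total_pipelines)  # 128
    ```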


    32nm
    I don't think so. 45nm is more likely, I figure.

    The only thing I know about this sort of thing is that when you shrink the process of making a CPU down a step you basically have to rebuild the entire assembly line. The whole plant. Also, at the same time you usually make the silicon wafer bigger to get more dies per wafer.

    So since Intel would have all this spare assembly line lying around, it would make sense to turn it over to massive multicore GPU designs. You could also be cheaper about it and cut more corners than you can with CPUs. If you have a flaw in the chip or in the silicon wafer, you just deactivate the cores that have the flaw... so a 'pure' die would be the high-end part with all 16 GPU cores, with a quarter of the cores goobered up you'd have a mid-range video card with 12 cores, and with half or more of the die gone you'd have a 'low-end' card with 6-8 cores.
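    The binning idea above can be sketched in a few lines of Python. The tier cutoffs here are just the numbers from my guess above, not any actual Intel scheme:

    ```python
    # Hypothetical SKU binning for a 16-core GPU die: sell the part at a
    # tier determined by how many cores survived fabrication defect-free.
    # Tier boundaries (16 / 12 / 6-8) are assumed, not an actual product plan.

    def bin_die(working_cores: int) -> str:
        """Map the number of defect-free cores on a 16-core die to a product tier."""
        if working_cores == 16:
            return "high-end (all 16 cores)"
        elif working_cores >= 12:
            return "mid-range (12 cores enabled)"
        elif working_cores >= 6:
            return "low-end (6-8 cores enabled)"
        else:
            return "scrap"

    print(bin_die(16))  # high-end (all 16 cores)
    print(bin_die(13))  # mid-range (12 cores enabled)
    print(bin_die(7))   # low-end (6-8 cores enabled)
    ```

    The nice part of a scheme like this is that defects stop being a total loss: a flawed die still ships as a cheaper SKU.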

    So that way the video card fabrication process would always follow one generation behind the latest process used in the CPUs. If I'm right, it will probably have roughly the size, power requirements, and expense of the current Core 2 Duo CPUs.

    These things range from 150 to about 700 dollars right now, just for the CPU. Of course the top-of-the-line CPU is incredibly overpriced. So I figure $350-500 for the entire card to start off with?

    It's quite a competitive advantage that Intel would have over Nvidia. Nvidia will have to build all-new plants to move up to the next generation of fabrication... while Intel can use the old equipment, already bought and paid for by CPU sales, and still be just as advanced or more so.




    BTW on the Linux-intel front..


    Keith Packard gave a nice presentation at the Debian miniconf. I believe the following is the right one; I'm not sure, as it's been a while since I looked at it and I can't really check it right now.
    http://mirror.linux.org.au/pub/linux...450_Debian.ogg


    But if that's the right video, he talks a lot about 7.2 and the future direction of X.org 7.3.

    He also gives a nice overview of his work with Intel hardware (mostly how it relates to X.org 7.3 and such), and mentions Intel's intentions for Linux driver support.

    They now do Linux driver development in-house with Keith's (and other hackers') assistance. Traditionally Linux driver development has lagged behind Windows, but it is now Intel's goal to ship working (and completely open source) Linux drivers the same day the corresponding hardware ships.

    This means, hopefully, that as soon as these things start showing up in stores you can just buy them and they will run on Linux with the CD-ROM-supplied drivers.
    Last edited by drag; 02-12-2007 at 07:35 PM.
