
Thread: Intel Doing Discrete Graphics Cards!


  1. #1

    Intel Doing Discrete Graphics Cards!

    We are focused on developing discrete graphics products based on a many-core architecture targeting high-end client platforms.
    http://www.intel.com/jobs/careers/visualcomputing/

    This should be interesting: discrete Intel graphics cards with open-source Linux drivers!

  2. #2
    Join Date: Jan 2007
    Location: Trinidad and Tobago
    Posts: 1


    This is great news. It'll put pressure on the other discrete graphics makers to open source something. Right now I want to replace my X800... I want some AIGLX magic!

    - rmjb

  3. #3


    Quote Originally Posted by rmjb
    This is great news. It'll put pressure on the other discrete graphics makers to open source something. Right now I want to replace my X800... I want some AIGLX magic!

    - rmjb
    Why not switch to the open-source Radeon driver, then? You can have AIGLX with that.

  4. #4


    Awesome! Hopefully they'll do for graphics what they've been doing with processors: superior performance at a superior heat/energy ratio.

    With open drivers, I'd buy it.

    Can't wait for the Phoronix reviews!

  5. #5
    Join Date: Jun 2006
    Location: Portugal
    Posts: 521


    Excellent! Currently the Intel drivers are on the cutting edge of features, thanks to Intel employing people like Keith Packard to work on them.

    If you could get an Intel card with those superior features in any computer, and not just on lame HTPC motherboards and laptops, that would rule!

  6. #6
    Join Date: Jun 2006
    Location: Denver
    Posts: 54


    Well, one's definition of "superior features" is bound to vary. I wouldn't expect an R600-burner out of Intel anytime Real Soon. OTOH, Intel generally doesn't aim for a pile if it doesn't think it can hit the top, so Nvidia and AMD have a right to be nervous.

    I'm currently running a PowerColor x700 card with the Open Source driver under FC-6. Works fine.

  7. #7
    Join Date: Jun 2006
    Posts: 3,046


    Quote Originally Posted by pipe13
    Well, one's definition of "superior features" is bound to vary. I wouldn't expect an R600-burner out of Intel anytime Real Soon. OTOH, Intel generally doesn't aim for a pile if it doesn't think it can hit the top, so Nvidia and AMD have a right to be nervous.

    I'm currently running a PowerColor x700 card with the Open Source driver under FC-6. Works fine.
    I don't expect an R600 burner (or a GeForce 8 burner, for that matter...), but I'd expect an R400 (NVidia GeForce 5/6) burner, possibly an R500 (GeForce 7) harrier, out of a dedicated X3000. The most critical thing is that it performs respectably well and that it has open info and open-source drivers. I expect this to happen because UMA does nothing but drag a GPU down to its worst-case performance levels.

  8. #8
    Join Date: Sep 2006
    Posts: 714


    I too am looking forward to this. This should be very welcome.

    As for an R600 burner? I seriously doubt it too. But who cares?

    Intel is doing some things right, however. I know clock speed doesn't matter so much, but even if Intel's designs are not quite as specialized or optimized as Nvidia's or ATI's, hopefully they can make up for it by simply cranking up the MHz.


    Also they mention 'many-core' quite a bit, don't they?



    What would be the effect of dropping something like 3 GMA X3000-style cores, clocked at 800 MHz-1 GHz, onto a discrete card with 256 MB of DDR4 RAM and the ability to grab additional RAM over the PCIe bus? (All on the same die, probably. How much silicon does a GMA core take up versus the previous-generation Pentiums that were made in those now-idle Intel fab plants?)

    You'd end up with something like 24 programmable pipelines... It would be a very flexible card for a wide variety of situations, wouldn't it? And I know that companies sometimes like multi-core designs since power management is effective: you just shut off the cores you don't need, but you can fire them up on demand.

    Does something like this even make any sense?
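
    To make the arithmetic behind that pipeline count explicit, here's a tiny Python sketch. The 8-pipelines-per-core figure and the clock are just my assumptions for illustration, not anything Intel has published:

    # Back-of-the-envelope math for the hypothetical multi-core card above.
    # The per-core pipeline count and clock are assumptions, not Intel specs.
    PIPELINES_PER_X3000_CORE = 8   # assumed programmable pipelines per GMA X3000-style core
    CORES_ON_CARD = 3              # the 3-core discrete card speculated above
    CLOCK_MHZ = 900                # midpoint of the guessed 800 MHz - 1 GHz range

    total_pipelines = PIPELINES_PER_X3000_CORE * CORES_ON_CARD
    print(f"{CORES_ON_CARD} cores x {PIPELINES_PER_X3000_CORE} pipelines each = "
          f"{total_pipelines} programmable pipelines at {CLOCK_MHZ} MHz")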
    Last edited by drag; 01-30-2007 at 07:13 PM.

  9. #9
    Join Date: Jun 2006
    Location: One day I'll reach Alaska, or die trying!
    Posts: 287


    Wow, this sure is a surprise! The dedicated GPU market definitely needs another heavy-weight competitor. Go Intel Go!

  10. #10
    Join Date: Sep 2006
    Posts: 714


    We heard there could be as many as 16 graphics cores packed into a single die.


    That's a lot of cores.

    How complex is a current GMA X3000 core? If you shrink the process down to CPU scale, how many could you pack into a current P4-sized, or maybe Core 2 Duo-sized, piece of silicon?

    Using the X3000 core as a basis would get you 128 programmable pipelines in a 16-way core. So that's probably wrong... (me assuming that they are going to use the X3000 design fairly directly.)


    32nm? I don't think so. 45nm is more likely, I figure.

    The only thing I know about this sort of thing is that when you shrink the process of making a CPU down a step, you basically have to rebuild the entire assembly line. The whole plant. Also, at the same time you usually make the silicon wafer bigger to get higher yields per wafer.

    So since Intel would have all this spare assembly line lying around, it would make sense to turn it over to massive multi-core GPU designs. You could be cheaper about it and cut more corners than you can with CPUs, too. If you have a flaw in the chip or in the silicon wafer, you just deactivate the cores that have the flaw... so a "pure" die would be the high end with all 16 GPUs; with a third of the die goobered up you'd have a mid-range video card with 12 cores; and with half or more of the die gone you'd have a "low end" card with 6-8 cores.
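
    To make that binning idea concrete, here's a toy Python sketch. The tier names, core counts, and thresholds are just my guesses based on the numbers above, not any real Intel product plan:

    # Toy sketch of die harvesting: a die with defective cores isn't thrown
    # away, it gets sold as a lower-end part. All tiers here are invented.
    def bin_die(working_cores: int) -> str:
        """Assign a hypothetical 16-core GPU die to a product tier."""
        if working_cores == 16:
            return "high-end card (all 16 cores enabled)"
        if working_cores >= 12:
            return "mid-range card (12 cores enabled, flawed ones fused off)"
        if working_cores >= 6:
            return "low-end card (6-8 cores enabled)"
        return "scrap (too many defects to sell)"

    for good_cores in (16, 13, 7, 4):
        print(f"{good_cores} working cores -> {bin_die(good_cores)}")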

    That way the video card fabrication process would always follow one generation behind the latest process used for CPUs. So it would probably be about the size, power requirements, and expense of the current Core 2 Duo CPUs, if I am right.

    Those range from 150 to about 700 dollars right now, just for the CPU. Of course, the top-of-the-line CPU is incredibly overpriced. So I figure $350-500 for the entire card to start off with?

    It's quite a competitive advantage that Intel is going to have over Nvidia. Nvidia will have to build all-new plants to move up to the next generation of fabrication, while Intel can use the old stuff, already bought and paid for by CPU sales, and still be just as advanced or more so.




    BTW, on the Linux-Intel front...


    Keith Packard gave a nice presentation at the Debian miniconf. I believe the following is the right one; I am not sure, as it's been a while since I looked at it and I can't really check it right now.
    http://mirror.linux.org.au/pub/linux...450_Debian.ogg


    But if that's the right video, he talks a lot about 7.2 and the future direction of X.org 7.3.

    He also gives a nice overview of his work with Intel hardware (mostly on how it relates to X.org 7.3 and such), and mentions Intel's intentions with Linux driver support.

    They now do Linux driver development in-house with Keith's (and other hackers') assistance. Traditionally, Linux driver development has lagged behind Windows. However, it is now Intel's goal to ship working (and completely open-source) Linux drivers the same day the corresponding hardware ships.

    This means, hopefully, that as soon as these things start showing up in stores you can just buy them and they will run on Linux, with drivers supplied on the CD-ROM.
    Last edited by drag; 02-12-2007 at 07:35 PM.
