
Thread: NVIDIA Prepares 195.xx Linux Driver, Carries Fermi Support

  1. #11 · Join Date: Jan 2008 · Posts: 772

    My guess would be that any Fermi chips shipping in 2009 will be on compute cards for a handful of deep-pocketed HPC customers. NVidia's marketing for Fermi has been extremely GPGPU-heavy, to the point that the only directly graphics-related thing on their Fermi page is a mention of raytracing as a possible application.

    Also, the white paper is hilarious:

    The graphics processing unit (GPU), first invented by NVIDIA in 1999

  2. #12 · Join Date: Dec 2007 · Location: Merida · Posts: 1,099

    They were the first to start calling it that. All we had before were VPUs.

  3. #13 · Join Date: Oct 2007 · Location: Under the bridge · Posts: 2,129

    Quote Originally Posted by Melcar
    They were the first to start calling it that. All we had before were VPUs.
    Where V = Voodoo? Virtual? Video?

    Voodoos, Mystiques, Verites, Rivas, and Rages were all called "graphics accelerators" back in the day. It's possible that the "GPU" name didn't come into existence until 1999, but Nvidia certainly didn't invent graphics processors; that claim is completely laughable.

    I have a feeling that Fermi won't end well for Nvidia... (I hope to be proven wrong.)

  4. #14 · Join Date: Jan 2008 · Posts: 772

    Given the year, I assume what they're referring to is having transformation+lighting+rasterization all happening in a single chip. I'm pretty sure that 3DLabs had already been doing this for several years using multiple chips.

  5. #15 · Join Date: May 2007 · Location: Third Rock from the Sun · Posts: 6,583

    Quote Originally Posted by Ex-Cyber
    My guess would be that any Fermi chips shipping in 2009 will be on compute cards for a handful of deep-pocketed HPC customers. NVidia's marketing for Fermi has been extremely GPGPU-heavy, to the point that the only directly graphics-related thing on their Fermi page is a mention of raytracing as a possible application.

    Also, the white paper is hilarious:

    The fact is that NVIDIA did invent the GPU in 1999. The term had never been used before, and NVIDIA defined it as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second."

  6. #16 · Join Date: Jan 2008 · Posts: 772

    The term had been used before, but it wasn't mainstream (check Google Scholar, for example). Either way, the common use of the term now is not ruled by the self-serving "definition" crafted by Nvidia's marketing department (philosophical question: if you downclock a GPU so that it can't do 10 million polygons per second, does it stop being a GPU?).
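
    (To make that question concrete: a toy sketch below, with made-up numbers, showing that a setup-limited polygon rate scales linearly with clock, so a part sitting exactly at the definitional 10 million polygons/s minimum falls out of "GPU" territory with any downclock.)

        # Toy illustration with made-up numbers: polygon throughput scales
        # linearly with clock, so any downclock from the threshold "de-GPUs" it.
        base_clock_mhz = 120
        base_polys_per_sec = 10_000_000   # exactly at NVIDIA's definitional minimum

        for clock_mhz in (120, 110, 100):
            polys = base_polys_per_sec * clock_mhz / base_clock_mhz
            verdict = "GPU" if polys >= 10_000_000 else "not a GPU?"
            print(f"{clock_mhz} MHz -> {polys / 1e6:.1f}M polys/s ({verdict})")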

    And either way, this doesn't say a lot about Fermi, which doesn't look like it'll be shipping in an actual graphics card for a while unless I've missed some big announcement/leak. Given Jen-Hsun Huang's statement that they're "ramping Fermi for [...] GeForce", I assume we'll see them at some point, but so far I've seen almost no talk about its launch or its features as a GPU, except that it will support DX11. All indications seem to be that first-generation Fermi is going to be a top GPGPU performer but also brutally expensive. I wouldn't be surprised if they end up going exclusively for the super-high-end HPC and gaming markets and ceding the mainstream gaming market to ATI for a little while.

  7. #17 · Join Date: Oct 2007 · Posts: 25

    Quote Originally Posted by Ex-Cyber
    Fermi is going to be a top GPGPU performer but also brutally expensive.
    AMD's Hemlock cards will perform at over 5 TFLOPS. Fermi is projected to do 1.5 given its shader count (512) and architecture, with a similar die size. FLOPS isn't everything, but as a measure of raw compute power, Fermi will be behind.

    With its dedicated caches and simpler shader hierarchy (AMD has 5 shaders to a cluster), it will perform better in the real world, but anything you could do on Fermi should run faster on AMD if you optimise it. And the price you pay for those two things is a much greater die size, since neither feature helps in games but both take up area.
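
    (For reference, those figures fall out of simple peak-FLOPS arithmetic; here's a back-of-the-envelope sketch. The 512-shader count comes from the post above; the Hemlock ALU count and both clock speeds are assumptions for illustration, since final shipping clocks weren't public at the time.)

        # Back-of-the-envelope single-precision peak FLOPS:
        # shaders x clock x 2 (one fused multiply-add per shader per cycle).
        def peak_gflops(shaders, clock_ghz, flops_per_cycle=2):
            return shaders * clock_ghz * flops_per_cycle

        # Fermi: 512 shaders; ~1.5 GHz shader clock assumed, landing near 1.5 TFLOPS.
        fermi = peak_gflops(512, 1.5)            # ~1536 GFLOPS
        # Hemlock (dual Cypress): 2 x 1600 ALUs assumed; ~725 MHz gives ~4.6 TFLOPS.
        hemlock = peak_gflops(2 * 1600, 0.725)   # ~4640 GFLOPS

        print(f"Fermi (assumed clocks):   ~{fermi:.0f} GFLOPS")
        print(f"Hemlock (assumed clocks): ~{hemlock:.0f} GFLOPS")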

  8. #18 · Join Date: Jun 2009 · Posts: 14

    Quote Originally Posted by phoronix
    Phoronix: NVIDIA Prepares 195.xx Linux Driver, Carries Fermi Support

    It was just last week that NVIDIA had finally released a stable 190.xx Linux driver after this driver series had been in beta for months. The 190.xx driver series brought new hardware support, OpenGL 3.2 support, VDPAU improvements, and a fair amount of other changes...

    http://www.phoronix.com/vr.php?view=NzY4MA


    http://thepiratebay.org/torrent/5155383
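
    (Tangentially, for anyone wondering which driver series they're actually running: a quick sketch, assuming the proprietary NVIDIA driver is loaded and exposes its usual /proc interface on Linux.)

        # Print the loaded NVIDIA driver version on Linux.
        # Assumes the proprietary driver's /proc interface is present.
        from pathlib import Path

        version_file = Path("/proc/driver/nvidia/version")
        if version_file.exists():
            print(version_file.read_text().splitlines()[0])
        else:
            print("No /proc/driver/nvidia/version; proprietary driver not loaded?")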
