Yes, Nvidia is headed for bankruptcy.
Originally Posted by mirv
But this article is FUD! There is another article with much more truth to it:
"Nvidia GT300 yields are under 2%"
The Fermi G300 is a monstrously big chip. Even at an 80% yield rate, a chip that size would be very expensive; at a 2% yield rate, the G300 means bankruptcy!
They should make mid-range chips with passive cooling, high yield rates and open-source drivers. Instead, all they make are huge chips with low yield rates that run extremely hot, with no open-source driver supported by Nvidia.
The G300 yield rate will bankrupt Nvidia, be sure of it!
You gotta be kidding me. If anything, I think you're spreading more FUD than the quoted articles are.
Originally Posted by Qaridarium
nVidia does more things than just consumer graphics. Keep in mind that they also have Tegra, the Ion platform and the upcoming GeForce based on the Fermi architecture (read up on MIMD). On the market end, you should really compare the relative size of both companies. nVidia, with its graphics portfolio *alone*, garners a market cap of $10.040 billion, as compared to $6.64 billion from AMD's combined portfolio of CPU, GPU and chipset technologies. (Just to put things into perspective, Intel's market cap is a whopping $112.26 billion)
Now, the issue today is not about who has the higher theoretical GFLOPS (it says "theoretical", you see). The actual throughput is all that matters. As you can see, the default configuration of the ATI Radeon HD5870 has 2720 GFLOPS of (theoretical) single-precision floating point performance. Compare that to a similarly-placed nVidia product, the nVidia GeForce GTX295, which has 1788 GFLOPS. On that basis, I would expect the Radeon HD5870 to have 52% higher performance in every benchmark.
In actuality, this isn't the case. The Radeon HD5870 is only marginally better, and sometimes worse, than the GTX295. This 3DMark Vantage benchmark proves it.
Remember, the current ATI cards (RV6xx - RV8xx) still use five-way VLIW units, an architecture that is not easy to optimize for. Not forgetting the fact that Fermi boosts its efficiency with parallel kernel support, which previously didn't exist in nVidia's architecture. There are also a myriad of other architecture changes - the SFUs are now decoupled from the SMs in a warp.
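The arithmetic behind that 52% figure, plus a sketch of how utilization can erase a theoretical lead, looks like this (the GFLOPS numbers are from the post above; the utilization factors are purely illustrative assumptions, not measurements):

```python
# Theoretical single-precision peaks quoted above (GFLOPS)
hd5870_peak = 2720
gtx295_peak = 1788

# Ratio of theoretical peaks: on paper the HD5870 is ~52% faster
theoretical_advantage = hd5870_peak / gtx295_peak - 1
print(f"theoretical advantage: {theoretical_advantage:.0%}")  # 52%

# Hypothetical utilization factors (assumptions for illustration only):
# five-way VLIW units are hard to keep filled, so assume lower
# real-world utilization for the VLIW design.
hd5870_util = 0.60  # assumed
gtx295_util = 0.85  # assumed

effective_ratio = (hd5870_peak * hd5870_util) / (gtx295_peak * gtx295_util)
print(f"effective ratio under these assumptions: {effective_ratio:.2f}x")  # 1.07x
```

Under those assumed utilization figures the two cards land within a few percent of each other, which is roughly the "marginally better, sometimes worse" picture the benchmarks show.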
In addition, the yield problems with the 40 nm manufacturing process used at TSMC did affect *both* ATI and nVidia. If you haven't heard yet, the blogger who claimed nVidia has a < 1.7% yield on the GT3xx is a notorious ATI fanboy. I shan't comment further on this.
You should really read *real* sources, like Anandtech. Disclaimer: I am typing this on a laptop with an ATI Mobility Radeon HD4670, Catalyst 9.12 driver, on Ubuntu.
Are you paid to post this bullshit?
Originally Posted by Kano
Fermi will be no choice for most people. No matter which OS they use. Because it is broken by design, fat and expensive. But dream on.
Let's see when you can buy it; the cards for testing are already out there. Most likely soon for the press too.
Which cards? The pre-castrated or the castrated ones? The ones with the originally planned clock speed or the downclocked ones?
Oh, and don't forget: Fermi is huge and expensive. How many $300 computers will be sold with a $600 card? Or $400 computers? $500, $600, $700 computers? You know, the stuff that makes up the majority of systems sold? The mainstream is sub-$200. No way Fermi will ever reach that. What is Nvidia's answer?
Oh wait! They renamed the G92 AGAIN.
That's not a G92, as it has VP4 and DX10.1 support, but I don't get the (re)naming scheme either.
As a hardcore ATI user you certainly feel satisfied with your judgement of your card when NV has production problems (ATI had them too, but at least they sold some DX11 cards). But that does not make fglrx any better.
Some users definitely have a direct comparison of features, not just OpenGL speed. fglrx "shines" in some older Linux PTS benchmarks but is unable to run Unigine Heaven. I would really like to know whether Fermi renders it correctly NOW. When such a benchmark is delayed for ages purely because of fglrx driver FAULTS, that's pure crap.
What do you gain from a DX11-capable card on Linux when it has several known drawbacks and no aspect where it is better? If it let you play games with tessellation (as Q always mentions) NOW, you would at least have a small advantage, but I definitely see none. I would not be surprised if NV shows a working Linux solution with that engine at CES and ATI does not.
Unigine Heaven works on fglrx 9.12, but they found some bugs in the Unigine engine; those are fixed and will ship alongside the fglrx 10.1 driver!
Originally Posted by Kano
"I would really like to know if Fermi correctly renders it NOW."
You really should check your temperature; I think you are ill and having feverish visions in your dreams...
There is no Fermi out there to render Unigine Heaven...
In the end, OpenGL 3.2 isn't the counterpart to DX11; Unigine uses an AMD extension for the tessellation.
I think we need to wait for OpenGL 4 for a full DX11 counterpart.
The 5870 is a $350 card (mid-high end). The GTX295 is a $500 card (high-end). The former beats the latter in most benchmarks - I'll let you work out the math.
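The math is just performance per dollar. A minimal sketch, using the prices above and assuming, purely for illustration, that the two cards trade blows (roughly equal overall benchmark performance):

```python
hd5870_price = 350  # USD, from the post
gtx295_price = 500  # USD, from the post

# Assumption for illustration: equal overall benchmark performance.
hd5870_perf = 1.0
gtx295_perf = 1.0

hd5870_value = hd5870_perf / hd5870_price
gtx295_value = gtx295_perf / gtx295_price

# Even at performance parity, the cheaper card wins on value:
print(f"HD5870 advantage per dollar: {hd5870_value / gtx295_value - 1:.0%}")  # 43%
```

And per the post, the 5870 actually wins most benchmarks outright, so the real per-dollar gap is even wider.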
Originally Posted by yjwong
For Ati's high-end card, you need to move up to the 5970. The GTX295 doesn't look all that hot now, does it?
Or maybe it does, if you take GTX295's power consumption into account. Literally.
Face it, there's very little reason to go with Nvidia's previous generation cards with Ati's current generation in the wild now. The only nvidia cards I'd touch are their lowest end $30 chips and only for Linux HTPCs. For almost everything else, an Ati card is a better bet *right now*. (Maybe Fermi will change that, maybe not. Time will tell).