
Thread: AMD Catalyst 9.12 Hotfix released

  1. #41
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by mirv View Post
    Here is some info - but of course that's from SemiAccurate.
    Yes, Nvidia is heading straight into bankruptcy.

    But that article is FUD! There is another article that is much closer to the truth:

    "Nvidia GT300 yields are under 2%"

    The Fermi G300 is a monstrously big chip.

    Even at an 80% yield rate a chip that size is very expensive!

    But at a 2% yield rate the Fermi G300 is a bankruptcy in the making!

    They should build mid-range chips that can be passively cooled, with a high yield rate and open-source drivers.

    Instead they only build bullshit mega-big, low-yield, extremely hot chips, with no open-source driver support from Nvidia.


    The G300 yield rate will bankrupt Nvidia, be sure of that!
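
    For what it's worth, here is a back-of-envelope sketch of why a huge die at a bad yield gets so expensive. The wafer cost, die areas and defect density are made-up illustrative numbers fed into the classic Poisson yield model, not real TSMC figures:

    Code:
    # Illustrative only: assumed wafer price, die areas and defect density.
    import math

    def good_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300, defects_per_cm2=0.5):
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        gross_dies = wafer_area / die_area_mm2                       # ignore edge losses
        die_yield = math.exp(-defects_per_cm2 * die_area_mm2 / 100)  # Poisson yield model
        return gross_dies * die_yield, die_yield

    WAFER_COST = 5000  # assumed dollars per 40 nm wafer
    for name, area in (("mid-range die, 180 mm^2", 180), ("monster die, 530 mm^2", 530)):
        good, y = good_dies_per_wafer(area)
        print(f"{name}: yield ~{y:.0%}, ~{good:.0f} good dies/wafer, "
              f"~${WAFER_COST / good:.0f} per good die")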

  2. #42
    Join Date
    Dec 2007
    Location
    Merida
    Posts
    1,102

    Default


  3. #43
    Join Date
    Sep 2007
    Location
    Singapore
    Posts
    5

    Default

    Quote Originally Posted by Qaridarium View Post
    Yes, Nvidia is heading straight into bankruptcy.

    But that article is FUD! There is another article that is much closer to the truth:
    You gotta be kidding me. If anything, I think you're spreading more FUD than the articles you quote.

    nVidia does more than just consumer graphics. Keep in mind that they also have Tegra, the Ion platform and the upcoming GeForce based on the Fermi architecture (read up on MIMD). On the market side, you should really compare the relative size of both companies. nVidia, with its graphics portfolio *alone*, garners a market cap of $10.04 billion, compared to $6.64 billion for AMD's combined portfolio of CPU, GPU and chipset technologies. (Just to put things into perspective, Intel's market cap is a whopping $112.26 billion.)

    Now, the issue today is not about who has the higher theoretical GFLOPS (it says "theoretical", you see). The actual throughput is all that matters. As you can see, the default configuration of the ATI Radeon HD5870 has 2720 GFLOPS of (theoretical) single-precision floating point performance. Compare that to a similarly-placed nVidia product, the nVidia GeForce GTX295, which has 1788 GFLOPS. On paper, you would therefore expect the Radeon HD5870 to be 52% faster in every benchmark. In actuality, this isn't the case. The Radeon HD5870 is only marginally better, and sometimes worse than the GTX295. This 3DMark Vantage benchmark proves this. Remember, the current ATI cards (RV6xx - RV8xx) still use five-way VLIW units, and that architecture is not easy to optimize for. Nor should we forget that Fermi boosts its efficiency with parallel kernel support, which previously didn't exist in the architecture. There are also a myriad of other architectural changes - the SFUs are now decoupled from the SMs in a warp.
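
    Just to sanity-check those figures, here is a quick sketch of where the theoretical numbers come from. Shader counts and clocks are the commonly published specs; take them as assumptions for the arithmetic:

    Code:
    # Rough peak single-precision throughput: shader ALUs x shader clock x FLOPs per clock.
    def peak_gflops(shaders, shader_clock_ghz, flops_per_clock):
        return shaders * shader_clock_ghz * flops_per_clock

    hd5870 = peak_gflops(1600, 0.850, 2)      # 1600 SPs, 850 MHz, MAD = 2 FLOPs/clock
    gtx295 = peak_gflops(2 * 240, 1.242, 3)   # dual GPU, 1242 MHz, MAD + MUL = 3 FLOPs/clock

    print(round(hd5870), round(gtx295))                   # ~2720 vs ~1788 GFLOPS
    print(f"{hd5870 / gtx295 - 1:.0%} higher on paper")   # ~52%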

    In addition, the yield problems with TSMC's 40 nm manufacturing process affected *both* ATI and nVidia. If you haven't heard yet, the blogger who claimed that nVidia has a < 1.7% yield on the GT3xx is a well-known ATI fanboy. I shan't comment further on this.

    You should really read up on *real* sources - like Anandtech and the like. Disclaimer: I am typing this on a laptop with ATI Mobility Radeon HD4670, Catalyst 9.12 driver on Ubuntu.

  4. #44
    Join Date
    Jul 2008
    Posts
    1,726

    Default

    Quote Originally Posted by Kano View Post
    @Qaridarium

    Fermi will definitely be the first choice for Linux users. Most likely for Windows too, if you have got enough money. Just because of the features that work now - not because of features that may work in the future. I have given up on the idea that XvBA will ever be fixed, and the Gallium3D approach of simulating VDPAU/VA-API or whatever with shaders will most likely not work on low-end cards - and buying medium/high-end cards for that is wasted money. Using the OSS drivers on those cards is like paying for a Porsche and driving a Fiat. I definitely do NOT need KMS; it does not matter if it flickers when switching from a VT to X. The 2-3 s delay is not optimal, but how often do you need to do that?
    Are you paid for posting this bullshit?

    Fermi will be no choice for most people. No matter which OS they use. Because it is broken by design, fat and expensive. But dream on.

  5. #45
    Join Date
    Aug 2007
    Posts
    6,634

    Default

    Let's see when you can buy it; the cards for testing are already out there. Most likely soon for the press, too.

  6. #46
    Join Date
    Jul 2008
    Posts
    1,726

    Default

    Which cards? The pre-castrated or the castrated ones? The ones with the originally planned clock speed, or the downclocked ones?

  7. #47
    Join Date
    Jul 2008
    Posts
    1,726

    Default

    Oh - and don't forget - Fermi is huge and expensive. How many $300 computers will be sold with a $600 card? Or $400 computers? $500, $600, $700 computers? You know - the stuff that makes up the majority of systems sold? The mainstream is sub-$200. There is no way Fermi will ever reach that. What is Nvidia's answer?
    Oh wait! They renamed the G92 AGAIN.
    http://www.semiaccurate.com/2009/12/...renamed-gt240/

  8. #48
    Join Date
    Aug 2007
    Posts
    6,634

    Default

    That's no G92, as it has VP4 and DX10.1 support, but I don't get the (re)naming scheme either.

    As a hardcore ATI user you certainly feel satisfied with your card judgement when NV has production problems (ATI does too, but at least they have sold some DX11 cards). But that does not make fglrx any better.

    Some users definitely have a direct comparison of features - not only of OpenGL speed. fglrx "shines" in some older Linux PTS benchmarks but is unable to run Unigine Heaven. I would really like to know whether Fermi renders it correctly NOW. But when such a benchmark is delayed for ages purely because of fglrx driver FAULTS, then that's pure crap.

    What do you gain from a DX11-capable card on Linux when it has several known drawbacks and not a single aspect where it is better? If it let you play games with tessellation (like Q always mentions) NOW, then you would at least have a small advantage, but I definitely see none. I would not be surprised if NV shows a working Linux solution with that engine at CES and ATI does not.

  9. #49
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by Kano View Post
    That's no G92, as it has VP4 and DX10.1 support, but I don't get the (re)naming scheme either.

    As a hardcore ATI user you certainly feel satisfied with your card judgement when NV has production problems (ATI does too, but at least they have sold some DX11 cards). But that does not make fglrx any better.

    Some users definitely have a direct comparison of features - not only of OpenGL speed. fglrx "shines" in some older Linux PTS benchmarks but is unable to run Unigine Heaven. I would really like to know whether Fermi renders it correctly NOW. But when such a benchmark is delayed for ages purely because of fglrx driver FAULTS, then that's pure crap.

    What do you gain from a DX11-capable card on Linux when it has several known drawbacks and not a single aspect where it is better? If it let you play games with tessellation (like Q always mentions) NOW, then you would at least have a small advantage, but I definitely see none. I would not be surprised if NV shows a working Linux solution with that engine at CES and ATI does not.
    Unigine Heaven works on fglrx 9.12, but they found some bugs in the Unigine engine; the fixes will be released together with the fglrx 10.1 driver!

    "I would really like to know whether Fermi renders it correctly NOW."

    You really should check your temperature; I think you are ill and having feverish visions in your dreams...

    There is no Fermi out there to render Unigine Heaven...

    In the end, OpenGL 3.2 isn't the counterpart to DX11; Unigine uses an AMD extension for the tessellation.

    I think we have to wait for OpenGL 4 for a full DX11 counterpart.
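
    If anyone wants to check what their own driver exposes, here is a rough sketch assuming PyOpenGL and GLUT are installed; GL_AMD_vertex_shader_tessellator is the AMD extension, GL_ARB_tessellation_shader is the GL4-style counterpart:

    Code:
    # Needs a live GL context before glGetString() returns anything, hence the GLUT window.
    from OpenGL.GLUT import glutInit, glutInitDisplayMode, glutCreateWindow, GLUT_RGB
    from OpenGL.GL import glGetString, GL_VERSION, GL_EXTENSIONS

    glutInit()
    glutInitDisplayMode(GLUT_RGB)
    glutCreateWindow(b"tess-check")

    version = glGetString(GL_VERSION).decode()
    extensions = (glGetString(GL_EXTENSIONS) or b"").decode().split()

    print("GL version:", version)
    print("AMD tessellation ext:", "GL_AMD_vertex_shader_tessellator" in extensions)
    print("GL4-style tessellation:", "GL_ARB_tessellation_shader" in extensions)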

  10. #50
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,146

    Default

    Quote Originally Posted by yjwong View Post
    The Radeon HD5870 is only marginally better, and sometimes worse than the GTX295. This 3DMark Vantage benchmark proves this
    The 5870 is a $350 card (mid-high end). The GTX295 is a $500 card (high-end). The former beats the latter in most benchmarks - I'll let you work out the math.
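
    Working out that math roughly (prices from above; the relative performance number is just an assumption for the illustration):

    Code:
    # Assumed performance parity; the real gap only tilts this further toward the 5870.
    cards = {"HD 5870": (350, 1.00), "GTX 295": (500, 1.00)}   # (price USD, relative perf)

    for name, (price, perf) in cards.items():
        print(f"{name}: {1000 * perf / price:.2f} perf per $1000")
    print(f"Even at parity the 5870 gives {500 / 350 - 1:.0%} more per dollar")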

    For Ati's high-end card, you need to move up to the 5970. The GTX295 doesn't look all that hot now, does it?

    Or maybe it does, if you take GTX295's power consumption into account. Literally.

    Face it, there's very little reason to go with Nvidia's previous generation cards with Ati's current generation in the wild now. The only nvidia cards I'd touch are their lowest end $30 chips and only for Linux HTPCs. For almost everything else, an Ati card is a better bet *right now*. (Maybe Fermi will change that, maybe not. Time will tell).
