
Thread: Goodbye ATI

  1. #131
    Join Date
    Aug 2007
    Posts
    6,679

    Default

    Maybe you use 2 displays and RealNC only 1?

  2. #132

    Default

    I have both single-screen and dual-screen setups. With AMD and a two-display setup there is tearing on the second display too, but not on the first.
    Last edited by RussianNeuroMancer; 03-09-2012 at 07:15 PM.

  3. #133
    Join Date
    Jan 2010
    Posts
    27

    Default

    Quote Originally Posted by RealNC View Post
    No, not by that much. The 560 is usually about 60-70% faster than a 4870 and the 560 Ti is about 70-80% faster. It's very, very rare to see a 2x performance difference.

    Here are benches of the 560 Ti vs the 4870. Note that they don't have benches of the plain 560 (non-Ti):

    http://www.anandtech.com/bench/Product/304?vs=330
    Those are Windows benchmarks. Under Linux + Wine, you'd be hard-pressed to get a 6970 to give you the same frame rates as an 8800GT in some DirectX-only titles. So yes, a 560 Ti can be 2x, 4x, and even 8x faster than a 4870 in some Windows games under Wine.

    Some of that is Wine optimizations for NV hardware, but much of it is fglrx blowing goats.

    Recent fglrx versions have closed the gap quite a bit. Before 11.11, my 5830s got about 1/8 to 1/4 the frame rates of my 8800GT in e.g. Champions Online (about 2-10 fps with the 5830s, around 18-60 with the 8800GT). As of 11.11 I get about half the performance of an 8800GT with my 5830. On Windows the 5830 gets about double the frame rates of an 8800GT in pretty much everything.

  4. #134
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by v8envy View Post
    Those are Windows benchmarks. Under Linux + Wine, you'd be hard-pressed to get a 6970 to give you the same frame rates as an 8800GT in some DirectX-only titles. So yes, a 560 Ti can be 2x, 4x, and even 8x faster than a 4870 in some Windows games under Wine.

    Some of that is Wine optimizations for NV hardware, but much of it is fglrx blowing goats.

    Recent fglrx versions have closed the gap quite a bit. Before 11.11, my 5830s got about 1/8 to 1/4 the frame rates of my 8800GT in e.g. Champions Online (about 2-10 fps with the 5830s, around 18-60 with the 8800GT). As of 11.11 I get about half the performance of an 8800GT with my 5830. On Windows the 5830 gets about double the frame rates of an 8800GT in pretty much everything.
    This problem comes from the 5D/4D VLIW architecture: you need strong optimizations in the AMD driver's shader compiler to get full speed out of it, and they just don't do that work for Linux and Wine.

    So yes, you are right for all HD2000-HD6000 cards.

    But the HD7970, for example, isn't VLIW anymore.

    This means the HD7000 cards should give much better results on Linux and Wine.

    Maybe you should try an HD7000 card.
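
    To illustrate why this matters, here is a toy model of VLIW slot packing (my own sketch in Python, not AMD's actual shader compiler): each VLIW4 bundle holds up to 4 independent operations, and an operation that reads a result produced in the current bundle has to wait for the next one. A dependent chain therefore runs at a quarter of peak, no matter how many ALUs the card has:

        # Toy greedy VLIW4 packer; hypothetical names, for illustration only.
        def pack_vliw(ops, width=4):
            """Pack (dst, srcs) ops into bundles of up to `width` independent slots."""
            bundles, current, written = [], [], set()
            for dst, srcs in ops:
                # Start a new bundle if this op reads a value produced in the
                # current bundle, or if the current bundle is already full.
                if len(current) == width or any(s in written for s in srcs):
                    bundles.append(current)
                    current, written = [], set()
                current.append(dst)
                written.add(dst)
            if current:
                bundles.append(current)
            return bundles

        # Dependent chain: each op reads the previous result -> 1 op per bundle.
        chain = [("t1", ["a"]), ("t2", ["t1"]), ("t3", ["t2"]), ("t4", ["t3"])]
        # Independent ops: a good compiler packs all 4 into one bundle.
        par = [("x", ["a"]), ("y", ["b"]), ("z", ["c"]), ("w", ["d"])]
        print(len(pack_vliw(chain)))  # 4 bundles -> 25% slot utilization
        print(len(pack_vliw(par)))    # 1 bundle  -> 100% slot utilization

    A GCN-style SIMD design like the HD7000 doesn't rely on this kind of static packing, which is why the driver's compiler matters much less there.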

  5. #135
    Join Date
    Apr 2010
    Posts
    1,946

    Default

    Quote Originally Posted by Qaridarium View Post
    This problem comes from the 5D/4D VLIW architecture: you need strong optimizations in the AMD driver's shader compiler to get full speed out of it, and they just don't do that work for Linux and Wine.

    So yes, you are right for all HD2000-HD6000 cards.

    But the HD7970, for example, isn't VLIW anymore.

    This means the HD7000 cards should give much better results on Linux and Wine.

    Maybe you should try an HD7000 card.
    If this is correct, then VLIW is a tomb.

    VLIW is only as good as its optimization. That means whoever writes the driver needs specific tools; if those tools are unavailable, you can do nothing, no matter how many people are working on it.

    This means that if AMD does not release its VLIW driver tools, they are the only ones who can write a driver.

    And if they flip the chicken-and-egg problem around to "no sales, no driver" (instead of driver -> sales) and refuse to write the driver, the VLIW architecture becomes a real tomb, because the card will only work where AMD sees fit.

  6. #136
    Join Date
    Apr 2011
    Posts
    387

    Default

    Measuring general FLOPS is wrong; the right thing is to measure MAC-FLOPS. Example: madd (x = a*b) is 1 general FLOP, because it performs one * or + action, but 2 MAC-FLOPS, because it acts on 2 operands (a, b). fmac (x = a*b + c) is 2 general FLOPS, because it performs 2 fused actions (* and +), but 3 MAC-FLOPS, because it acts on 3 operands (a, b, c). So fmac is 1.5-2 times faster than madd. Examples:

    GTX280: 240 MIMD cores (32-bit) × 1.4 GHz × 2-3 (MADD+MUL) ≈ 1 MAC-TFLOP

    Radeon 6900: 384 VLIW4 cores × 4 32-bit executions × 900 MHz = 1.35 general TFLOPS (32-bit); ×2 (MADD) = 2.7 MAC-TFLOPS (32-bit)

    GTX580: 512 MIMD (dual-issue) cores × 1 64-bit or 2 32-bit executions × 1.55 GHz × 2-3 (FMAC) = 1.6 general TFLOPS (64-bit), or 3.2 general TFLOPS (32-bit), or 4.7 MAC-TFLOPS (32-bit)

    Radeon 7000: 512 SIMD4 cores × 4 32-bit executions × 900 MHz × 2-3 (FMAC) = 3.8 general TFLOPS (32-bit), or 5.7 MAC-TFLOPS (32-bit)

    GTX600: 2× the cores of the GTX580 × 2× the bitrate (128-bit quad issue, at the same transistor count) = 6.5 general TFLOPS (64-bit), or 13 general TFLOPS (32-bit), or 20 MAC-TFLOPS (32-bit).

    While AMD gains 2× FLOPS/watt per generation, NVIDIA has gained 4× since the GTX280. Also, AMD has mediocre OpenGL and bad D3D-to-OGL translation, regardless of the generation. AMD is not even close to Fermi and Kepler in programmability: no full native integer, no full native 64-bit, no good VM like CUDA for broad multi-language support. And AMD is not cheap: I can buy a GTX 460 new for 100 bucks in my country, overclock it to 1.8-1.9 GHz, and get near-GTX580 (or 75% of Radeon 7000) performance, and I use wine-mediacoder_cuda for H.264 encoding.
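
    For anyone who wants to check the arithmetic, here is the same peak-throughput calculation in Python (a sketch using the figures from this post, which are my own estimates, not official specs):

        # Peak TFLOPS = cores * lanes per core * clock (GHz) * FLOPs counted
        # per lane per cycle, divided by 1000. The "MAC-FLOPS" factor is the
        # operand-counting convention described above, not a standard metric.
        def peak_tflops(cores, lanes, clock_ghz, flops_per_lane):
            return cores * lanes * clock_ghz * flops_per_lane / 1000.0

        # GTX280: 240 cores, 1 lane, 1.4 GHz, x3 for MADD+MUL
        print(peak_tflops(240, 1, 1.4, 3))   # ~1.0 MAC-TFLOP (above: 1)

        # Radeon 6900: 384 VLIW4 cores, 4 32-bit lanes, 0.9 GHz
        print(peak_tflops(384, 4, 0.9, 1))   # ~1.38 general TFLOPS (above: 1.35)
        print(peak_tflops(384, 4, 0.9, 2))   # ~2.76 MAC-TFLOPS with MADD (above: 2.7)

        # GTX580: 512 cores, 2 32-bit executions, 1.55 GHz, x2 for FMAC
        print(peak_tflops(512, 2, 1.55, 2))  # ~3.17 general TFLOPS (above: 3.2)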

  7. #137
    Join Date
    Oct 2007
    Posts
    321

    Default

    Quote Originally Posted by karmakoma View Post
    I've also switched to nvidia (GTX 560). Solved problems (using closed nvidia driver):
    • games under wine work,
    • accelerated video decoding without problems (more formats supported),
    • hibernation/sleep problems solved (under fglrx it usually worked),
    • vsync works,
    • KDE works without screen problems (seems like uninitialized video memory)


    Before that I used an HD4870; that card is great, but only under Windows (it is almost as powerful as my current NVIDIA card). Using fglrx wasn't so bad; the problem was the many, many small issues that made the overall experience quite hard.
    The vertical sync issue seems to be getting a bug fix within the next two releases (see the unofficial bug report).

  8. #138
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by artivision View Post
    Measuring general FLOPS is wrong; the right thing is to measure MAC-FLOPS. Example: madd (x = a*b) is 1 general FLOP, because it performs one * or + action, but 2 MAC-FLOPS, because it acts on 2 operands (a, b). fmac (x = a*b + c) is 2 general FLOPS, because it performs 2 fused actions (* and +), but 3 MAC-FLOPS, because it acts on 3 operands (a, b, c). So fmac is 1.5-2 times faster than madd. Examples:

    GTX280: 240 MIMD cores (32-bit) × 1.4 GHz × 2-3 (MADD+MUL) ≈ 1 MAC-TFLOP

    Radeon 6900: 384 VLIW4 cores × 4 32-bit executions × 900 MHz = 1.35 general TFLOPS (32-bit); ×2 (MADD) = 2.7 MAC-TFLOPS (32-bit)

    GTX580: 512 MIMD (dual-issue) cores × 1 64-bit or 2 32-bit executions × 1.55 GHz × 2-3 (FMAC) = 1.6 general TFLOPS (64-bit), or 3.2 general TFLOPS (32-bit), or 4.7 MAC-TFLOPS (32-bit)

    Radeon 7000: 512 SIMD4 cores × 4 32-bit executions × 900 MHz × 2-3 (FMAC) = 3.8 general TFLOPS (32-bit), or 5.7 MAC-TFLOPS (32-bit)

    GTX600: 2× the cores of the GTX580 × 2× the bitrate (128-bit quad issue, at the same transistor count) = 6.5 general TFLOPS (64-bit), or 13 general TFLOPS (32-bit), or 20 MAC-TFLOPS (32-bit).

    While AMD gains 2× FLOPS/watt per generation, NVIDIA has gained 4× since the GTX280. Also, AMD has mediocre OpenGL and bad D3D-to-OGL translation, regardless of the generation. AMD is not even close to Fermi and Kepler in programmability: no full native integer, no full native 64-bit, no good VM like CUDA for broad multi-language support. And AMD is not cheap: I can buy a GTX 460 new for 100 bucks in my country, overclock it to 1.8-1.9 GHz, and get near-GTX580 (or 75% of Radeon 7000) performance, and I use wine-mediacoder_cuda for H.264 encoding.
    The same old, wrong story again. This is only true if your workload actually uses FMA. For general use, check the speed with Bitcoin as a benchmark; there the AMD cards rip the NVIDIA cards apart.

    But sure, you can explain to us why AMD cards are so good at Bitcoin.

    It's the same with the AMD FX-8150 vs. Intel and FMA4: most programs don't use FMA4, and because of that Intel wins all the time.

    Same thing with the NVIDIA card: FMA is really workload-specific, not "general".
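
    The point in code (a rough Python sketch of my own, not a real profiler): FMA only pays off when the inner loop is literally a*b+c. A shader-style dot product maps onto one FMA per element; a SHA-256 round (what Bitcoin actually hashes) is integer rotate/xor with no multiply at all, so the FMA units sit idle and peak-FMA numbers tell you nothing about it:

        def dot(a, b):
            """Shader-style inner loop: each step is a*b+c -> one FMA."""
            acc = 0.0
            for x, y in zip(a, b):
                acc = x * y + acc  # fusable multiply-add pattern
            return acc

        def sha256_sigma0(x):
            """SHA-256 Sigma0: rotates and xors of a 32-bit int, no a*b+c anywhere."""
            rotr = lambda v, n: ((v >> n) | (v << (32 - n))) & 0xFFFFFFFF
            return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)

        print(dot([1.0, 2.0], [3.0, 4.0]))     # 11.0
        print(hex(sha256_sigma0(0x12345678)))  # pure integer ops, zero FMAs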

  9. #139
    Join Date
    Apr 2011
    Posts
    387

    Default

    FMA is not an instruction set, and it is general, though not guaranteed: it is just the two functions of MADD (* and +) fused. You need well-written, clean code to achieve more than 95% fusion, and the Bitcoin code lacks that. Here are some benchmarks: http://www.xbitlabs.com/articles/gra...k_5.html#sect3 In Unigine with heavy tessellation, Fermi crushes Radeon by 1.5-2x!

    Anyway, I want to correct my previous post about the GTX600. I have new information: the GTX600 has only 3.5 billion transistors at 28nm against the GTX580's 3 billion at 40nm, has 1024 64-bit cores (of the GTX500 kind), and a 1.8 GHz clock with "turbo boost" near 2 GHz, at only 180 W (30% less than the GTX580). It offers 2.5× the GTX500 performance and 3.4× the performance/watt (4 TFLOPS @ 64-bit, 8 TFLOPS @ 32-bit, 12 MAC-TFLOPS). It also offers 2.1× the Radeon 7970 performance at lower wattage, which means NVIDIA could ship a 1536-core card in the near future (3 months later).

    Anyway, this is Phoronix, Linux (libre): we shouldn't talk about things that don't work on Linux. So why are some of you talking about Radeon cards here?

  10. #140
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by artivision View Post
    FMA is not an instruction set, and it is general, though not guaranteed: it is just the two functions of MADD (* and +) fused. You need well-written, clean code to achieve more than 95% fusion, and the Bitcoin code lacks that. Here are some benchmarks: http://www.xbitlabs.com/articles/gra...k_5.html#sect3 In Unigine with heavy tessellation, Fermi crushes Radeon by 1.5-2x!

    Anyway, I want to correct my previous post about the GTX600. I have new information: the GTX600 has only 3.5 billion transistors at 28nm against the GTX580's 3 billion at 40nm, has 1024 64-bit cores (of the GTX500 kind), and a 1.8 GHz clock with "turbo boost" near 2 GHz, at only 180 W (30% less than the GTX580). It offers 2.5× the GTX500 performance and 3.4× the performance/watt (4 TFLOPS @ 64-bit, 8 TFLOPS @ 32-bit, 12 MAC-TFLOPS). It also offers 2.1× the Radeon 7970 performance at lower wattage, which means NVIDIA could ship a 1536-core card in the near future (3 months later).

    Anyway, this is Phoronix, Linux (libre): we shouldn't talk about things that don't work on Linux. So why are some of you talking about Radeon cards here?
    "In Unigine with heavy tessellation, Fermi crushes Radeon"
    You live in the past: the HD7970 is faster in tessellation than your GTX580.
    Old story, in other words.

    "FMA is not instruction set and it is general, but not certain. It is just the two functions of Madd(*,+) fused. You must have a well written and clean code in order to achieve more than 95% fusion."

    Blah blah blah. You can't count it if your software doesn't use FMA.

    Same with AMD and FMA4: if your software uses it, then the AMD chip is 56 times faster than the Intel one.
    FMA is the only case where NVIDIA wins, and you call it "general" only to score a point, but that is just not true!

    "Any way I want to correct my previous post for gtx600. "

    The HD7970 runs at 1.4 GHz (base is 900 MHz), and the Guinness Book of Records lists the HD7970 as the fastest card on earth, not an NVIDIA GTX680.

    To beat a GTX680 you only need high resolutions: then its 2 GB of VRAM becomes a bottleneck and the HD7970 wins because of its 3 GB of VRAM.

    Also, NVIDIA is known to manipulate many benchmarks; just look at PhysX, which works only on NVIDIA.

    There is also an HD7970 coming with 6 GB of VRAM (not a dual card, meaning the full 6 GB is usable). Only stupid people think the GTX680's 2 GB of VRAM is better for GPU compute tasks than 6 GB.

    The GTX680 is only for poor people.
