03-09-2012, 06:03 PM
Maybe you use 2 displays and RealNC only 1?
03-09-2012, 06:12 PM
I have both single-screen and dual-screen setups. With AMD and a two-display setup there is tearing on the second display too, but not on the first.
Last edited by RussianNeuroMancer; 03-09-2012 at 06:15 PM.
03-13-2012, 02:20 AM
Those are Windows benchmarks. Under Linux + Wine, you'd be hard-pressed to get a 6970 to match the frame rates of an 8800GT in some DirectX-only titles. So yes, a 560 Ti can be 2x, 4x, even 8x faster than a 4870 in some Windows games under Wine.
Originally Posted by RealNC
Some of that is wine optimizations for NV hardware, but much of that is fglrx blowing goats.
Recent fglrx versions have closed the gap quite a bit. Before 11.11, my 5830s got about 1/8 to 1/4 the frame rates of my 8800GT in e.g. Champions Online (about 2-10 fps with the 5830s, around 18-60 with the 8800GT). As of 11.11 I get about half the performance of an 8800GT with my 5830. On Windows the 5830 gets about double the frame rates of an 8800GT in pretty much everything.
03-13-2012, 03:25 AM
This problem comes from the 4D/5D VLIW architecture: you need strong compiler optimizations in the AMD driver to get full speed out of it, and they just don't do that work for Linux and Wine.
Originally Posted by v8envy
So yes, you are right for all HD 2000-HD 6000 cards.
But the HD 7970, for example, isn't VLIW anymore.
This means the HD 7000 cards should give much better results under Linux and Wine.
Maybe you should test an HD 7000 card.
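To make the point above concrete, here is a minimal, hypothetical sketch (not from any real driver) of why VLIW throughput lives or dies by the compiler: the hardware executes fixed-width instruction bundles, and every slot the scheduler cannot fill with an independent operation is simply wasted.

```python
# Toy model of VLIW bundle packing. deps[i] is the set of op indices
# that op i depends on; an op can only issue once its inputs are done.

def pack_bundles(ops, deps, width=4):
    """Greedily pack ops into `width`-wide bundles, respecting deps."""
    done, bundles = set(), []
    remaining = list(range(len(ops)))
    while remaining:
        bundle = []
        for i in list(remaining):
            if len(bundle) == width:
                break
            if deps[i] <= done:          # all inputs computed in earlier bundles
                bundle.append(i)
        for i in bundle:
            remaining.remove(i)
        done |= set(bundle)
        bundles.append(bundle)
    return bundles

# 8 ops: the first 4 are independent, the last 4 each depend on one result.
deps = [set(), set(), set(), set(), {0}, {1}, {2}, {3}]
packed = pack_bundles(list(range(8)), deps, width=4)
print(len(packed), "bundles,", 8 / (len(packed) * 4) * 100, "% slot utilization")
```

A good scheduler fits these 8 ops into 2 full bundles (100% utilization); a naive one issuing one op per bundle would need 8 bundles (25%). That 4x gap is exactly the kind of performance a poorly optimized VLIW driver leaves on the table.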
03-13-2012, 05:28 AM
If this is correct, then VLIW is a tomb.
Originally Posted by Qaridarium
VLIW is only as good as its optimization. Whoever writes the driver needs specific tools, and if those tools are unavailable, you can do nothing no matter how many people are working on it.
This means that if AMD does not release its VLIW driver tools, they are the only ones who can write a driver.
And if they turn the chicken-and-egg problem into "no sales, no driver" (instead of driver -> sales) and refuse to write the driver, the VLIW architecture becomes a real tomb, because the card will only work where AMD sees fit.
03-13-2012, 01:00 PM
Measuring general FLOPS is wrong; the right thing is measuring MAC-FLOPS. Example: madd (x = a*b) counts as 1 general FLOP because it performs one * or one + action, but as 2 MAC-FLOPS because it acts on 2 operands (a, b). fmac (x = a*b + c) counts as 2 general FLOPS because it performs 2 fused actions (* and +), but as 3 MAC-FLOPS because it acts on 3 operands (a, b, c). So fmac is 1.5-2 times faster than madd. Examples:
GTX 280: 240 MIMD cores (32-bit) x 1.4 GHz x 2-3 (madd+mul) = 1 MAC-TFLOP
Radeon 6900: 384 VLIW4 cores x 4 32-bit executions x 900 MHz = 1.35 general TFLOPS (32-bit) x 2 (madd) = 2.7 MAC-TFLOPS (32-bit)
GTX 580: 512 MIMD (dual-issue) cores x 1 64-bit or 2 32-bit executions x 1.55 GHz x 2-3 (fmac) = 1.6 general TFLOPS (64-bit), or 3.2 general TFLOPS (32-bit), or 4.7 MAC-TFLOPS (32-bit)
Radeon 7000: 512 SIMD4 cores x 4 32-bit executions x 900 MHz x 2-3 (fmac) = 3.8 general TFLOPS (32-bit) or 5.7 MAC-TFLOPS (32-bit)
GTX 600: 2x the cores (vs. GTX 580) x 2x the bit rate (128-bit quad-issue, at the same transistor count) = 6.5 general TFLOPS (64-bit), or 13 general TFLOPS (32-bit), or 20 MAC-TFLOPS (32-bit).
While AMD gains 2x FLOPS/watt per generation, NVIDIA has gained 4x since the GTX 280. AMD also has mediocre OpenGL and bad D3D-to-OGL translation, regardless of generation. AMD is not even close to Fermi and Kepler in programmability: no full native integer, no full native 64-bit, no good VM like CUDA for broad multi-language support. And AMD isn't even cheap: I can buy a GTX 460 new for 100 bucks in my country, overclock it to 1.8-1.9 GHz, and get near-GTX 580 (or 75% of Radeon 7000) performance, and I use Wine + MediaCoder CUDA for H.264 encoding.
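The per-card arithmetic in this post can be re-run with one formula: theoretical peak is cores x lanes per core x clock x operations per cycle. The sketch below only reproduces the poster's own figures; they are the post's assumptions, not verified specifications.

```python
# Theoretical peak throughput in TFLOPS, using the post's own counting:
# cores x SIMD lanes per core x clock (GHz) x ops issued per cycle.

def peak_tflops(cores, lanes_per_core, clock_ghz, ops_per_cycle):
    return cores * lanes_per_core * clock_ghz * ops_per_cycle / 1000.0

# Radeon 6900 (post's figures): 384 VLIW4 cores x 4 lanes x 0.9 GHz x 2 (madd)
print(peak_tflops(384, 4, 0.9, 2))   # ~2.76, the post's "2.7 MAC-TFLOPS"

# GTX 580 (post's figures): 512 cores x 2 32-bit executions x 1.55 GHz
print(peak_tflops(512, 2, 1.55, 1))  # ~1.59... x2 for fmac gives the "3.2" figure
```

Note this is a paper number: real workloads only reach it when every issue slot is filled, which is exactly the optimization problem discussed earlier in the thread.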
03-16-2012, 03:35 AM
The vertical sync issue seems to be getting a bug fix within the next two releases (See the Unofficial bug report)
Originally Posted by karmakoma
03-16-2012, 03:54 AM
Old and wrong story again. This is only true if you need FMA. For general use you can check the speed with Bitcoin as a benchmark, and in fact the AMD cards rip the NVIDIA cards apart.
Originally Posted by artivision
But yes, maybe you can explain to us why AMD cards are so good at Bitcoin.
It's the same with the AMD FX-8150 vs. Intel and FMA4: most programs don't use FMA4, which is why Intel wins all the time.
Same story with the NVIDIA card: FMA is really specific, not "general".
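The "FMA only helps if the code uses it" argument can be put in numbers with a toy model (my construction, not from either poster): FMA hardware retires a fused multiply-add in one issue slot, so the speedup is bounded by the fraction of operations that actually pair up as a*b + c, not by the peak FMA rate.

```python
# Toy model: speedup from FMA hardware given how many mul+add pairs
# in a kernel are actually fusable into a single a*b + c instruction.

def fma_speedup(fusable_pairs, total_ops):
    """Issue slots without FMA = total_ops; with FMA, each fusable
    mul+add pair collapses into one slot."""
    fused_slots = total_ops - fusable_pairs
    return total_ops / fused_slots

print(fma_speedup(0, 100))    # no fusable pairs: 1.0x, FMA buys nothing
print(fma_speedup(50, 100))   # every op fused:   2.0x, the headline number
```

A Bitcoin-style integer-rotate workload sits near the first case, a dense linear-algebra kernel near the second, which is consistent with both sides of this argument.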
03-16-2012, 03:43 PM
FMA is not an instruction set, and it is general but not guaranteed: it is just the two functions of madd (* and +) fused. You must have well-written, clean code in order to achieve more than 95% fusion, and the Bitcoin code lacks that. Here are some benchmarks: http://www.xbitlabs.com/articles/gra...k_5.html#sect3 In Unigine with heavy tessellation, Fermi crushes Radeon by 1.5-2x!

Anyway, I want to correct my previous post about the GTX 600. I have new information: the GTX 600 has only 3.5 billion transistors at 28 nm against the GTX 580's 3 billion at 40 nm, has 1024 64-bit cores (GTX 500 kind), and a 1.8 GHz clock with "turbo boost" near 2 GHz, at only 180 W (30% less than the GTX 580). It offers 2.5x the GTX 500 performance and 3.4x the performance/watt (4 TFLOPS @ 64-bit, 8 TFLOPS @ 32-bit, 12 MAC-TFLOPS). It also offers 2.1x the Radeon 7970 performance at lower wattage, which means NVIDIA could ship a 1536-core card in the near future (3 months later).

Anyway, this is Phoronix, a Linux (libre) site; we shouldn't be talking about something that doesn't work on Linux. So why are some of you talking about Radeon cards here?
03-17-2012, 05:07 AM
"In Unigine with heavy tessellation, Fermi crushes Radeon"
Originally Posted by artivision
You live in the past, because the HD 7970 is faster at tessellation than your GTX 580.
So that's an old story.
"FMA is not instruction set and it is general, but not certain. It is just the two functions of Madd(*,+) fused. You must have a well written and clean code in order to achieve more than 95% fusion."
Blah blah blah, and it doesn't count if your software doesn't use FMA.
Same with AMD and FMA4: if your software uses it, then the AMD is 56 times faster than the Intel.
FMA is the only case where NVIDIA wins, and you call it "general" only to score a point, but that is just not true!
"Any way I want to correct my previous post for gtx600. "
The HD 7970 runs at 1.4 GHz (base is 900 MHz), and the Guinness Book of Records lists the HD 7970 as the fastest card on earth, not an NVIDIA GTX 680.
To beat a GTX 680 you only need high resolutions: then the 2 GB of VRAM becomes a bottleneck, and the HD 7970 wins because of its 3 GB of VRAM.
Also, NVIDIA is known to manipulate many benchmarks, if only because PhysX works on NVIDIA alone.
There is also an HD 7970 coming with 6 GB of VRAM (not a dual card, so all 6 GB is usable); only stupid people think 2 GB of VRAM in the GTX 680 is better for GPU compute tasks than 6 GB.
The GTX 680 is only for poor people.