The GTX680 does ~3.2 TFLOPS at 32-bit; its 64-bit performance is very poor, a lot worse than the HD7970's.
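Rough back-of-the-envelope math, for anyone who wants to check (just a sketch using the cards' reference clocks, the standard 2 FLOPs per core per cycle for FMA, and the published FP64 rate ratios; boost clocks and real-world efficiency are ignored):

    # peak throughput = shader cores * clock (GHz) * FLOPs per core per cycle (2 for FMA)
    def peak_tflops(cores, ghz, flops_per_cycle=2):
        return cores * ghz * flops_per_cycle / 1000.0

    gtx680_fp32 = peak_tflops(1536, 1.006)  # ~3.09 TFLOPS
    gtx680_fp64 = gtx680_fp32 / 24          # GK104 runs FP64 at 1/24 the FP32 rate: ~0.13 TFLOPS
    hd7970_fp32 = peak_tflops(2048, 0.925)  # ~3.79 TFLOPS
    hd7970_fp64 = hd7970_fp32 / 4           # Tahiti runs FP64 at 1/4 the FP32 rate: ~0.95 TFLOPS

On paper that gives the HD7970 roughly 7x the GTX680's 64-bit throughput.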
EDIT: I still agree that fglrx is junk.
I see that on Phoronix many users without any knowledge of graphics hardware talk bullshit about one vendor or another. A simple example: "artivision". This guy talks about fake GPGPU or broken OpenCL. When the HD4000 series was released, the OpenCL specification wasn't even finalized, so OpenCL problems on the HD4000 are not AMD's fault. And what about GPGPU on the X1000 through HD3000 series? Just look at Brook+... (on the X1000, GPGPU worked via assembly, with the calculations done by the pixel shaders). The funniest comment is "HD7000 = driver cheats in quality" :D OMG... you talk bullshit like a troll...
I have a simple answer for everyone who hates AMD: just buy NV, or crappy Intel HD2000-HD4000 with its slow 3D performance (3D performance on Linux is worse than on Windows, while on AMD and NVIDIA we get the same 3D performance on both systems) and only Mesa drivers... That doesn't mean the Mesa project is bad; it's a great project, but the current open source drivers aren't enough for some use cases. So I prefer a GPU that supports BOTH open source drivers (for users who don't like proprietary software) and binary drivers (for users who need all the card's features working: 3D performance, GPGPU, etc.). That's why I choose AMD cards.
And you didn't notice that even AMD's open source devs are unhappy about some things, like the unreleased power management specs.
Listen well, it's the last time I speak on this subject: when you have 2 cards working together, you only get +50% frame rate. Ignorant people believe the 2 cards simply don't scale well, but that is impossible for stream computing, and on top of that both cards run at full wattage (so they are at their peak FLOPS). The thing is that when you have 2 cards, the driver works in a quality-and-precision mode. That's why you only get +50% frame rate. These modes are not exposed to benchmarks. Today's benchmarks are not capable of measuring quality and precision (for example: how many 64x64 vs 256x256 objects are on screen, say 20/80 or 40/60). That's the trick AMD uses to appear faster. Nvidia did it too with the 7900 GTX vs the X1950, when it took two 7900s to match one X1950. And Fermi is 1.6 TFLOPS at 64-bit, while Kepler is 3.2 TFLOPS at 64-bit; there is no 32-bit measurement: 1536 64-bit CUDA cores * 1.1 GHz * FMAC (2 general ops or 3 stream ops).
You are the one without knowledge. Even if I didn't want to learn, it would be impossible for me not to, because I have someone on the inside, and I'm saying no more. As for AMD: Brook+ is a joke with a bad-quality video converter inside Catalyst. And the HD4000 doesn't have a shared cache; that's why OpenCL support was discontinued for that card.
Nvidia plans to ship the Kepler GK110 GPU only as the Tesla K20 this year (if you are using that as your Kepler reference speed). That card has no video outputs, so if you want to use it in a workstation you have to pair it with a second GPU, most likely something GK104-based (similar to the desktop GTX 680, but usually sold as the Tesla K10 for this purpose). GK110 should beat every AMD GPU for high performance computing right now. But most users buy desktop cards, and you can definitely find benchmarks where AMD runs OpenCL apps faster than Nvidia on those. AMD does not (yet?) produce chips specifically optimized for HPC; basically they use the same silicon as for desktops.

AMD cripples workstation OpenGL features on consumer cards just like Nvidia does for its workstation brand. However, those features are only relevant to apps similar to SPECviewperf; for Linux the only example I see is Maya. Maybe somebody could test Blender to see whether it matters or not. As a side note, it is very simple to enable FireGL features on Linux ;)

I would not decide which card to buy based on OpenCL performance, and looking only at the raw throughput of the GPU is extra stupid. I would decide on driver quality, support for the distro I want to run, and which OS is my main one. If you only use Linux for office/internet and Windows for gaming, then the vendor does not matter much.
Some years ago, when I was first trying to understand how Linux's graphics stack worked, I was really excited by the first attempts at compositing window managers for Linux desktop environments, KDE in my case. In those dark ages I owned a not-so-cheap (bought with hard-earned money) GeForce FX 5200 card with 256MB of DDR2 VRAM, because I had found out that nvidia's Linux driver was the best around, and I was sure of my choice even though the ATI cards of the time were faster and cheaper than my FX 5200. I installed a SUSE distro of the era, installed all the packages needed to build the driver, installed the nvidia blob, configured xorg.conf by hand, and ran the incredible Beryl WM with the transparent cube and all the other exciting stuff!! I was really happy back then, demonstrated my 3D desktop to all of my friends (the ones who knew some basic stuff about computers), and was really proud, until I discovered that if you opened more than 3 windows and left them running at the same time, every further window you opened would be dead black!! Just black! Nothing to see, nothing to do!! :(
I then tried reading the driver's README for a solution; nothing there either.
So I went looking for help on the nvnews forum. I found a thread about the problem, full of wild guesses and dev answers in the style of "Is your BIOS up to date? Are your drivers up to date?", and if everything was up to date, "we will work on the issue"... How nice... A friend of mine had an ATI card of the era and could use Beryl with as many open windows as he liked, using the Xgl approach for compositing. (I had better performance, but who cared, when I couldn't open more than 3 windows at the same time.)
I joined the thread, and after some time it turned out that the problem was caused by nvidia's buggy way of doing compositing via direct rendering. With the help of the other guys there (beryl/compiz devs included), we figured out that VRAM was somehow filling up and never being reclaimed, and after that no new window contents could be displayed!
I then switched to indirect rendering, with much success, via the small fusion-icon tray app, and we all set about tweaking the driver and Beryl/Compiz Fusion to get the performance as close as possible to the official direct rendering path. Our thread ran to 70+ pages of discussion on nvnews. At that point some strange members showed up praising nvidia, accusing all of us of whining and the Linux kernel of being buggy. I answered one of them really rudely (in another thread) and fell into the trap... the nvidia moderators found the excuse they needed to ban me from nvnews for good!
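(For anyone curious: if I remember right, the workaround was just ticking "Indirect Rendering" in fusion-icon, which roughly amounted to launching the compositor with something like `compiz --indirect-rendering ccp` instead of letting it use direct GLX rendering; the exact flags varied between Beryl and Compiz Fusion versions, so don't take that invocation as gospel.)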
After a year or more they managed to make the black windows appear at the 10th window opened rather than the 3rd, and I can't recall when, or even whether, they completely fixed the issue. I went on helping new users via the then compiz-fusion, now compiz, forums and IRC (too bad the forum is dead now..) with configuring their hardware for a good compositing experience, with both ATI (always the newest fglrx) and nvidia cards; my guides are still there, though mostly deprecated by now. Since then I have never bought an nvidia card again, even though fglrx was crap.
The point is that nvidia never adopted an acceleration or even a rendering method proposed by the Xorg project, always their own proprietary, arrogant, buggy ways; they never supported open source efforts (see nouveau, or Optimus support, for example); and the company is generally full of arrogant idiots, and I don't know what excuse they find to behave like that even now that AMD has the upper hand with the HD 7000 series and fglrx is as capable as their blob.
On the other side, maybe fglrx is still not what it should be, but we have public open source support from AMD (not all we would hope for, but it is there), and fglrx offers almost same-day hardware support. Furthermore, we have actual people from AMD to talk to (see Bridgman et al.).
So, in conclusion, I couldn't agree more with Linus about nvidia!
Yes, I agree too. But the most important feature of a GPU is to work as intended (in all games); that's why they exist in the first place. And that's why my advice goes in two directions: a) If you are a gamer, buy Nvidia, because if you don't, you will go back to Windows for sure. b) If you are not a gamer, buy Intel, if you want truly libre, high-quality things and someone working for you. AMD has neither quality nor freedom; other companies like Imagination Technologies are the same as AMD.
Since you mentioned Imagination Tech, I must say I hate them just as much as nvidia, maybe more!
I still have an old PowerVR Kyro 2 AGP card here, historically the first card I ever bought, that I could easily use in an old office system, if only those even more arrogant English fools would make the driver build against 2.6 kernels, or release some open documentation for a card that's 10 years old!
I used to run Mandrake 9.1 with their magical binary blob, which back then was even better than nvidia's and could be installed as a normal RPM.
No, Imagination Tech deserves nowhere near the same credit as AMD for Linux community support. ImTech is by far the most proprietary, uncaring, closed, unsupportive, narrow-minded, nasty, arrogant company of them all today. It reminds me of Creative with sound cards.
You are right at most 50% ;) In case you don't know, 3D performance on AMD cards is the same on both Linux and Windows, so for games these cards are really good. Both AMD and NV are good choices for a Linux gamer. I use AMD cards on Linux, and what happened? I didn't go back to Windows. Surprise?

Quote:
a) If you are a gamer, buy Nvidia, because if you don't, you will go back to Windows for sure.
I see that talking with you is just a waste of my time, because your arguments are simply ridiculous...