
Thread: AMD's HD7970 is ready for action! The most efficient and fastest card on earth.

  1. #71
    Join Date
    Apr 2011
    Posts
    330

    Default

    For the last time: "FLOPSsp ≈ f n 2 (FMA)" is wrong; it does not exist. Correct is "FLOPSsp ≈ f n 2 (FMA2)"; Fermi is "FLOPSsp ≈ f n 3 (FMA3)". They don't write it because the third op is not guaranteed, but with a good driver it is above 90% achievable in games. They actually explained that in the past, and in the G80-GTX200 models they actually counted it: "FLOPSsp ≈ f n 3 (MADD+MUL)". Second: the GTX580's 1.6 TFLOPS SP is in 64-bit dual-issue. Fermi cores are 64-bit; you can't compare them with the 32-bit cores of the Radeon 6970's 2.7 TFLOPS. You must convert Fermi FLOPS to 32-bit if you want to be accurate; that's 2x, tested. Anyway, Fermi is faster in 99% of all benchmarks; I don't think you have an objection here, do you? And in the end, a card with 3 billion transistors @ 1.5 GHz can't lose to a card (same generation) with 2 billion transistors @ 880 MHz. And Kepler, with 3 billion transistors @ 3 GHz, 640 cores, 128-bit quad-issue (probably, not sure), will be 5 times faster than the GTX580. Also don't forget all the Radeon problems on Linux.
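    Terminology aside, both peak numbers being argued over come from the same formula: peak SP FLOPS ≈ shader count × shader clock × 2 (one fused multiply-add counts as 2 FLOPs per cycle). A quick sketch with the commonly quoted specs (512 CUDA cores at a 1544 MHz shader clock for the GTX 580; 1536 stream processors at 880 MHz for the HD 6970):

    ```shell
    # Peak single-precision TFLOPS = shaders x shader clock (MHz) x 2 / 1e6
    peak_tflops() {  # args: shader count, shader clock in MHz
        awk -v n="$1" -v f="$2" 'BEGIN { printf "%.2f\n", n * f * 2 / 1e6 }'
    }

    peak_tflops 512 1544     # GTX 580  -> 1.58
    peak_tflops 1536 880     # HD 6970  -> 2.70
    ```

    Those match the 1.6 and 2.7 TFLOPS figures quoted in the thread; the whole "FMA2/FMA3" dispute is only about whether the per-cycle factor is 2 or 3.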

  2. #72
    Join Date
    Oct 2010
    Posts
    311

    Default

    Quote Originally Posted by Kano View Post
    @Ansla

    W8 boots fine without EFI, but maybe you will need EFI in order to use OEM preactivation. For the W8 logo the system must support Secure Boot. Maybe you didn't notice that there was a 32-bit W8 preview; that will never require EFI, because that's something for 64-bit systems only.
    You don't really think some OEM will release a system without the Windows 8 logo long after its release, do you? They don't target advanced users who would install Windows themselves, even if it were possible.

  3. #73
    Join Date
    Aug 2007
    Posts
    6,628

    Default

    The most problematic part of EFI booting is Mac support. Usually you can add menu entries using efibootmgr, but on Macs that does not work; on standard UEFI systems it does. You just need to learn how to use it for when you exchange the motherboard and your bootloader is not named "/efi/boot/bootx64.efi". You cannot chainload to EFI when you boot via MBR, so if you want grub2 to boot Windows you need to boot Linux via EFI too. os-prober does not find the Windows EFI install, but grub2 itself can. A new 64-bit Kanotix Hellfire will feature a grub2 hybrid-mode ISO image that you can test directly on MBR, UEFI and even Macs. So one ISO for every system...
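    For reference, on a standard (non-Mac) UEFI system, adding and ordering entries with efibootmgr looks roughly like this; the disk, partition number, label, loader path and entry numbers below are only examples and will differ per installation:

    ```shell
    # List current EFI boot entries and the boot order (needs EFI variables mounted)
    efibootmgr -v

    # Create a new entry: disk /dev/sda, ESP on partition 1,
    # loader path relative to the ESP root (backslashes, per the UEFI spec)
    efibootmgr -c -d /dev/sda -p 1 -L "grub2" -l '\EFI\grub\grubx64.efi'

    # Put the new entry first in the boot order (numbers from efibootmgr -v)
    efibootmgr -o 0003,0000
    ```

    This is exactly the part that fails on Macs, which is why the fallback path "\efi\boot\bootx64.efi" matters there.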

  4. #74
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by artivision View Post
    For the last time: "FLOPSsp ≈ f n 2 (FMA)" is wrong; it does not exist. Correct is "FLOPSsp ≈ f n 2 (FMA2)"; Fermi is "FLOPSsp ≈ f n 3 (FMA3)". They don't write it because the third op is not guaranteed, but with a good driver it is above 90% achievable in games. They actually explained that in the past, and in the G80-GTX200 models they actually counted it: "FLOPSsp ≈ f n 3 (MADD+MUL)". Second: the GTX580's 1.6 TFLOPS SP is in 64-bit dual-issue. Fermi cores are 64-bit; you can't compare them with the 32-bit cores of the Radeon 6970's 2.7 TFLOPS. You must convert Fermi FLOPS to 32-bit if you want to be accurate; that's 2x, tested. Anyway, Fermi is faster in 99% of all benchmarks; I don't think you have an objection here, do you? And in the end, a card with 3 billion transistors @ 1.5 GHz can't lose to a card (same generation) with 2 billion transistors @ 880 MHz. And Kepler, with 3 billion transistors @ 3 GHz, 640 cores, 128-bit quad-issue (probably, not sure), will be 5 times faster than the GTX580. Also don't forget all the Radeon problems on Linux.
    The raytracing and Bitcoin benchmarks prove you wrong.

    The Nvidia is not magically faster, and the HD7970 is much faster than the old 6970.

    Nvidia loses in nearly all compute-only benchmarks; they win in graphics benchmarks only because of their tessellation unit, and the tessellation unit of the HD7970 is faster.

    Your claim that 64-bit means double the speed is disproved on the PC: if you benchmark an old Pentium 4 571 in 32-bit and 64-bit mode, the result is that 32-bit is faster.
    Only the AMD CPUs are faster in 64-bit, because they have calculating units that are deactivated in 32-bit mode and activated in 64-bit mode.
    In fact, in most cases 32-bit is faster than 64-bit if you use the same instruction set and the same set of calculating units.

    Your claim that your Nvidia is double speed just because of 64-bit is simply wrong; most of the time it is slower in 64-bit, because 64-bit just eats your memory bandwidth.

  5. #75
    Join Date
    Jun 2010
    Posts
    150

    Default

    Quote Originally Posted by Ansla View Post
    Of course they must be committed to EFI; Windows 8 will not boot without EFI. I'm hoping they will go with coreboot with an EFI payload, and Linux users will be able to replace the payload with a useful one. BTW, does your company want desktops or servers? For servers there is a higher chance of finding boards made specifically for Linux.
    I'm thinking both servers and workstations. For servers there is a tiny hope something will show up, but we are not talking about large quantities of servers here, so custom ordering is out of the question. For workstations, high performance per core is a requirement, so I believe Xeon E5 (LGA2011) is the only choice here, and I don't know of any coreboot plans for those boards.

    Quote Originally Posted by Ansla View Post
    1. That's exactly what I said. For my work computer it may be OK to wait up to a month before upgrading to the latest kernel, I do that anyway, but my home computer is as bleeding-edge as possible.
    2. I don't use a Debian-based distribution, and even if I did, I don't think it would solve my problem; I'm talking about building my own custom kernel (make oldconfig; make -j5; make install; make modules_install), not using the one provided by the distribution or some PPA.
    3. Did you run `dmesg | grep -i taint` to check, or do you just like to disagree? The Debian wiki still mentions this as a problem: http://wiki.debian.org/NvidiaGraphic...ints_kernel.22

    BTW, all the problems I mentioned are design problems that apply to all binary drivers (or even free drivers maintained outside the Linux kernel) and will most likely never be fixed.

    Keeping the kernel untainted is a big enough issue for me. I do use hardware that requires proprietary firmware, and some user-space blobs as well; I try to replace them with free alternatives whenever possible, but proprietary kernel drivers are too much.
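    Checking is straightforward: the kernel exports its taint state as a bitmask in /proc/sys/kernel/tainted, and bit 0 is set once a proprietary (non-GPL) module has been loaded. A minimal sketch (the decode_taint helper name is just for illustration):

    ```shell
    # Decode bit 0 of the kernel taint bitmask: it is set when a proprietary
    # (non-GPL-licensed) module such as nvidia.ko has ever been loaded.
    decode_taint() {
        if [ $(( $1 & 1 )) -ne 0 ]; then
            echo "tainted by a proprietary module"
        else
            echo "not tainted by proprietary modules"
        fi
    }

    # Fall back to 0 so the sketch also runs where /proc is unavailable.
    decode_taint "$(cat /proc/sys/kernel/tainted 2>/dev/null || echo 0)"
    ```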
    I don't think many consider waiting a few weeks for new drivers a problem. nVidia is very quick to release drivers when new hardware or new OpenGL specifications arrive, sometimes even before. Don't forget the beta drivers either.

    When both the official nVidia drivers and the open drivers have proprietary components, they should both be considered a matter of degree, if considered evil at all. It would be wrong to consider one "evil" and the other "good". Having a buggy, unstable, low-performing, power-hungry driver would certainly taint my system.

    BTW, are you expecting DeepColor(30-bit) through DisplayPort on the open drivers anytime soon? SLI support?


    Quote Originally Posted by Ansla View Post
    With OpenGL 3.0 being almost ready and most features required for OpenGL 3.3 except for newer GLSL being implemented, the biggest issue still remaining is OpenCL.
    The essential GLX_ARB_create_context is still missing, as is GLX_ARB_create_context_profile, which is used for context creation for every version since 3.2. This is quite annoying for handling context creation on the open drivers; you'll have to write a fallback.

    While the open drivers are still playing catch-up, struggling with version 3.0, version 4.3 could be just around the corner. A new revision is expected when Kepler arrives. There is also a difference between proof-of-concept support and performant, production-quality support.

  6. #76
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by efikkan View Post
    For workstations high performance per core is a requirement
    Try the Opteron 6204; right now it's the AMD CPU with the highest single-thread performance.
    It's a quad-core at 3.3 GHz with 16 MB of L3 cache and quad-channel (256-bit) RAM per CPU.
    That means a full 64-bit RAM channel per core, and a full 4 MB of L3 cache per core.

    For a workstation you can get a dual-socket mainboard with this kind of CPU.

    But yes, it's expensive: €474.00 per Opteron 6204 CPU.

  7. #77
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by Qaridarium View Post
    Try the Opteron 6204; right now it's the AMD CPU with the highest single-thread performance.
    It's a quad-core at 3.3 GHz with 16 MB of L3 cache and quad-channel (256-bit) RAM per CPU.
    That means a full 64-bit RAM channel per core, and a full 4 MB of L3 cache per core.

    For a workstation you can get a dual-socket mainboard with this kind of CPU.

    But yes, it's expensive: €474.00 per Opteron 6204 CPU.
    I found a benchmark: http://www.spec.org/cpu2006/results/res2011q4/


    Opteron 6204 --> 151 / 8 cores = 18.88 per core
    Opteron 6282 SE --> 355 / 32 cores = 11.09 per core

    That makes the Opteron 6204 about 70% faster single-threaded than the Opteron 6282 SE.

    The AMD FX-8150 scores 78.8 points with 8 cores, i.e. 9.85 per core,

    which makes the Opteron 6204 about 91.6% faster single-threaded.
    Last edited by Qaridarium; 01-01-2012 at 11:08 PM.
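    The per-core numbers above are just the quoted SPEC scores divided by core count, and the speedups are ratios of those per-core scores; a quick check (scores as quoted in the post):

    ```shell
    # per-core score = aggregate SPEC score / number of cores
    per_core() { awk -v s="$1" -v c="$2" 'BEGIN { printf "%.2f\n", s / c }'; }

    per_core 151 8       # Opteron 6204    -> 18.88
    per_core 355 32      # Opteron 6282 SE -> 11.09
    per_core 78.8 8      # FX-8150         -> 9.85

    # single-thread speedup of the 6204, as a percentage
    awk 'BEGIN { printf "+%.1f%%\n", ((151/8)/(355/32) - 1) * 100 }'   # vs 6282 SE -> +70.1%
    awk 'BEGIN { printf "+%.1f%%\n", ((151/8)/(78.8/8) - 1) * 100 }'   # vs FX-8150 -> +91.6%
    ```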

  8. #78
    Join Date
    Aug 2007
    Posts
    6,628

    Default

    Btw. I did some benchmarks: i7-2600 (4 cores, HT, 4 GB RAM) vs 2x Opteron 6180 SE (24 cores, 32 GB RAM), and the i7 kicks ass in most benchmarks. In a short one, like compiling mplayer2, the i7 is twice as fast; in a longer one, running a complete kernel compile in RAM, the Opterons need 17 min, while the i7, lacking the RAM, needs 21 min on a simple 500 GB HD a few years old. The Opterons would win a 7zip benchmark, but not many more real-life benchmarks - or the difference is a joke compared to the price. Btw. I did not compile a minimal kernel, but a full release 3.0.0-15 kernel on 64-bit.

  9. #79
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by Kano View Post
    Btw. I did some benchmarks: i7-2600 (4 cores, HT, 4 GB RAM) vs 2x Opteron 6180 SE (24 cores, 32 GB RAM), and the i7 kicks ass in most benchmarks. In a short one, like compiling mplayer2, the i7 is twice as fast; in a longer one, running a complete kernel compile in RAM, the Opterons need 17 min, while the i7, lacking the RAM, needs 21 min on a simple 500 GB HD a few years old. The Opterons would win a 7zip benchmark, but not many more real-life benchmarks - or the difference is a joke compared to the price. Btw. I did not compile a minimal kernel, but a full release 3.0.0-15 kernel on 64-bit.
    I don't get your point. First of all, the 2010-era Opteron 61xx is not the topic.
    Second, top speed is not the question; there are many, many faster systems.
    And the last joke: price doesn't matter, because he asked about an Intel Xeon workstation and not a cheap supermarket i7-2600 price-breaker.

    Show him a workstation with coreboot (not BIOS/UEFI) with TOP SINGLE-THREADED speed.

    I just answered his question! Your post does not answer it!

  10. #80
    Join Date
    Jun 2010
    Posts
    150

    Default

    It's my understanding that the Xeon E5-1620 would outperform the Opteron 6204 in most single-threaded uses, even considering the slight overhead introduced by EFI. My personal concern with EFI is primarily security and protecting company property. With EFI we might see the first real OS-independent viruses, and with all these "surveillance" features embedded in EFI, I fear it could become a target for economic crime. So for servers I would consider this security very important, and maybe a little less important for workstations, but that's just my opinion. One of the reasons for the requirement of high single-threaded performance is graphics development. With Kepler's increased programmability, the CPU might become less of a bottleneck though...
