
Thread: AMD's HD7970 is ready for action! The most efficient and fastest card on earth.

  1. #51
    Join Date
    Apr 2011
    Posts
    386

    Default

The exact opposite. General computing power only matters for complex general-purpose applications. Simpler calculations (stream processing), like rasterization or ray tracing, only count instruction-set operations.
That's why RISC beats CISC on graphics (FLOP-for-FLOP comparison). That's why AMD's FMA4 beats, in ray tracing, all the CPUs that are better in Office, for example. And finally, you got another thing I said wrong: AMD quotes stream floating-point operations while Nvidia quotes general computing power. So in your terms: GTX 580 = 512 CUDA cores = 700 dp32bit GFLOPS; Radeon 6970 = 384 VLIW4 cores = 300 dp32bit GFLOPS.

  2. #52
    Join Date
    Apr 2011
    Posts
    386

    Default

And the Radeon 7000 has 512 scalar4 cores = 400-800 dp32bit GFLOPS, depending on the new scalar architecture.

  3. #53
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,153

    Default

    Quote Originally Posted by efikkan View Post
    I'm tired of fanatics.

    Please tell me what's wrong with the nVidia drivers?
    Let's see:

    1. Executable stack opening up security holes.
    2. Known memory management issues opening up yet more security holes.
    3. They don't support multiple monitors with different rotations.
    4. They don't support KMS.
    5. Instead of xrandr, they require a proprietary tool to change resolutions.
    6. Setting up multiple desktops requires an X-server restart.

    Is that enough?

  4. #54
    Join Date
    Jan 2011
    Posts
    100

    Default

    Quote Originally Posted by BlackStar View Post
    Let's see:

    1. Executable stack opening up security holes.
    2. Known memory management issues opening up yet more security holes.
    3. They don't support multiple monitors with different rotations.
    4. They don't support KMS.
    5. Instead of xrandr, they require a proprietary tool to change resolutions.
    6. Setting up multiple desktops requires an X-server restart.

    Is that enough?
Friends of mine really hate the driver because it's unstable.

  5. #55
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by artivision View Post
The exact opposite. General computing power only matters for complex general-purpose applications. Simpler calculations (stream processing), like rasterization or ray tracing, only count instruction-set operations.
That's why RISC beats CISC on graphics (FLOP-for-FLOP comparison). That's why AMD's FMA4 beats, in ray tracing, all the CPUs that are better in Office, for example. And finally, you got another thing I said wrong: AMD quotes stream floating-point operations while Nvidia quotes general computing power. So in your terms: GTX 580 = 512 CUDA cores = 700 dp32bit GFLOPS; Radeon 6970 = 384 VLIW4 cores = 300 dp32bit GFLOPS.
    Quote Originally Posted by artivision View Post
And the Radeon 7000 has 512 scalar4 cores = 400-800 dp32bit GFLOPS, depending on the new scalar architecture.
Just for the record, you are wrong about the raw calculating power:
(AMD) 1 TFLOP vs (Nvidia) 0.2 TFLOP, and FMA3/4 and the instruction set don't matter in that question.

Also, there is no SISC CPU architecture; if I search Google, the only result is "SISC - Second Interpreter of Scheme Code".

Do you mean CISC?

    "Thats why Amd-Fma4 beats in ray-tracing all CPUs that are better in Office for example."

Show me the sources for your claim…

    " Amd shows stream floating point operations wile Nvidia shows General calculating power."

Most of the time it's only about how to write the code to get the speed out of a vector SIMD unit. The HD 7970 is similar to an x86 CPU + SSE (vector SIMD), but it's RISC + vector SIMD.

In the end AMD beats all Nvidia solutions in reality. For example, with Bitcoin as a benchmark, AMD is 6 times faster than any Nvidia card.

    "gtx580=512cuda-Cores=700dp32bitGflops. Radeon6970=384vliw4-Cores=300dp32bitGflops."

LOL, you really understand nothing: VLIW 4D really means multiply by 4.

With the right compiler and the right optimizations it's really 4 times the speed of the Nvidia solution; Bitcoin written in OpenCL, for example, does this.

You calculate it like a 1D architecture, but that is simply wrong.
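The "multiply by 4" point above comes down to simple peak-throughput arithmetic: theoretical single-precision peak is usually estimated as ALU lanes × 2 FLOPs per cycle (one fused multiply-add) × clock. A minimal sketch, assuming the commonly cited shader clocks for these cards (~1.544 GHz for the GTX 580, ~0.88 GHz for the HD 6970, whose 384 VLIW4 units expose 384 × 4 = 1536 ALU lanes):

```python
# Theoretical peak single-precision GFLOPS:
#   ALU lanes * 2 FLOPs/cycle (fused multiply-add) * clock in GHz
def peak_sp_gflops(alu_lanes, clock_ghz, flops_per_cycle=2):
    return alu_lanes * flops_per_cycle * clock_ghz

# GTX 580: 512 CUDA cores at ~1.544 GHz shader clock
gtx580 = peak_sp_gflops(512, 1.544)      # ~1581 GFLOPS

# HD 6970: 384 VLIW4 units = 1536 ALU lanes at ~0.88 GHz
hd6970 = peak_sp_gflops(384 * 4, 0.88)   # ~2703 GFLOPS

print(f"GTX 580: {gtx580:.0f} GFLOPS")
print(f"HD 6970: {hd6970:.0f} GFLOPS")
```

Counting the HD 6970 as 384 "1D" cores, as in the quoted post, understates its raw peak by a factor of four; whether real code gets anywhere near that peak depends on how well the compiler can fill all four VLIW slots.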

  6. #56
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

Quote Originally Posted by BlackStar View Post
Let's see:

1. Executable stack opening up security holes.
2. Known memory management issues opening up yet more security holes.
3. They don't support multiple monitors with different rotations.
4. They don't support KMS.
5. Instead of xrandr, they require a proprietary tool to change resolutions.
6. Setting up multiple desktops requires an X-server restart.

Is that enough?

7. No git bisect regression bug hunts on the consumer side.
8. No social cooperation between different companies and consumers/hackers.
9. It's not possible to check the source to make sure there is no back door in the code.
10. No open educational material for students to learn how to write source code.
11. No consumer-side speed optimization by experimenting with different source-code compile options.

I think there are many more.

  7. #57
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,052

    Default

    Quote Originally Posted by artivision View Post
7. Nvidia is the best choice for now. For Linux it's the only choice.
    Very good argument on its own... Not really...

If you don't want to use multiple screens, maybe. But at presentations I mostly see laptops with Nvidia cards struggling to get the correct resolution on the projector via nvidia-settings.
And don't even get me started on rotating screens... First, you can't rotate just one of two screens with TwinView. Second, nvidia-settings can't rotate, and you need an xorg.conf setting to enable xrandr rotation. Then you have a good chance of rendering artifacts after rotation, or even of crashing X just by doing xrandr -o right. Yes, that has happened to me on Ubuntu 10.10.
And I won't even begin on how the nVidia driver fails to report the monitor's correct refresh rate to the X server, giving some range between 50-53 Hz instead.

  8. #58
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

Hell YES, the HD7970 is the OC king: at 1700 MHz this fucking fast card beat all the world records in a single run:

    http://www.hardwareluxx.de/index.php...-gpu-takt.html


  9. #59
    Join Date
    Jun 2010
    Posts
    172

    Default

    Quote Originally Posted by Ansla View Post
    That goes both ways, only a fanatic would like EFI. BIOS may be crap, but EFI is an even bigger crap. Luckily I haven't encountered any EFI motherboard for AMD CPU's yet. Also, AMD is a supporter of coreboot so I hope i will never have to use EFI.
I'm still waiting for boards shipping with Coreboot that my company would consider buying. Replacing the EFI is fine for home use, but not for commercial use.

Keep in mind that AMD is committed to EFI.


    Quote Originally Posted by Ansla View Post
My only experience with nVidia cards was about 6 years ago, but I guess all the problems I had with the blob are still there:
1 latest kernel or X server was never supported until several weeks later (not to mention that using an rc kernel is out of the question)
2 when rebuilding my own kernel I had to remember to rebuild the nVidia module (this is not necessarily a problem with binary drivers, but with out-of-tree drivers)
3 the nVidia kernel module would taint the kernel, making any bug report for a kernel bug worthless
    Then maybe you should check it out before dismissing the driver.
    1 Usually in the next or the following driver release, within a month.
2 On Debian-based distributions you can just use the driver from the PPA or add a custom PPA.
    3 I would disagree.

    Quote Originally Posted by Ansla View Post
    Not even Richard Stallman is against everything proprietary, that kind of fanaticism is only in your head. But, yes, proprietary drivers and EFI are evil.

    If I wanted enterprise quality 3D drivers and didn't care about my freedom I would use Windows, really.
You didn't get my point. Both alternatives have proprietary components, so by your logic both should be evil. There is no way to avoid everything proprietary in the real world.

    Quote Originally Posted by Ansla View Post
    True, if you want latest OpenGL spec the free drivers are useless, but few graphics developers do, latest specs will only become important for most Phoronix readers in a few years when next-next-gen consoles will be launched so they will be widely used in games.
If you do OpenGL development, you should target the OpenGL 3.2+ core profile, which every following version is based on. All modern hardware since the GeForce 8000/Radeon HD 2000 series supports this hardware feature level (SM4), and SM4 differs from older versions. There is no sensible reason to optimize for OpenGL 2.1 today, when every performant piece of hardware is optimized for SM4, and SM3 (or older) will be a serious bottleneck.

Utilities like Blender and GIMP use OpenGL and OpenCL, so proper support is actually important for a large group of people.


    Quote Originally Posted by BlackStar View Post
    Let's see:

    1. Executable stack opening up security holes.
    2. Known memory management issues opening up yet more security holes.
    3. They don't support multiple monitors with different rotations.
    4. They don't support KMS.
    5. Instead of xrandr, they require a proprietary tool to change resolutions.
    6. Setting up multiple desktops requires an X-server restart.

    Is that enough?
1. Sources please.
2. Sources please.
3. Haven't tried it; not a big problem.
4. You can't expect nVidia to support technology they are not invited to participate in. Correct me if I'm wrong, but it's my understanding that KMS is not beneficial for their driver, and therefore not a priority.
5. Wrong. Both the tool and the API are open source. How do I know? Because I've used it myself.
6. Yes, for separate screens.
Please check your "facts"!

  10. #60
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by efikkan View Post
1. Sources please.
2. Sources please.
3. Haven't tried it; not a big problem.
4. You can't expect nVidia to support technology they are not invited to participate in. Correct me if I'm wrong, but it's my understanding that KMS is not beneficial for their driver, and therefore not a priority.
5. Wrong. Both the tool and the API are open source. How do I know? Because I've used it myself.
6. Yes, for separate screens.
Please check your "facts"!
Nice try (I don't believe you). Now answer the next numbers on the list:


7. No git bisect regression bug hunts on the consumer side.
8. No social cooperation between different companies and consumers/hackers.
9. It's not possible to check the source to make sure there is no back door in the code.
10. No open educational material for students to learn how to write source code.
11. No consumer-side speed optimization by experimenting with different source-code compile options.
