NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening

  • NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening

    Phoronix: NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening

    One week after the launch of the Radeon RX 480, NVIDIA is lifting the lid this morning on the GeForce GTX 1060 Pascal graphics card, priced from $249 USD while delivering GeForce GTX 980 class performance. I have already been testing the GeForce GTX 1060 under Ubuntu Linux, but unfortunately that embargo doesn't expire today... But here's the run-down of all the technical details on the GTX 1060.


  • #2
    Awesome. Glad Nvidia got you a review sample before the review embargo lifts.

    I'm extremely happy with my custom GTX-1080 under Arch Linux and I look forward to your review of the 1060. I may grab a custom version for a secondary system.



    • #3
      Wouldn't be surprised if this thing can beat a 980 Ti once overclocked, given that low TDP. Also good that they went with 6GB of memory and not 4.



      • #4
        Originally posted by atomsymbol
        I am wondering why Nvidia can generally achieve equivalent performance with less ALU units compared to AMD.

        Is it because Nvidia has a better hardware architecture or is it because Nvidia has a better DirectX/OpenGL compiler?
        Usually the performance difference is related to the drivers.

        Nvidia's drivers are not as spec-compliant as AMD's, but a big share of developers use Nvidia, so it's a "works on my machine" situation, similar to WebKit-only websites that rely on its quirks and render incorrectly on Firefox...

        IIRC, I've read somewhere that Nvidia puts a lot of effort into the driver to optimize shader performance, replacing individual instructions in a shader or even swapping the whole shader for a customized version, like when a new AAA game is released and a new driver version "optimized" for that game comes out.
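
        For anyone curious what that per-game replacement looks like in principle, here is a minimal C sketch of an application-profile lookup. Every name in it (the executable, the shader hash, the struct) is invented for illustration and does not correspond to any real driver's internals.

        /* Minimal sketch of per-application shader replacement.
         * All names here are hypothetical; real drivers do this
         * internally and in far more sophisticated ways. */
        #include <stdio.h>
        #include <string.h>

        struct shader_override {
            const char *app_name;        /* executable the profile applies to */
            const char *shader_hash;     /* identifies the shader to replace  */
            const char *replacement_src; /* hand-tuned replacement source     */
        };

        static const struct shader_override overrides[] = {
            { "some_aaa_game", "a1b2c3", "/* hand-optimized shader */" },
        };

        /* Return a replacement shader if a profile matches, else the original. */
        static const char *pick_shader(const char *app, const char *hash,
                                       const char *original_src)
        {
            for (size_t i = 0; i < sizeof overrides / sizeof overrides[0]; i++) {
                if (strcmp(overrides[i].app_name, app) == 0 &&
                    strcmp(overrides[i].shader_hash, hash) == 0)
                    return overrides[i].replacement_src;
            }
            return original_src;
        }

        int main(void)
        {
            puts(pick_shader("some_aaa_game", "a1b2c3", "/* original shader */"));
            return 0;
        }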



        • #5
          Originally posted by atomsymbol
          I am wondering why Nvidia can generally achieve equivalent performance with less ALU units compared to AMD.

          Is it because Nvidia has a better hardware architecture or is it because Nvidia has a better DirectX/OpenGL compiler?
          Actually the difference is mostly in naming conventions for what a core is. Nvidia's convention is correct and AMD's isn't. An Nvidia "core" is approximately equal to an AMD "compute unit".

          AMD's definition of a core is most definitely wrong.

          EDIT: An Nvidia core is an in-order scalar pipeline architecture, an AMD compute unit is an out-of-order scalar pipeline architecture. AMD's architecture certainly has greater potential to scale, but Nvidia's is simpler and easier to program and optimize, and probably has much less latency.

          Last edited by duby229; 07 July 2016, 09:51 AM.



          • #6
            Well I put my money where my mouth is and as I said before I went and bought a RX 480. AMD are much more open-source friendly and have been making very good progress lately. I have a GTX 780 for work related reasons, and while I don't even need a new graphics card, I still bought the RX 480, because I would like to vote with my wallet. That is what I value, and that's why I buy their products.

            Of course if your only concern is gaming, then pick whatever card falls in your budget and performs the best for the games you play.

            Still excited for the results, as I like comparing architectures and their strengths and weaknesses!

            Originally posted by duby229 View Post

            Actually the difference is mostly in naming conventions for what a core is. Nvidia's convention is correct and AMD's isn't. An Nvidia "core" is approximately equal to an AMD "compute unit".

            AMD's definition of a core is most definitely wrong.

            EDIT: An Nvidia core is an in-order scalar pipeline architecture, an AMD compute unit is an out-of-order scalar pipeline architecture. AMD's architecture certainly has greater potential to scale, but Nvidia's is simpler and easier to program and optimize, and probably has much less latency.
            I would argue that both definitions are "wrong", because by their logic I have a 68-core CPU (4 real cores, plus 512/32 = 16 vector "cores" each from 512-bit AVX).
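
            To make that lane-counting point concrete, here is a small C sketch (my own illustration, assuming an AVX-512 capable CPU and a compiler flag such as -mavx512f): a single instruction operates on 16 single-precision floats, yet all 16 lanes are issued by one core's front end.

            /* One AVX-512 instruction adds 16 single-precision floats at once
             * (512 bits / 32 bits per float = 16 lanes), but it is still issued
             * by a single core's front end -- counting lanes as "cores" is the
             * point being disputed above. Build with e.g. gcc -mavx512f. */
            #include <immintrin.h>
            #include <stdio.h>

            int main(void)
            {
                float a[16], b[16], out[16];
                for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 1.0f; }

                __m512 va = _mm512_loadu_ps(a);
                __m512 vb = _mm512_loadu_ps(b);
                __m512 vc = _mm512_add_ps(va, vb);   /* 16 additions, one instruction */
                _mm512_storeu_ps(out, vc);

                for (int i = 0; i < 16; i++)
                    printf("%.0f ", out[i]);
                printf("\n");
                return 0;
            }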
            Last edited by Oguz286; 07 July 2016, 09:59 AM. Reason: Explanation of the "16" cores



            • #7
              Originally posted by Oguz286 View Post
              Well I put my money where my mouth is and as I said before I went and bought a RX 480. AMD are much more open-source friendly and have been making very good progress lately. I have a GTX 780 for work related reasons, and while I don't even need a new graphics card, I still bought the RX 480, because I would like to vote with my wallet. That is what I value, and that's why I buy their products.

              Of course if your only concern is gaming, then pick whatever card falls in your budget and performs the best for the games you play.

              Still excited for the results, as I like comparing architectures and their strengths and weaknesses!



              I would argue that both definitions are "wrong", because by their logic I have a 68-core CPU (4 real cores, plus 512/32 = 16 vector "cores" each from 512-bit AVX).
              I'm not sure what you mean. The RX 480, for example, has 36 compute units, each of which has 64 stream processors. Here's a diagram that shows the gist.


              AMD is calling the stream processors cores, but that's not true. From the diagram it's plainly obvious that it is the compute unit as a whole which implements the front end. Individually the stream processors have no way to fetch or decode or schedule loads. It takes a compute unit to be a core.
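
              If it helps to see that counting in numbers, here's a tiny C sketch. The RX 480 figures (36 CUs x 64 stream processors) come from this post; the GTX 1060 figures (10 SMs x 128 CUDA cores) are my own assumption about the comparable NVIDIA grouping.

              /* Rough count of scalar ALU lanes per vendor grouping.
               * RX 480: 36 CUs x 64 stream processors (from the post above).
               * GTX 1060: 10 SMs x 128 CUDA cores (assumed figures). */
              #include <stdio.h>

              int main(void)
              {
                  int rx480_cus = 36, sp_per_cu = 64;       /* front end lives at the CU */
                  int gtx1060_sms = 10, cores_per_sm = 128; /* front end lives at the SM */

                  printf("RX 480:   %d CUs x %d SPs        = %d ALU lanes\n",
                         rx480_cus, sp_per_cu, rx480_cus * sp_per_cu);
                  printf("GTX 1060: %d SMs x %d CUDA cores = %d ALU lanes\n",
                         gtx1060_sms, cores_per_sm, gtx1060_sms * cores_per_sm);
                  return 0;
              }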



              • #8
                Originally posted by Oguz286 View Post
                Well I put my money where my mouth is and as I said before I went and bought a RX 480. AMD are much more open-source friendly and have been making very good progress lately. I have a GTX 780 for work related reasons, and while I don't even need a new graphics card, I still bought the RX 480, because I would like to vote with my wallet. That is what I value, and that's why I buy their products.

                Of course if your only concern is gaming, then pick whatever card falls in your budget and performs the best for the games you play.

                Still excited for the results, as I like comparing architectures and their strengths and weaknesses!



                I would argue that both definitions are "wrong", because by their logic I have a 68-core CPU (4 real cores, plus 512/32 = 16 vector "cores" each from 512-bit AVX).
                I wrote a reply, but it's in the mod queue.

                EDIT: Basically the gist was that for AMD architectures it takes a compute unit to be a core.


                Take a good look at this diagram; you can clearly see how the front end, fetch, and decode sit at the compute unit level. The stream processors by themselves aren't capable of doing anything. The logic needed to function exists at the compute unit level, which means the compute unit is the core.
                Last edited by duby229; 07 July 2016, 10:29 AM.



                • #9
                  So this will be RX 480 vs GTX 1060. At last a real battle; the winner will get my money.

                  Fight!



                  • #10
                    It seems the GTX 1060 is more power- and price-efficient than the RX 480, due to the memory cut (1-2GB less) and the lack of SLI support.

                    What is the price of the 3GB model? Hopefully that isn't 2.5+0.5 GB.

