How AMD EPYC & Intel Xeon Gold Compare To Various Amazon EC2 Cloud Instances


  • How AMD EPYC & Intel Xeon Gold Compare To Various Amazon EC2 Cloud Instances

    Phoronix: How AMD EPYC & Intel Xeon Gold Compare To Various Amazon EC2 Cloud Instances

    Last week we began with our EPYC 7601 Linux benchmarking of this high-end AMD server CPU featuring 32 cores / 64 threads per socket. Earlier this week were also some 10-year old Opteron vs. EPYC benchmarks and power efficiency tests while the latest in our EPYC Linux testing is seeing how the new AMD processor compares to various Amazon EC2 cloud instances.


  • #2
    Typo:

    Originally posted by phoronix View Post
    Here's a look at pricing on the higher-end m4.16xlarge::
    (duplicate colon)



    • #3
      c3.8xlarge - 32 vCPUs with 60GB of RAM at $1.680 per hour.
      I was reading this as $1,680 per hour.

It's amazing what a difference a dot versus a comma can make. Who is going to pay for that? Are they just plain greedy? ... Something's wrong with my income!
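For the avoidance of doubt, $1.680 in US notation is one dollar and sixty-eight cents per hour. A quick back-of-envelope sketch of what that actually adds up to (the hourly rate is from the article; the 720-hour month is an assumption for an instance left running 24/7):

```python
# c3.8xlarge on-demand rate quoted in the article: $1.680/hour (US notation,
# i.e. $1.68 -- not one thousand six hundred eighty dollars).
price_per_hour = 1.680

# Rough monthly cost, assuming the instance runs around the clock
# for a 30-day month.
hours_per_month = 24 * 30            # 720 hours
monthly_cost = price_per_hour * hours_per_month

print(f"${monthly_cost:,.2f} per month")  # about $1,209.60
```

So the decimal-point reading works out to roughly $1,210 a month, not $1,680 an hour.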



      • #4
It's amazing the variance in some of the Amazon cloud results. The result that stands out most is Rodinia Streamcluster, where the c3.8xlarge outperforms the r4.16xlarge despite having significantly fewer cores and less memory.

This result suggests that neither memory nor core count was the bottleneck on the c3.8xlarge; however, that doesn't explain why the m4.16xlarge results were 25% slower than the r4.16xlarge results.

Is this because the Amazon cloud is shared infrastructure and Amazon over-allocates the CPUs on the physical host across the hosted VMs? If so, I'm not sure I'd be happy running a performance-critical application on infrastructure with uncontrollable 25% swings in underlying performance.



        • #5
I'm curious what the actual hardware is on Amazon's end. Regardless, EPYC seemed to fare pretty well against those systems.


As an afterthought, I think a good approach for potential Amazon server customers would be to set up your project in BOINC. That way, you could potentially get your calculations processed for free. Meanwhile, if you're having a hard time getting your project known, or you have a finite amount of time to get it done, you could pay to use these Amazon servers and run BOINC on them. In the end it becomes a win-win: you could have free processing power, but you'd still have Amazon's servers as a backup.



          • #6
I do believe each class of service has a different underlying CPU/hardware combination, so an older instance type with more (but older) cores, lower IPC, and less memory bandwidth could explain these results.
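The point above can be illustrated with a crude throughput model: aggregate throughput scales roughly with cores × clock × IPC, so a smaller count of newer cores can beat a larger count of older ones. All figures here are illustrative assumptions, not Amazon's published specs:

```python
# Back-of-envelope aggregate throughput estimate: cores x clock x IPC.
# The numbers below are made-up examples, NOT actual EC2 instance specs.
def rough_throughput(cores, ghz, ipc):
    """Very rough relative throughput (billions of instructions/second)."""
    return cores * ghz * ipc

# Hypothetical older-generation instance: more cores, but slower and lower IPC.
older_gen = rough_throughput(cores=40, ghz=2.5, ipc=1.5)   # 150.0

# Hypothetical newer-generation instance: fewer cores, higher clock-for-clock IPC.
newer_gen = rough_throughput(cores=32, ghz=2.3, ipc=3.0)   # 220.8

print(f"older: {older_gen:.1f}, newer: {newer_gen:.1f}")
```

With these example numbers the newer instance wins despite having 20% fewer cores, which is consistent with the variance seen between instance generations.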



            • #7
Once again, it would be helpful if you took the handcuffs off the Xeon system by turning on the same NUMA interleaving that you went out of your way to enable for Threadripper.



              • #8
                Originally posted by schmidtbag View Post
I'm curious what the actual hardware is on Amazon's end. Regardless, EPYC seemed to fare pretty well against those systems.
Amazon lists the details here. /proc/cpuinfo is also reliable on AWS systems. It's interesting to note that since Amazon, Google, and now Microsoft buy CPUs by the truckload, Intel gives them early access to CPUs and has been known to design custom CPUs for them. So a test instance may not be running on a commercially available processor. They also have more and more input into the feature set for next-generation CPUs (it's good to be the king).
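On any Linux guest you can check /proc/cpuinfo yourself. A minimal sketch of pulling out the reported model name (the sample text below is illustrative; the E5-2666 v3 is one of the Intel parts built specifically for AWS):

```python
def cpu_model(cpuinfo_text):
    """Return the first 'model name' value from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("model name"):
            return line.split(":", 1)[1].strip()
    return None

# Illustrative sample; on a real instance read the live file instead:
#   with open("/proc/cpuinfo") as f: print(cpu_model(f.read()))
sample = (
    "processor\t: 0\n"
    "model name\t: Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz\n"
)
print(cpu_model(sample))
```

An AWS-only model string like that one is a quick tell that the instance is on a custom SKU rather than a retail processor.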



                • #9
                  Originally posted by thesandbender View Post
Amazon lists the details here. /proc/cpuinfo is also reliable on AWS systems. It's interesting to note that since Amazon, Google, and now Microsoft buy CPUs by the truckload, Intel gives them early access to CPUs and has been known to design custom CPUs for them. So a test instance may not be running on a commercially available processor. They also have more and more input into the feature set for next-generation CPUs (it's good to be the king).
                  Interesting... thanks for the info.



                  • #10
Michael, did you use shared or dedicated instances in AWS?

