Intel Xeon Ice Lake vs. AMD EPYC Milan Server Performance, Efficiency & Value In 2023

Written by Michael Larabel in Processors on 8 June 2023 at 05:00 PM EDT.
TensorFlow benchmark with settings of Device: CPU, Batch Size: 256, Model: ResNet-50. Xeon Gold 6338 2P was the fastest.

If deploying new servers for AI, it's obviously worthwhile considering AMD Genoa for AVX-512 and Intel Sapphire Rapids for AMX, among other benefits of those latest-generation processor designs. But when the choice is between AMD Milan and Intel Ice Lake, Ice Lake holds a clear advantage for software able to make effective use of AVX-512, such as TensorFlow.
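
As a rough illustration of what this TensorFlow test measures, below is a minimal Python sketch of timing ResNet-50 inference on the CPU at batch size 256 with stock tf.keras. It is illustrative only: the numbers in these charts come from the Phoronix Test Suite TensorFlow benchmark, and whether AVX-512 actually gets exploited depends on the TensorFlow/oneDNN build in use.

    import time
    import tensorflow as tf

    tf.config.set_visible_devices([], "GPU")  # keep the run on the CPU

    # Random weights are fine for a pure throughput measurement.
    model = tf.keras.applications.ResNet50(weights=None)
    batch = tf.random.uniform((256, 224, 224, 3))

    model.predict(batch, verbose=0)  # warm-up pass
    start = time.perf_counter()
    model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"~{256 / elapsed:.1f} images/sec")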

TensorFlow benchmark with settings of Device: CPU, Batch Size: 512, Model: ResNet-50. Xeon Gold 6338 2P was the fastest.

TensorFlow is pretty much a clear Intel sweep in this comparison until getting to the AMD 4th Gen EPYC (Genoa) results.

Neural Magic DeepSparse benchmark with settings of Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream. EPYC 7513 2P was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream. Xeon Gold 6346 2P was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased, Scenario: Asynchronous Multi-Stream. Xeon Gold 6338 2P was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Asynchronous Multi-Stream. Xeon Gold 6338 2P was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: CV Classification, ResNet-50 ImageNet, Scenario: Asynchronous Multi-Stream. Xeon Gold 6338 2P was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Asynchronous Multi-Stream. EPYC 7513 2P was the fastest.

But in other AI software like Neural Magic's DeepSparse, it's a much more competitive race between AMD Milan and Intel Ice Lake. There are still some areas, though, where Ice Lake is able to deliver much better performance.
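
For reference, DeepSparse workloads like those charted here are typically driven through its Python pipeline API. The following is a minimal sketch assuming a placeholder ONNX model path (the real runs use SparseZoo models such as oBERT base uncased on IMDB) and the text-classification task; it is not the exact harness behind these results.

    import time
    from deepsparse import Pipeline

    # Placeholder path: substitute a SparseZoo stub or a local ONNX model.
    MODEL = "path/to/obert-imdb.onnx"

    pipeline = Pipeline.create(task="text-classification", model_path=MODEL)

    texts = ["An unexpectedly good movie."] * 64
    start = time.perf_counter()
    pipeline(sequences=texts)
    elapsed = time.perf_counter() - start
    print(f"~{len(texts) / elapsed:.1f} sequences/sec")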

OpenVINO benchmark with settings of Model: Face Detection FP16, Device: CPU. Xeon Gold 6338 2P was the fastest.
OpenVINO benchmark with settings of Model: Person Detection FP16, Device: CPU. Xeon Gold 6338 2P was the fastest.
OpenVINO benchmark with settings of Model: Vehicle Detection FP16, Device: CPU. EPYC 7513 2P was the fastest.
OpenVINO benchmark with settings of Model: Face Detection FP16-INT8, Device: CPU. Xeon Gold 6338 2P was the fastest.
OpenVINO benchmark with settings of Model: Weld Porosity Detection FP16, Device: CPU. Xeon Gold 6338 2P was the fastest.

OpenVINO is another AI workload where AVX-512 (or AMX with Sapphire Rapids) can be very important for maximizing performance.
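
The OpenVINO results above use an asynchronous multi-stream style of execution, where several inference requests are kept in flight at once. Below is a hedged sketch of that idea with the OpenVINO Python runtime and an AsyncInferQueue; the model path is a placeholder for an Open Model Zoo IR file, and the actual figures come from the Phoronix Test Suite OpenVINO benchmark rather than this snippet.

    import time
    import numpy as np
    from openvino.runtime import Core, AsyncInferQueue

    core = Core()
    # Placeholder IR model; the charts use Open Model Zoo detection models.
    compiled = core.compile_model("face-detection.xml", "CPU")

    shape = [int(d) for d in compiled.input(0).shape]
    dummy = np.random.rand(*shape).astype(np.float32)

    queue = AsyncInferQueue(compiled, 8)  # eight infer requests in flight
    iterations = 200

    start = time.perf_counter()
    for _ in range(iterations):
        queue.start_async({0: dummy})
    queue.wait_all()
    elapsed = time.perf_counter() - start
    print(f"~{iterations / elapsed:.1f} inferences/sec")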

