Intel Sapphire Rapids Performance With Google Cloud Compute Engine C3

Written by Michael Larabel in Processors on 27 March 2023 at 12:00 PM EDT. Page 7 of 7.
GROMACS benchmark with settings of Implementation: MPI CPU, Input: water_GMX50_bare. c3-highcpu-8 SPR was the fastest.
MariaDB benchmark with settings of Clients: 4096. c3-highcpu-8 SPR was the fastest.
PostgreSQL benchmark with settings of Scaling Factor: 100, Clients: 800, Mode: Read Only. c3-highcpu-8 SPR was the fastest.
PostgreSQL benchmark with settings of Scaling Factor: 100, Clients: 800, Mode: Read Only, Average Latency. c3-highcpu-8 SPR was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream. c3-highcpu-8 SPR was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Asynchronous Multi-Stream. c3-highcpu-8 SPR was the fastest.
Neural Magic DeepSparse benchmark with settings of Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Asynchronous Multi-Stream. c3-highcpu-8 SPR was the fastest.
Google Draco benchmark with settings of Model: Church Facade. c3-highcpu-8 SPR was the fastest.
nginx benchmark with settings of Connections: 1000. c3-highcpu-8 SPR was the fastest.
nginx benchmark with settings of Connections: 4000. c3-highcpu-8 SPR was the fastest.
BRL-CAD benchmark with settings of VGR Performance Metric. c3-highcpu-8 SPR was the fastest.
OpenCV benchmark with settings of Test: Core. c3-highcpu-8 SPR was the fastest.
OpenCV benchmark with settings of Test: Stitching. c3-highcpu-8 SPR was the fastest.
OpenCV benchmark with settings of Test: Object Detection. c3-highcpu-8 SPR was the fastest.

GROMACS, MariaDB, PostgreSQL, DeepSparse, Nginx, OpenCV, BRL-CAD, and more all showed very significant uplift with the Google Cloud C3 virtual machine. Across the board, the Google Cloud C3 VM with Sapphire Rapids delivered great performance speed-ups while still offering the best value among the Google Cloud instances tested.

Geometric Mean Of All Test Results benchmark with settings of Result Composite, Google Cloud c3 Sapphire Rapids. c3-highcpu-8 SPR was the fastest.

Shown above is the geometric mean across the span of 103 benchmarks. The c3-highcpu-8 Sapphire Rapids virtual machine was 47% faster than the next fastest 8 vCPU instance tested: the c2-standard-8 with Cascade Lake. The n2-standard and n2-highcpu 8 vCPU instances performed about the same in this set of benchmarks, which puts the new Sapphire Rapids compute instance at more than 60% faster than either. Not bad when comparing Google Cloud instance types while sticking to the same vCPU count throughout. From a hardware perspective, keep in mind that Intel's Ice Lake sits between Cascade Lake and Sapphire Rapids, though that generation doesn't see as much exposure among these Google Cloud instance types.
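For reference on how such a composite is formed, below is a minimal sketch of taking a geometric mean over per-benchmark results, with each test first normalized against a baseline instance so that higher-is-better and lower-is-better metrics can be combined. The instance names mirror those tested here, but the numbers are purely illustrative placeholders, not measurements from this article.

```python
from math import prod

# Illustrative placeholder scores only (NOT actual results from this article).
# Each benchmark's result is normalized against a chosen baseline instance,
# inverting lower-is-better metrics such as latency so bigger is always better.
raw_results = {
    "gromacs_ns_per_day":     {"c3-highcpu-8": 1.30, "c2-standard-8": 0.90, "n2-standard-8": 0.80},
    "nginx_requests_per_sec": {"c3-highcpu-8": 1.55, "c2-standard-8": 1.00, "n2-standard-8": 0.95},
    "pgbench_avg_latency_ms": {"c3-highcpu-8": 0.70, "c2-standard-8": 1.00, "n2-standard-8": 1.10},
}
lower_is_better = {"pgbench_avg_latency_ms"}
baseline = "n2-standard-8"

def normalized(test: str, instance: str) -> float:
    """Return the instance's score relative to the baseline, always higher-is-better."""
    value, base = raw_results[test][instance], raw_results[test][baseline]
    return base / value if test in lower_is_better else value / base

def geomean(values: list[float]) -> float:
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

for instance in ("c3-highcpu-8", "c2-standard-8", "n2-standard-8"):
    composite = geomean([normalized(test, instance) for test in raw_results])
    print(f"{instance}: {composite:.2f}x relative to {baseline}")
```

In the article itself this "Result Composite" is generated automatically by the Phoronix Test Suite / OpenBenchmarking.org tooling; the point of the sketch is simply that a geometric mean rewards consistent wins across many tests rather than a few outliers.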

In any event, for Google Cloud customers the new C3 series with Sapphire Rapids delivers terrific vCPU-for-vCPU gains over prior generations while also offering better value at the current hourly pricing. It will also be interesting to see how and when AMD EPYC "Genoa" processors are deployed in Google Cloud.
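As for the value angle, performance-per-dollar here simply means dividing the composite performance figure by the instance's hourly on-demand rate. A minimal sketch, assuming hypothetical prices and the illustrative composite figures from the snippet above (real rates vary by region and over time and should be taken from Google Cloud's pricing page):

```python
# Hypothetical hourly on-demand rates (USD) for illustration only; real prices
# vary by region and over time and are published on Google Cloud's pricing page.
hourly_price = {"c3-highcpu-8": 0.35, "c2-standard-8": 0.33, "n2-standard-8": 0.31}

# Composite relative performance, e.g. the geometric-mean figures computed above
# (again illustrative placeholders, not results from this article).
relative_perf = {"c3-highcpu-8": 1.60, "c2-standard-8": 1.09, "n2-standard-8": 1.00}

for instance, perf in relative_perf.items():
    value = perf / hourly_price[instance]  # relative performance per dollar-hour
    print(f"{instance}: {value:.2f} perf per $/hour")
```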
