Intel Xeon E7 Processor Generational Performance Comparison

Intel has a fairly recent document titled Accelerated Operations for Telecom and Financial Services, which is also listed under Accelerate OLTP Database Performance with Intel TSX. It describes the “performance” increases seen with the AsiaInfo ADB when moving from 2.8GHz Intel Xeon E7-4890 v2 (Ivy Bridge-EX), to 2.5GHz Intel Xeon E7-8890 v3 (Haswell-EX), and finally to 2.2GHz Intel Xeon E7-8890 v4 (Broadwell-EX) processors, as shown in Figure 1.

 

Figure 1: Speedup from Successive Processor Generations

 

This workload is described as “AsiaInfo ADB Database OCS k-tpmC”, while the AsiaInfo ADB itself is described as “a scalable OLTP database that targets high performance and mission critical businesses such as online charge service (OCS) in the telecom industry”, which runs on Linux.

The reason I put “performance” in quotes above is that what they are really measuring is closer to what I would call capacity or scalability. Their topline result is “Thousands of Transactions per Minute” (k-tpmC), as measured with these different hardware and storage configurations.

The key point to keep in mind with these types of benchmarks is whether they are actually comparing reasonably comparable systems. In this case, the systems are quite similar, except for the core counts of the successive processor models (and DDR3 vs. DDR4 memory support). Here are the system components, as listed in the footnotes of the document:

Baseline: Four-sockets, 15-core Intel Xeon E7-4890 v2, 256GB DDR3/1333 DIMM, Intel DC S3700 SATA for OS, (2) 2TB Intel DC P3700 PCIe NVMe for storage, 10GbE Intel X540-AT2 NIC

Next Generation: Four-sockets, 18-core Intel Xeon E7-8890 v3, 256GB DDR4/1600 LVDIMM, Intel DC S3700 SATA for OS, (2) 2TB Intel DC P3700 PCIe NVMe for storage, 10GbE Intel X540-AT2 NIC

New: Four-sockets, 24-core Intel Xeon E7-8890 v4, 256GB DDR4/1600 LVDIMM, Intel DC S3700 SATA for OS, (2) 2TB Intel DC P3700 PCIe NVMe for storage, 10GbE Intel X540-AT2 NIC

The baseline system has a total of 60 physical cores, running at 2.8GHz, using the older Ivy Bridge-EX microarchitecture. The next generation system has a total of 72 physical cores, running at 2.5GHz, using the slightly newer Haswell-EX microarchitecture. Finally, the new system has a total of 96 physical cores, running at 2.2GHz, using the current Broadwell-EX microarchitecture. These differences in core counts, base clock speeds, and microarchitecture make it a little harder to interpret their benchmark results in a meaningful way.

Table 1 shows some relevant metrics for these three system configurations. The older generation processors have fewer cores, but run at a higher base clock speed. The newer generation processors would be faster than the older generation processors at the same clock speed, but the base clock speed is lower as the core counts have increased with each successive generation flagship processor. The improvements in IPC and single-threaded performance are obscured by lower base clock speeds as the core counts increase, which makes the final score increase less impressive.

 

Processor          Base Clock   Total System Cores   Raw Score (k-tpmC)   Score/Core
Xeon E7-4890 v2    2.8GHz       60                   725                  12.08
Xeon E7-8890 v3    2.5GHz       72                   1021                 14.18
Xeon E7-8890 v4    2.2GHz       96                   1294                 13.48

Table 1: Analysis of ADB Benchmark Results
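
To make that clock speed versus IPC tradeoff concrete, here is a minimal Python sketch (my own illustration using the figures from Table 1, not code from the Intel document) that recomputes the score-per-core values and the per-core change relative to the Ivy Bridge-EX baseline:

```python
# ADB k-tpmC scores and total core counts taken from Table 1
systems = [
    ("Xeon E7-4890 v2 (Ivy Bridge-EX)", 2.8, 60, 725),
    ("Xeon E7-8890 v3 (Haswell-EX)",    2.5, 72, 1021),
    ("Xeon E7-8890 v4 (Broadwell-EX)",  2.2, 96, 1294),
]

baseline_per_core = systems[0][3] / systems[0][2]  # 725 k-tpmC across 60 cores

for name, base_ghz, cores, score in systems:
    per_core = score / cores
    gain_vs_baseline = (per_core / baseline_per_core - 1) * 100
    print(f"{name}: {per_core:.2f} k-tpmC per core "
          f"({gain_vs_baseline:+.1f}% vs. baseline, at {base_ghz}GHz base clock)")
```

On a per-core basis, the Haswell-EX system is about 17% faster than the Ivy Bridge-EX baseline and the Broadwell-EX system is about 12% faster, even though their base clocks are roughly 11% and 21% lower, which is exactly the IPC improvement that the raw topline scores tend to hide.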

 

Table 2 shows some metrics from an analysis of actual and estimated TPC-E benchmark results for those same three system configurations, plus one additional processor choice that I added. The results follow a pretty similar pattern, which supports the idea that both of these benchmarks are CPU-limited. From a SQL Server 2016 licensing perspective, you are going to be better off in terms of performance per license dollar if you purposely choose a lower core count, “frequency-optimized” processor (at the cost of less total system capacity per host).

This is somewhat harder to do with the Intel Xeon E7 v4 family, because of your limited SKU choices. A good processor choice for many workloads would be the 10-core Intel Xeon E7-8891 v4 processor, which has a base clock speed of 2.8GHz and a 60MB L3 cache that is shared by only 10 cores.

If you could spread your workload across two database servers, you would be much better off with two four-socket servers using the 10-core Xeon E7-8891 v4 rather than one four-socket server with the 24-core Xeon E7-8890 v4. You would have more total system processor capacity, roughly 27% better single-threaded CPU performance, twice the total system memory capacity, and twice the total number of PCIe 3.0 expansion slots. You would also only need 80 SQL Server 2016 Enterprise Edition core licenses rather than 96, which would save you about $114K in license costs. That license savings would probably pay for both database servers, depending on their exact configuration.
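
As a rough sketch of that arithmetic (assuming SQL Server 2016 Enterprise Edition core licenses at roughly $7,128 per core, which is where the approximately $114K figure comes from; the numbers below are illustrative, not a quote), here is the two-server versus one-server comparison in Python:

```python
# Hypothetical comparison: two 4-socket servers with the 10-core E7-8891 v4
# versus one 4-socket server with the 24-core E7-8890 v4
EE_COST_PER_CORE = 7128  # approximate SQL Server 2016 EE license cost per core (USD)

scale_out = {"servers": 2, "sockets": 4, "cores_per_cpu": 10, "base_ghz": 2.8}
scale_up  = {"servers": 1, "sockets": 4, "cores_per_cpu": 24, "base_ghz": 2.2}

def summarize(cfg):
    cores = cfg["servers"] * cfg["sockets"] * cfg["cores_per_cpu"]
    aggregate_ghz = cores * cfg["base_ghz"]      # crude capacity proxy
    license_cost = cores * EE_COST_PER_CORE
    return cores, aggregate_ghz, license_cost

out_cores, out_ghz, out_cost = summarize(scale_out)
up_cores, up_ghz, up_cost = summarize(scale_up)

print(f"Two E7-8891 v4 servers: {out_cores} cores, {out_ghz:.1f} aggregate GHz, ${out_cost:,} in licenses")
print(f"One E7-8890 v4 server:  {up_cores} cores, {up_ghz:.1f} aggregate GHz, ${up_cost:,} in licenses")
print(f"Single-threaded base clock advantage: {2.8 / 2.2 - 1:.0%}")
print(f"License cost savings: ${up_cost - out_cost:,}")
```

Since both SKUs are Broadwell-EX, aggregate base GHz is a reasonable first-order proxy for total processor capacity here, and the roughly $114K license savings falls straight out of the 16-core difference.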

 

Processor          Base Clock   Total System Cores   Est. TPC-E Score   Score/Core
Xeon E7-4890 v2    2.8GHz       60                   5576.27            92.94
Xeon E7-8890 v3    2.5GHz       72                   6964.75            96.73
Xeon E7-8890 v4    2.2GHz       96                   9068.00            94.46
Xeon E7-8891 v4    2.8GHz       40                   4808.79            120.22

Table 2: Analysis of Estimated TPC-E Benchmark Results

 

The Intel document also discusses the “performance” increases seen from moving from Intel DC S3700 SATA drives to Intel DC P3700 PCIe NVMe drives. That improvement comes primarily from being connected directly to the PCIe bus, and from the lower latency and overhead of the NVMe protocol compared to the older AHCI protocol.

Finally, they talk about the “performance” increases they measured from enabling the Intel Transactional Synchronization Extensions (TSX) and Intel AVX 2.0 instruction set extensions on current-generation Intel Xeon E7-8800 v4 series processors.

SQL Server 2016 already takes advantage of older SSE/AVX instructions, as discussed here and here. I really hope that Microsoft decides to add support for even more of the newer instruction sets (such as TSX) in SQL Server vNext.

 

 

Scaling SQL Server 2012 Pre-Conference Session

In just a couple of weeks, I will be giving a full-day pre-conference session at the SQLPASS 2013 Conference in Charlotte, North Carolina. My pre-conference session will be on Monday, October 14, 2013. This pre-con is going to be a lot of fun, and it will be a great way to kick off the SQLPASS 2013 Conference.

What you’ll learn

You’re a DBA, database developer, or system admin who must maintain a database server that is not performing and scaling well. You are not sure where the main scalability problems are or what you can do to solve them. The thought of picking out the best server and storage subsystem without making an expensive mistake makes you more than a little bit nervous.

This pre-conference session will cover the following topics and more:

  • Top scalability issues with SQL Server 2012
  • How you can postpone the scaling decision by finding and removing bottlenecks
  • How to use my SQL Server Diagnostic Information Queries to pinpoint performance issues
  • How to select and size your hardware and storage subsystem for maximum scalability
  • How to select hardware to get the best performance while minimizing your SQL Server 2012 licensing costs
  • How to use the scaling features built into SQL Server 2012 and 2014
  • How to scale up SQL Server 2012
  • How to use engineering techniques to scale out SQL Server 2012

Here is the full abstract:

Scaling SQL Server 2012

SQL Server implementations can quickly evolve and become more complex, forcing DBAs and developers to think about how they can scale their solution quickly and effectively. Scaling up is relatively easy (but can be expensive), while scaling out requires significant engineering time and effort. If you suggest hardware upgrades, you may be accused of simply “throwing hardware at the problem”, and if you try to scale out, you may be thwarted by a lack of development resources or third-party software restrictions. As your database server nears its load capacity, what can you do? This session gives you concrete, practical advice on how to deal with this situation. Starting with your present workload, configuration, and hardware, we will explore how to find and alleviate bottlenecks, whether they are workload related, configuration related, or hardware related. Next, we will cover how you can decide whether you should scale up or scale out your data tier. Once that decision is made, you will learn how to scale up properly, with nearly zero downtime. If you decide to scale out, you will learn about practical, production-ready techniques such as vertical partitioning, horizontal partitioning, and data-dependent routing. We will also cover how to use middle-tier caching and other application techniques to increase your overall scalability.

How much does it cost?

When you register for the PASS Summit, my “Scaling SQL Server” pre-conference session is just $395.00. If you’ve already registered for the PASS 2013 Summit, email Shannon.Cunningham@sqlpass.org to take advantage of this opportunity.