Upgrading SQL Server – Top Five Intel Xeon Scalable Processors for SQL Server Usage

On July 11, 2017, Intel formally released its new Xeon Scalable Processor Family for data center usage. This new processor family has a somewhat ridiculous 58 different SKUs, broken up into Platinum 8xxx, Gold 6xxx, Gold 5xxx, Silver 4xxx, and Bronze 3xxx sub-categories.

New two-socket server models (which are required for these processors), such as the Dell PowerEdge R740, let you choose from 41 of these 58 available processors, which is also somewhat overwhelming. Since Microsoft charges the same SQL Server core license price regardless of the performance of each processor core, you need to pick your processor with that fact in mind in order to minimize your licensing costs.

Intel Xeon Bronze and Silver Processors

There are two choices in the Intel Xeon Bronze line, and six choices in the Intel Xeon Silver line. For SQL Server usage, you can and SHOULD ignore those eight processor SKUs from the Bronze and Silver lines. The Bronze processors have low base clock speeds, no Turbo Boost, no Hyper-Threading and low memory speeds. The Silver processors have low base clock speeds and low memory speeds.

Intel Xeon Gold and Platinum Processors

There are 22 different choices between the Intel Xeon Gold 5xxx and 6xxx lines, and 11 different choices in the Intel Xeon Platinum line. Out of these 33 different choices, I think five of them are the “best” choices for SQL Server usage, as measured by their single-threaded CPU performance. Even though these processors are more expensive than the Bronze and Silver lines, the extra hardware cost is a small fraction of the overall system cost, including SQL Server core licenses.

Top Five Intel Xeon Scalable Processors for SQL Server Usage

Here is my top five list, which is ranked by base clock speed and L3 cache size per physical core. Picking a processor from this list is going to give you the best performance for each one of your relatively expensive SQL Server 2016 Enterprise Edition core licenses, which cost $7,128.00 each.

  1. Intel Xeon Gold 5122 (4 cores)
  2. Intel Xeon Gold 6144 (8 cores)
  3. Intel Xeon Gold 6128 (6 cores)
  4. Intel Xeon Gold 6146 (12 cores)
  5. Intel Xeon Gold 6154 (18 cores)

Figure 1 shows my “preferred” processor models at each physical core count, along with some of their specifications and their price. Note that I have not included the “M” suffix processor models (which offer double the memory support for a $3,000.00 premium). Depending on your workload, you may want those for SQL Server usage, but I wanted to simplify the table somewhat.

You should also notice the huge premium that Intel charges for the highest core count SKUs, and think about whether that makes sense in your situation. If you can split your workload across multiple servers, you will be much better off, from both a performance and a cost perspective, with multiple servers using lower core count processors. Some choices are relatively bad (such as the Platinum 8170) due to their low base clock speeds.


Figure 1: Preferred Intel Skylake-SP Processors for SQL Server Usage

Figure 2 shows some comparative system metrics for a two-socket server with the given processor. These calculations are based on a single actual TPC-E benchmark score for a Lenovo ThinkSystem SR650 server using a pre-release version of SQL Server 2017. I suspect that we will see better results with the release version of SQL Server 2017.

Looking at these numbers, the Raw TPC-E Score is a measurement of the CPU capacity of the system, while Score/Core shows the single-threaded CPU performance for that particular processor. The License cost column shows how much it would cost to purchase SQL Server 2016 Enterprise Edition core licenses for that system.

The first thing that jumps out at me is that the Platinum 8170 is a pretty bad choice compared to the Platinum 8168 or Gold 6154. Selecting the Gold 6154 instead of the Platinum 8170 would save you $114,048.00 in SQL Server license costs and $7,724.00 in hardware costs, with roughly the same capacity and much better single-threaded performance (which will be noticeable to your users).

Another good choice would be two database servers with the Gold 6146 instead of one database server with the Platinum 8180. This would save you $57,024.00 in SQL Server license costs and $6,874.00 in CPU costs (which would pay for a good chunk of the second server). Two servers would have more CPU capacity, twice the memory capacity, twice the number of PCIe 3.0 lanes, and much better single-threaded performance.


Figure 2: Comparative Two-Socket System Metrics for Intel Skylake-SP Processors

 

Additional Resources

My new Pluralsight course, SQL Server: Upgrading and Migrating to SQL Server 2016, has just been published. This is my eleventh course for Pluralsight, and the complete list of my courses is here.

Building on this online course is a new three-day class, IEUpgrade: Immersion Event on Upgrading SQL Server, taught by Tim Radney and me. The first round of this class will be taught in Chicago from October 11-13, 2017.

Finally, I will be presenting a half-day session called Migrating to SQL Server 2017 at the PASS Summit 2017 in Seattle, WA, from October 31 to November 3, 2017.

Here is a link to the complete series about upgrading SQL Server.

Upgrading SQL Server – Windows Server 2016 Licensing Issues

If you are planning a complete data platform upgrade, you should be planning on using Windows Server 2016 as your operating system (assuming you are planning on using Windows rather than Linux). Windows Server 2016 has a number of advantages over older versions of Windows Server, including higher licensed memory limits, better networking and clustering support, and a longer remaining support lifetime (mainstream support runs until January 11, 2022). Windows Server 2012 and Windows Server 2012 R2 will fall out of mainstream support on October 9, 2018, which is not that far away.

One potential issue with Windows Server 2016 is the fact that Microsoft is now using a core-based licensing system for that product. Microsoft requires a minimum of eight core licenses per processor and sixteen core licenses per server. This could potentially be a point of confusion and extra SQL Server licensing costs for some organizations.

Here is the scenario. Imagine that you have done your sizing analysis and calculations for SQL Server 2017 usage, and you have decided that a new two-socket server with two, quad-core Intel Xeon Gold 5122 processors will best suit your performance needs and budget requirements. That particular processor costs $1,221.00 each.

Shon Smith, your well-meaning server administrator, is aware of the new Windows Server 2016 licensing model, and he wants this new database server to actually have the sixteen physical cores that he was forced to buy Windows Server 2016 core licenses for. He also wants to minimize the hardware cost of this new server. Because of this, he decides to buy a two-socket server with two, eight-core Intel Xeon Bronze 3106 processors (which only cost $306.00 each). Shon has maximized his investment in those sixteen required Windows Server 2016 core licenses, and he has saved $1,830.00 on the processor hardware. But what has he done to you, the DBA?

First, this new database server now will require sixteen SQL Server 2017 core licenses instead of only eight. For SQL Server 2017 Standard Edition, this means a $14,872.00 increase in your SQL Server license costs, while SQL Server 2017 Enterprise Edition would cost $57,024.00 extra due to this hardware change. But you have twice as many physical cores, so that must be better, right?

Actually, not at all in this case. The Intel Xeon Bronze 3106 has a base clock speed of only 1.7GHz, with no Turbo Boost, no Hyper-Threading, and an 11MB L3 cache. The Intel Xeon Gold 5122 that you specified has a base clock speed of 3.6GHz, a Turbo Boost speed of 3.7GHz, along with Hyper-Threading and a 16.5MB L3 cache.

The system you originally specified would have more than double the single-threaded CPU performance compared to Shon’s modified system. Your original system would also have more total CPU capacity than Shon’s modified system. The Intel Xeon Bronze 3106 is a terrible choice for SQL Server usage!

Keep in mind that most SQL Server instances only require Windows Server 2016 Standard Edition (rather than Datacenter Edition). Those sixteen required core licenses for Windows Server 2016 Standard Edition only cost $882.00 total, not each. This means that you should not let that required minimum Windows Server 2016 core license count sway your processor choice for SQL Server.

This exercise assumes that SQL Server 2017 licenses have the same pricing model as SQL Server 2016 licenses. It also uses a particularly bad processor choice by Shon, to illustrate the point. Using two eight-core Intel Xeon Gold 6144 processors would have given you much better performance and scalability (at a higher hardware cost) for that extra SQL Server license cost.

 


Upgrading SQL Server – SQL Server Support Dates

One valid argument for considering a SQL Server upgrade sooner rather than later is the fact that the version(s) of SQL Server you are using in Production may be out of mainstream or even extended support. But what does that mean exactly?

It does NOT mean that out-of-support versions suddenly stop working. It does mean that they won’t receive any more regular servicing updates in the form of Service Packs or Cumulative Updates. You may also have issues where third-party applications are no longer supported on out-of-support versions of SQL Server.

To be fair, you can also run into the opposite issue, where third-party applications are not supported on newer versions of SQL Server. Figure 1 shows the relevant support dates for SQL Server 2000 through SQL Server 2016.


Figure 1: SQL Server Support Dates

If your SQL Server version is out of support, it is also likely that the version of Windows that it is running on will be out of support. Figure 2 shows the support end dates for Windows Server 2003 up to Windows Server 2016. Both Windows Server 2012 and Windows Server 2012 R2 will fall out of mainstream support on October 9, 2018, which is not that far away as I write this. From a support perspective, Windows Server 2016 gives you a lot more time in a supported state.


Figure 2: Windows Server Support Dates

Microsoft recently announced a new program called Premium Assurance, which lets you add six more years of product support to 2008 or newer versions of SQL Server or Windows Server, for an escalating annual cost (based on when you purchase it). The annual cost is a percentage of the original total license cost, which went up to 7% on August 1, 2017, and will go up to 9% on July 1, 2018. This program is an add-on to Software Assurance. It is a pretty expensive option that I would prefer not to use, for both economic and technical reasons, unless I had no other choice.

 


Upgrading SQL Server – Database Compatibility Levels

One important factor to consider during an upgrade and migration effort is your database compatibility levels, which control certain database behaviors so that they are compatible with a particular version of SQL Server. Unlike the database file version (which gets changed when you restore a database backup on a newer version of SQL Server or attach an older database to a newer version), the database compatibility level can be changed back and forth between different values.

Here is an example of how to set the database compatibility level in T-SQL:

ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120;
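
Here is a quick way to check the current database compatibility level for every database on an instance (including model), which is worth running both before and after a migration:

SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;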

Figure 1 shows the supported database compatibility levels by major SQL Server version.


Figure 1: Supported Database Compatibility Levels By Version

By default, new user databases that are created in an instance of SQL Server are set to the compatibility level of that version of SQL Server (so a new user database in SQL Server 2017 would have a database compatibility level of 140). The exception would be if the model database has a lower database compatibility level. Upgrading a database with a database compatibility level lower than the lowest allowed level sets the database compatibility level to the lowest level allowed for that version (as shown in Figure 1).

The classic reason to use an older database compatibility level after an upgrade to a newer version of SQL Server is as an interim migration aid, to work around application compatibility issues that surface after a migration (and that you hopefully discovered during your pre-migration testing efforts). After further testing and possible database/application changes, it is pretty common to eventually change the database compatibility level to match the database engine version of the instance, which lets you take advantage of the new features and behavior differences in that version.

With SQL Server 2014 and newer, the database compatibility level also controls whether the new cardinality estimator or the legacy cardinality estimator is used. The new cardinality estimator gives equivalent or better query performance for most queries, but it can sometimes cause very significant performance regressions for others. It is very important to do as much testing as possible before you change the compatibility level to 120 or higher. My post Upgrading SQL Server – Diagnosing Performance Regressions goes into much more detail about this issue.

 


Upgrading SQL Server – Replacing Slow Hardware Part 2

Previously, we talked about the single-threaded performance, CPU capacity and hardware cost difference between an extremely poor server model and processor SKU choice and a much better server model and processor SKU choice from about three and a half years ago. In that case, the bad choice only had about 34% of the single-threaded CPU performance and about 34% of the CPU capacity of the good choice processor SKU.

There was a $746.00 increase in the cost of the processor itself, and perhaps a $500.00 difference in the cost of a base Dell PowerEdge R620 compared to a base Dell PowerEdge R320 server. The SQL Server license cost would have been identical in either case, based on the number of physical cores.

Since the legacy hardware choice was so poor, it is even easier to get a significant increase in single-threaded CPU performance and overall CPU capacity by purchasing a brand new model server, using the best available processor at the same physical core count. A new model server will also have higher memory density and capacity and more PCIe 3.0 lanes available. It will also have a number of manageability advantages.

Dell’s 14th generation servers support the new 14nm Intel Xeon Scalable (Skylake-SP) processors, which have anywhere from four to 28 physical cores. In our case, we want to replace a legacy Dell PowerEdge R320 server that had one Intel Xeon E5-2407 v2 processor, with four physical cores, a base clock speed of 2.4GHz, no Turbo Boost, no Hyper-Threading, a relatively small 10MB L3 cache, a slow 6.4 GT/s QPI speed, three channels of DDR3-1066 memory support, and 24 PCIe 3.0 lanes.

The best quad-core Intel Xeon Scalable processor SKU for a two-socket server is the Intel Xeon Gold 5122 processor, which has a base clock speed of 3.6GHz, a Turbo clock speed of 3.7GHz, Hyper-Threading, 16.5MB of L3 cache, six channels of DDR4-2666 memory support, and 48 PCIe 3.0 lanes, and a price of $1,221.00.

A bad choice at the same physical core count is the Intel Xeon Silver 4112 processor, which has a base clock speed of just 2.6GHz, a Turbo clock speed of only 3.0GHz, Hyper-Threading, 8.25MB of L3 cache, six channels of slower DDR4-2400 memory support, 48 PCIe 3.0 lanes, and a price of $473.00. Saving $748.00 by getting the cheaper processor is foolish, false economy for SQL Server usage.

A good server for a Xeon Gold 5122 processor is the new Dell PowerEdge R740 two-socket server. Even though this is a two-socket server, you can choose to only populate one socket to start with (which is what we will do here). One trade-off when you only use one processor in a two-socket server is that you cut the overall memory capacity of the server in half (until you add another processor later).

The main point here is that you should do this type of analysis when looking at new servers and selecting a particular processor for SQL Server 2016/2017 usage. You should definitely avoid the Intel Xeon Silver and Bronze lines for SQL Server usage, since you give up so much performance and scalability for a relatively small hardware cost reduction.

 


Upgrading SQL Server – Replacing Slow Hardware

One unfortunately common scenario is a situation where someone made a very poor processor choice sometime in the past for a database server that runs a mission critical workload. When I do health checks for SQLskills, I encounter both good processor choices and some pretty bad processor choices.

One recent example was a 22nm, quad-core Intel Xeon E5-2407 v2 (Ivy Bridge-EN) processor being used in a Dell PowerEdge R320 one-socket server. This processor was launched in Q1 of 2014. The EN suffix means lower base clock speeds, smaller L3 caches, and lower QPI speeds compared to the EP suffix processors, which is not a good thing for SQL Server usage. If you want to know how to decode Intel Xeon processor numbers, Intel explains it here.

This particular processor has a base clock speed of 2.4GHz, no Turbo Boost, no Hyper-Threading, a relatively small 10MB L3 cache, and a slow 6.4 GT/s QPI speed. This processor was an extremely bad choice for SQL Server usage, and it would have been a prime candidate for my laptop comparison method that I discussed here. It did have the virtue of only costing $250.00 for the processor though!

Since this processor has four physical cores, it would cost $1,858.00 per core for SQL Server 2016 Standard Edition (for a total of $7,432.00), and it would cost $7,128.00 per core for SQL Server 2016 Enterprise Edition (for a total of $28,512.00). This means you could be paying anywhere from 30X to 114X your processor cost for the SQL Server 2016 licenses, depending on which edition of SQL Server 2016 you purchased.

A couple of things can be said in defense of the person who made this choice. First, the single-socket Dell PowerEdge R320 only supported Intel Xeon E5-2400 v2 family processors, and the 2.4GHz Intel Xeon E5-2407 v2 was at least a better choice than the even slower 1.8GHz Intel Xeon E5-2403 v2 they could have picked! The primary motivation here was probably to pick a low-cost server, with a low core count processor, to minimize their hardware and SQL Server licensing costs.

Given that sort of goal, a much better choice at the time would have been a two-socket Dell PowerEdge R620 server with just one quad-core Intel Xeon E5-2637 v2 processor. The E5-2637 v2 has a base clock speed of 3.5GHz, a 3.8GHz Turbo Boost speed, Hyper-Threading, a larger 15MB L3 cache, and a faster 8.0 GT/s QPI speed. It did cost a little more for the processor, at a price of $996.00. A base two-socket PowerEdge R620 would also cost a little more than a base one-socket PowerEdge R320, but the overall difference in hardware cost would be pretty negligible compared to the SQL Server licensing costs.

Using an analysis based on TPC-E scores, I estimate one Intel Xeon E5-2407 v2 would have a system CPU capacity of 192.16 with a score/core of 48.04. A single Xeon E5-2637 v2 would have a system capacity of 559.77, with a score per core of 139.94. That is quite a difference!

 


Upgrading SQL Server – Compelling Upgrade Arguments

Over the course of my career, I can’t remember too many people coming to me and initiating the conversation about upgrading to a new version of SQL Server, a new version of Windows Server, or a new set of server hardware and storage. As a DBA, Data Architect, or Consultant, I have generally had to take the lead, and make the business and technical case for an upgrade effort. Unfortunately, this is not always an easy effort, since you are often going against institutional inertia, budget limitations, and many other common objections.

One proven way to start this process is to honestly assess your biggest issues in your current Production environment. Perhaps you have multiple servers or SQL Server instances that are underutilized and might be good candidates for consolidation. Perhaps you have multiple instances that are running on hardware and storage that are both out of warranty. Another possibility is persistent and worsening performance and scalability issues that are negatively affecting the business and also causing an increased workload for your I.T. staff.

Whatever the main issues are, you also need to ask yourself whether you have done as much as you can to alleviate them with your current resources. Have you done all you can to maintain, properly configure, and tune your environment, from top to bottom? Doing your job as a confident and effective DBA will usually improve the situation to a certain extent, perhaps enough to postpone the immediate necessity for an upgrade. This helps your organization in the short term and also builds your credibility.

It is also possible that, despite your best maintenance and tuning efforts, you may still be in a situation where an upgrade (hopefully a complete platform upgrade) is the best long-term answer. Pushing for an upgrade after you have gained knowledge and credibility by doing everything you can to improve your current environment is much easier than if a platform upgrade is the first thing you ask for!

Using Your Laptop or Workstation for a Hardware Comparison

One tactic that I have used on more than one occasion to help build the case for a hardware upgrade is to run a couple of quick, standardized synthetic CPU/memory benchmarks, such as Geekbench, Cinebench, or CPU-Z, on both my laptop or workstation and a Production database server. Of course, you want to do this in an intelligent and safe way. You should have an idea (from previous analysis and research) of how fast your workstation is (for both single-threaded and multi-threaded performance) compared to the database server in question. You would also want to run tests like this (if you actually run them) on a server that is not serving a Production workload at the time you run the benchmarks.

What I mean by “if you actually run them” is that most of the well-known benchmarks have online databases where you can browse the results from other systems. Another source of comparative benchmark information is server-oriented hardware review sites such as ServeTheHome and StorageReview. Using sources like these, you may be able to avoid actually running these benchmarks on your Production server.

Depending on the age and speed of your workstation and age and speed of your legacy database server, you may find that your workstation is significantly faster than your legacy database server for single-threaded CPU performance. You may even find that it has a higher multi-threaded score (which represents capacity) than your legacy database server.

Once you are sure of your facts (and have done as much other research and analysis as possible to back them up), you can choose a strategic time and method to disclose this information to your management. This might be in an email or in a meeting; it depends on the situation and the people involved.

For example, after reviewing the extensive efforts you and your team have made to improve performance on your current environment, you could casually mention that according to CPU-Z and Geekbench, your laptop (point at it) has 73% higher single-threaded CPU performance than your current Production database server. Often, this will get an initial laugh, but if you have all of your evidence and analysis ready to present, this can actually be a compelling point that can plant the seed in your boss’s mind.

Another general approach is to be persistent. Quite often, despite your best efforts and preparation, your management team will not approve your upgrade plan. I have found that if you are determined, and continue to improve the quality of your arguments over time, you will usually prevail in the end.

 


Upgrading SQL Server – Mitigating Performance Regressions

As I discussed yesterday, it is sometimes possible to experience performance regressions after upgrading to a new version of SQL Server, even when it is running on a new, presumably faster database server. This is much less likely to happen if you have done a good job of analysis, planning, testing, benchmarking, and post-migration configuration activity, but it is still possible.

Hardware Selection and Configuration

Depending on the age and hardware specifications of your new database server compared to your old database server, it is definitely possible for a new database server to be significantly slower (in terms of single-threaded CPU performance) than your old server was. For example, let’s say that your old database server had two eight-core Intel Xeon E5-2667 v2 processors, while your new database server has two eight-core Intel Xeon Silver 4108 processors. Even though the Xeon Silver is three generations newer than your old Xeon E5 v2, there is a huge base clock speed difference between the two specific processors in question. The Xeon E5-2667 v2 has a base clock speed of 3.3 GHz, while the Xeon Silver 4108 has a base clock speed of 1.8 GHz, so even with the generational improvements, the newer processor is not going to be nearly as fast for single-threaded performance. That particular new processor was a very bad choice for SQL Server, but if you have been reading my blog, I don’t think you would make such a poor choice!

From a configuration perspective, you want to make sure that you have the appropriate BIOS settings on your new database server. You want to make sure that power management is set to OS control or disabled, and you want to make sure that NUMA has not been disabled. You also want to make sure that Turbo Boost is enabled and that Hyper-Threading is enabled (except in some types of reporting/DW workloads).

If your server is a virtualization host, you want to make sure that VT-x and VT-d are enabled so that you can take advantage of the virtualization support built into your processor. Speaking of that, newer generation Intel processors have much better virtualization performance compared to legacy processors, which is another reason to upgrade your hardware. You also want to make sure that your hypervisor is using the High Performance power policy.

You need to make sure that Windows Server is using the High Performance Power Plan rather than the default Balanced Power Plan. After doing all of this, you should run CPU-Z to confirm how fast your processors are running and whether Turbo Boost is being used or not. While you are at it, you can use the CPU benchmark feature in CPU-Z to run a quick CPU test on your old database server compared to your new database server.

SQL Server Configuration and Performance

After validating your hardware configuration and performance, you will need to start looking at how your new SQL Server instance is configured and how it is performing. Fortunately, SQL Server 2016 and newer do a much better job of instance configuration during the installation process, so there are fewer things that need to be changed after a typical installation compared to older versions of SQL Server.

Microsoft has a great document called “Recommended updates and configuration options for SQL Server 2012 and SQL Server 2014 with high-performance workloads” that is full of useful settings to consider and perhaps change based on your workload. Even though Microsoft claims that SQL Server 2016 doesn’t really need as much manual configuration as previous versions, looking at that document is still a useful exercise.
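
As a minimal sketch (the values shown here are placeholders rather than recommendations, since the right values depend entirely on your workload and hardware), these are the kinds of instance-level settings you would typically review with sp_configure after a migration:

-- Placeholder values only; choose these based on your own workload and hardware
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'cost threshold for parallelism', 50;
EXEC sys.sp_configure 'max degree of parallelism', 8;
EXEC sys.sp_configure 'max server memory (MB)', 112640;
RECONFIGURE;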

Microsoft also has a very useful Post-migration Validation and Optimization Guide which is a pretty good starting point for diagnosing and alleviating post-migration performance issues.

 

Additional Resources

My new Pluralsight course, SQL Server: Upgrading and Migrating to SQL Server 2016 has just been published. This is my eleventh course for Pluralsight, but the complete list of my courses is here.

Building on this online course is a new three day class, IEUpgrade: Immersion Event on Upgrading SQL Server, taught by myself and Tim Radney. The first round of this course will be taught in Chicago from October 11-13, 2017.

Finally, I will be presenting a half-day session called Migrating to SQL Server 2017 at the PASS Summit 2017 in Seattle, WA from October 31- November 3, 2017.

Here is a link to the complete series about upgrading SQL Server.

Upgrading SQL Server – Diagnosing Performance Regressions

One fairly common issue you might encounter when upgrading to SQL Server 2014 or later is performance regressions with some queries. SQL Server 2014 introduced a new cardinality estimator (CE) that is used by the query optimizer by default when you change the compatibility level of a database to 120 or higher.

Most of the time, the new CE works well, and gives better query performance than the legacy CE. Unfortunately, the new CE sometimes leads to dramatically worse query performance for some queries. This particular issue has been the most frequent problem I have experienced and heard of with upgrades to SQL Server 2014 and newer over the past several years. Fortunately, there are a number of things you can do to prevent being surprised by this when you go into production.

First, you need to do a thorough job of testing query performance on the new version of SQL Server, with both cardinality estimators. This should start with manual testing of your more important stored procedures and queries, capturing their execution plans and I/O statistics. Then, during any functional or load testing, you should be collecting and analyzing diagnostic information using my DMV Diagnostic queries.
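
For the manual testing, a simple pattern is to capture the actual execution plan along with I/O and timing statistics while you run each important query under both the old and new compatibility levels (the stored procedure name below is just a placeholder):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Placeholder name; run each of your important stored procedures or queries here
EXEC dbo.usp_YourImportantProcedure;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;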

Ideally, you should be doing additional testing and analysis with the new Microsoft Database Experimentation Assistant (DEA). Database Experimentation Assistant is a new A/B testing solution for SQL Server upgrades that will assist in evaluating a targeted version of SQL Server for a given workload. It can be used to compare performance between SQL Server 2005 and any later version of SQL Server.

If you do run into performance regressions with the new version of SQL Server, the very first thing you should do is read Joe Sack’s seminal whitepaper “Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator” to understand how the new CE works, and what mitigation options were available with SQL Server 2014.

These include changing the compatibility level to 110 for a particular database to get the legacy CE for the entire database, or using query-level hints to force the legacy or new CE for a particular query, regardless of the database compatibility level. You can also use an instance-level trace flag to use either CE for all databases.
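
For example, on SQL Server 2014 you can force the legacy CE for a single query with a QUERYTRACEON query hint, or for the entire instance with a global trace flag (the query below is just an illustration against the AdventureWorks sample database):

-- Force the legacy cardinality estimator for just this query (trace flag 9481)
SELECT COUNT(*)
FROM Sales.SalesOrderHeader AS soh
INNER JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID
OPTION (QUERYTRACEON 9481);

-- Force the legacy cardinality estimator for all databases on the instance
DBCC TRACEON (9481, -1);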

SQL Server 2016 lets you use database-scoped configurations to change which CE will be used for each database, regardless of the database compatibility level. This gives you a lot more flexibility. SQL Server 2016 also has Query Store, which is very helpful for finding and correcting query plan regressions.
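
As a quick sketch, the SQL Server 2016 options look like this (the database-scoped configuration statement must be run in the context of the database in question, and the database name is just the same example used earlier):

-- Use the legacy CE for the current database, regardless of its compatibility level
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Enable Query Store to help find and correct query plan regressions
ALTER DATABASE AdventureWorks2014 SET QUERY_STORE = ON;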

SQL Server 2017 adds automatic plan correction, which leverages Query Store to try to force the last known good plan for query plans that have performance regressions.
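
Enabling automatic plan correction in SQL Server 2017 looks something like this (Query Store must already be enabled for the database, and the database name is again just an example):

-- Automatically force the last known good plan after a plan performance regression
ALTER DATABASE AdventureWorks2014 SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);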

 


Upgrading SQL Server – Delayed Durability

One of the interesting new features that was added in SQL Server 2014 was delayed durability. This feature lets you trade some transaction durability for increased log write performance. This is possible because delayed durable transaction commits are asynchronous and report a commit as successful before the log records for the transaction are flushed and actually written to the disk.
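
Delayed durability is controlled at the database level, and individual transactions can then opt in; here is a minimal sketch, reusing the same sample database name from earlier in this series:

-- Allow delayed durability for this database (use FORCED to apply it to all transactions)
ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = ALLOWED;

-- Opt in to delayed durability for a particular transaction
BEGIN TRANSACTION;
    -- ... do some work here ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);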

The price you pay for this possible performance boost is the possibility of some data loss, as explained in more detail here by Paul Randal.

The main reason you might want to try this feature is if your workload and infrastructure actually have a performance bottleneck writing data to your transaction log file (which is not that common). This will be characterized by high WRITELOG and LOGBUFFER wait types.
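
One quick way to see whether log write waits are actually significant on your instance is to look at the cumulative wait statistics (accumulated since the last restart or wait statistics clear):

-- Check cumulative log write-related waits for the instance
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'WRITELOG', N'LOGBUFFER')
ORDER BY wait_time_ms DESC;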

The first thing I would check, and correct if necessary, would be the virtual log file (VLF) count for the database. Having an extremely high VLF count can hurt write performance to the log file. Setting your autogrowth increment to an appropriately large size will reduce the number of new VLFs that get created as the log file grows.
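
You can check the VLF count with DBCC LOGINFO on older versions, or with the sys.dm_db_log_info DMF on SQL Server 2016 SP2/SQL Server 2017 and newer; the database name below is just the same example used earlier:

-- One row per VLF, so COUNT(*) gives the VLF count (SQL Server 2016 SP2 and newer)
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'AdventureWorks2014'));

-- On older versions, DBCC LOGINFO returns one row per VLF for the current database
DBCC LOGINFO;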

Other things I would consider trying before I used delayed durability would be to make sure I understood the I/O performance characteristics of the LUN where my transaction log file was located, using a synthetic storage benchmark tool such as Microsoft DiskSpd. If necessary, I would explore whether it is possible to do anything to improve the write performance of that LUN.

If I had multiple databases that all had their transaction log files on the same disk, I would want to make sure the LUN where they were located had very good random write performance. It makes a lot of difference whether you have a small number of magnetic drives using a parity-based RAID level (such as RAID 5), or high performance flash-based storage.

Parity-based RAID levels have an unavoidable write penalty, since they have to write the parity information after they write the actual striped data. Hardware RAID controllers with fast multi-core processors and a large hardware cache dedicated to writes can help mask this problem.

Keep in mind that not all flash storage is created equal. There are major performance differences between traditional SATA AHCI drives and newer PCIe NVMe drives, and also between read-optimized and write-optimized devices. Another important factor is the capacity of the flash device you are using. Larger capacity devices usually have much better write performance than smaller capacity devices from the same manufacturer and product line.

Aaron Bertrand wrote a great post about delayed durability, including some good performance benchmarks, here. Personally, I have seen major performance gains with delayed durability when running in an Azure VM that has relatively poor storage performance. You cannot use delayed durability if you are using transactional replication, change tracking, or change data capture (CDC).

 
