SQL Server 2016 SP2 CU9 Released

Update: 10/9/2019

Microsoft has now released SQL Server 2016 SP2 CU10 as a replacement for SP2 CU9 (and they have removed SP2 CU9 from the build list). This happened not because of any problems with the actual SP2 CU9 payload (the binaries), but because of a problem if you tried to uninstall SP2 CU9. Here is the Microsoft statement about the issue:

SQL Server 2016 SP2 CU10 is a replacement for SQL Server 2016 SP2 CU9. CU9 had an uninstall issue that is resolved in the CU10 package. If you previously installed CU9, we recommend that you install CU10.

In my experience, it is not that common to want to (or have to) uninstall a SQL Server CU, but it does occasionally happen. Even so, the uninstall process should work as advertised, and it is unfortunate that this issue wasn't caught before the CU was released.

Original Post:

Microsoft has released SQL Server 2016 SP2 CU9, which is Build 13.0.5470.0. This cumulative update has 21 public hotfixes. Keep in mind that both the SQL Server 2016 RTM and SQL Server 2016 SP1 branches are out of support, so there won’t be any more cumulative updates for those branches.

This is one of the more interesting fixes:

FIX: Poor query performance due to low cardinality estimation in SQL Server 2016 when you use default CE and column is covered by both single and multi column statistics

If you haven’t done it already, you should be making plans to get on the SP2 branch of SQL Server 2016. The Microsoft SQL Server 2016 build versions list is here.




Avoiding SQL Server Upgrade Performance Issues

Introduction

When you upgrade to a modern version of SQL Server, there are some critical things you should do to help avoid any SQL Server performance issues.

SQL Server 2008 and SQL Server 2008 R2 are rapidly approaching the end of Extended support from Microsoft on July 9, 2019. SQL Server 2014 is also falling out of Mainstream support on July 9, 2019. SQL Server 2012 fell out of Mainstream support on July 11, 2017. Because of this, I am seeing an increasing number of organizations that have been migrating to a modern version of SQL Server. I define a modern version of SQL Server as SQL Server 2016 or later.

I see this as a positive development overall, since SQL Server 2016 and newer actually have many useful new features that make them much better products than their predecessors. Migrating to a modern version of SQL Server also usually means using new, faster hardware and storage, running on a current version of Windows Server, which is also very beneficial, as long as you choose your new hardware and storage wisely.

SQL Server Upgrade Performance Issues

Despite all of this, I have seen a decent number of cases where organizations have migrated from a legacy version of SQL Server to a modern version of SQL Server on new hardware and a new operating system, and have then been unpleasantly surprised by SQL Server performance issues once they are in Production. How can these performance regressions occur, and what steps can you take to help prevent them?

The main culprit in most of these performance regressions is a combination of lack of knowledge, planning, and adequate performance testing. Unlike legacy versions of SQL Server, modern versions of SQL Server have several important performance-related configuration options that you need to be aware of, understand, and actually test with your workload. Most people who run into performance regressions have done what I call a “blind migration” where they simply restore their databases from the older version to the new version of SQL Server, with no meaningful testing of the performance effects of these different configuration options.

Key Configuration Options

So, what are these key configuration options that you need to be concerned with from a performance perspective? The most important ones include your database compatibility level, the cardinality estimator version that you are using, your database-scoped configuration options, and which trace flags you are using. Since SQL Server 2014, the database compatibility level affects the default cardinality estimator that the query optimizer will use. Since SQL Server 2016, the database compatibility level also controls other performance-related behavior by default. I have written more about this subject here:

The Importance of Database Compatibility Level in SQL Server

Compatibility Levels and Cardinality Estimation Primer

You have the ability to override many of these database compatibility level-related changes with database scoped configuration options and query hints. There are actually a rather large number of different combinations of settings that you have to think about and test. So, what are you supposed to do?

Microsoft Database Experimentation Assistant

In my ideal scenario, you would use the free Microsoft Database Experimentation Assistant (DEA) to capture a relevant production workload. This involves taking a full production database backup, then capturing a production trace that covers representative high priority workloads. While this is going on, I would run some of my SQL Server Diagnostic Queries to get some baseline metrics from your legacy instance.

Once you have done that, you can restore that backup to your new environment and replay the production trace multiple times there. Each time you do this (which also includes a fresh restore from that original full production database backup), you will use a different combination of these key configuration settings. You have to make the database configuration/property changes after each restore, but before you replay the DEA trace.
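As a rough illustration, here is a hedged T-SQL sketch of that fresh restore step before each replay. The database name and file paths are hypothetical placeholders; the configuration changes you make after each restore are covered in the next section.

USE master;
GO

-- Fresh restore from the original full production backup before each DEA replay
-- (database name and file paths are hypothetical placeholders)
RESTORE DATABASE YourDatabase
FROM DISK = N'D:\SQLBackups\YourDatabase_Full.bak'
WITH MOVE N'YourDatabase' TO N'E:\SQLData\YourDatabase.mdf',
     MOVE N'YourDatabase_log' TO N'F:\SQLLogs\YourDatabase_log.ldf',
     REPLACE, STATS = 10;
GO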

Configuration Combinations to Test

The idea here is to see which combination of these configuration settings yields the best performance with your workload. Here are some relevant, likely combinations:

    • Use the default native database compatibility level of the new version
    • Use the default native database compatibility level of the new version and use the query optimizer hotfixes database-scoped configuration option
    • Use the default native database compatibility level of the new version and use the legacy cardinality estimator database-scoped configuration option
    • Use the default native database compatibility level of the new version and use the legacy cardinality estimator database-scoped configuration option and use the query optimizer hotfixes database-scoped configuration option
    • Use the existing database compatibility level of the old version
    • Use the existing database compatibility level of the old version and use the query optimizer hotfixes database-scoped configuration option

This level of DEA testing may not be practical if you have a large number of databases, but you should really try to do it on your most mission critical databases. Barring that, I would try to do as much testing of your most important stored procedures and queries as possible, using these different configuration settings.
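For example, here is a hedged T-SQL sketch of one of these combinations (the native SQL Server 2016 compatibility level, plus the legacy cardinality estimator and query optimizer hotfixes database-scoped configuration options), using a hypothetical database name:

-- Combination: native compatibility level plus legacy CE plus query optimizer hotfixes
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 130;
GO

USE YourDatabase;
GO

ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;
GO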

Microsoft’s Recommended Upgrade Sequence

Finally, if no adequate testing is possible, you can follow Microsoft's recommended upgrade sequence (in your new production environment, after you go live), which is listed below (with a T-SQL sketch after the list):

    • Upgrade to the latest SQL Server version and keep the source (legacy) database compatibility level
    • Enable Query Store, and let it collect a baseline of your workload
    • Change the database compatibility level to the native level for the new version of SQL Server
    • Use Query Store (and Automatic Plan Correction on SQL Server 2017 Enterprise Edition) to fix performance regressions by forcing the last known good plan
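Here is a hedged T-SQL sketch of that sequence. The database name, compatibility levels, and the query_id/plan_id values for the manual plan forcing step are all hypothetical placeholders:

-- Step 1: after the upgrade, keep the source (legacy) compatibility level, e.g. 110
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 110;

-- Step 2: enable Query Store and let it collect a baseline of your workload
ALTER DATABASE YourDatabase SET QUERY_STORE = ON;
ALTER DATABASE YourDatabase SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Step 3: move to the native compatibility level of the new version, e.g. SQL Server 2017
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 140;

-- Step 4: fix regressions by forcing the last known good plan, either manually in Query Store...
USE YourDatabase;
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;   -- hypothetical ids

-- ...or automatically with Automatic Plan Correction (SQL Server 2017 Enterprise Edition)
ALTER DATABASE YourDatabase SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);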

You also have all of the other new “knobs” of database-scoped configuration options, query-level hints, and trace flags available to you. You may have to do some additional work on some queries with USE HINT query hints. Ideally, you would have done enough testing so that you already have a pretty good idea of the “best” combination of these settings for your workload, but many organizations don’t actually do that.

Keep in mind that for each of the new QP features over the last two versions (Adaptive Query Processing in SQL Server 2017 and Intelligent Query Processing in SQL Server 2019), Microsoft exposes the ability to disable specific behavior at the database scoped configuration or query USE HINT scope. Microsoft generally recommends that if you do find regressions related to a specific feature, you try disabling it at a lower granularity first, so you can still benefit from all of the rest of the improvements you get from the latest database compatibility level.
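For example, here is a hedged sketch of disabling just batch mode memory grant feedback, either database-wide or for a single query. The exact option and hint names vary by version and build, so check sys.database_scoped_configurations and sys.dm_exec_valid_use_hints on your instance; the database and table names below are hypothetical.

-- Database-wide: turn off just this one feature (SQL Server 2019 option name shown)
USE YourDatabase;
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_MEMORY_GRANT_FEEDBACK = OFF;

-- Query-level: disable the same feature for a single query with a USE HINT
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
OPTION (USE HINT ('DISABLE_BATCH_MODE_MEMORY_GRANT_FEEDBACK'));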

Query Tuning Assistant

Microsoft is shipping a new tool called Query Tuning Assistant (QTA) in SSMS 18.0. QTA can guide you through the recommended database compatibility level upgrade process in a wizard-like fashion, collecting the baseline workload in Query Store, bumping up the database compatibility level, and then comparing performance with the post-upgrade workload collection. At the end of this process, if performance regressions are detected, rather than moving back to the previously known good plan, QTA will actually suggest hint-based improvements that can be deployed for individual queries (using plan guides), without necessarily having to move back to the legacy CE. It also gives you some ideas (indirectly) for how you can modify problematic queries that have CE-related regression issues, when you have that option.

The Importance of Database Compatibility Level in SQL Server

Prior to SQL Server 2014, the database compatibility level of your user databases was not typically an important property that you had to be concerned with, at least from a performance perspective. Unlike the database file level (which gets automatically upgraded when you restore or attach a down-level database to an instance running a newer version of SQL Server, and which can never go back to the lower level), the database compatibility level can be changed to any supported level with a simple ALTER DATABASE ... SET COMPATIBILITY_LEVEL = xxx command.

You are not stuck at any particular supported database compatibility level, and you can change the compatibility level back to any supported level that you wish. In many cases, most user databases never had their compatibility levels changed after a migration to a new version of SQL Server. This usually didn’t cause any issues unless you actually needed a new feature that was enabled by the latest database compatibility level.
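For example, here is a quick sketch of checking and changing the compatibility level, using a hypothetical database name and levels:

-- Check the current compatibility level of all databases on the instance
SELECT name, compatibility_level
FROM sys.databases;

-- Change the compatibility level (and change it back) with a simple ALTER DATABASE command
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 140;
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 110;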

With SQL Server 2012 and older, the database compatibility level was mainly used to control whether new features introduced with a particular version of SQL Server were enabled or not and whether non-supported old features were disabled or not. The database compatibility level was also used as a method to maintain better backwards application compatibility with old versions of SQL Server. If you didn’t have time to do full regression testing with the newest compatibility level, you could just use the previous compatibility level until you could test and modify your applications if needed.

Table 1 shows the major versions of SQL Server and their default and supported database compatibility levels.

SQL Server Version      Database Engine Version    Default Compatibility Level    Supported Compatibility Levels
SQL Server 2019         15                         150                            150, 140, 130, 120, 110, 100
SQL Server 2017         14                         140                            140, 130, 120, 110, 100
SQL Server 2016         13                         130                            130, 120, 110, 100
SQL Server 2014         12                         120                            120, 110, 100
SQL Server 2012         11                         110                            110, 100, 90
SQL Server 2008 R2      10.5                       100                            100, 90, 80
SQL Server 2008         10                         100                            100, 90, 80
SQL Server 2005         9                          90                             90, 80
SQL Server 2000         8                          80                             80

Table 1: SQL Server Versions and Supported Compatibility Levels

New Database Creation

When you create a new user database in SQL Server, the database compatibility level will be set to the default compatibility level for that version of SQL Server. So for example, a new user database that is created in SQL Server 2017 will have a database compatibility level of 140. The exception to this is if you have changed the compatibility level of the model system database to a different supported database compatibility level, then a new user database will inherit its database compatibility level from the model database.
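As a quick illustration (with a hypothetical new database name), you can check the compatibility level of model and then see that a newly created database inherits it:

-- The compatibility level of model determines what new user databases will inherit
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'model';

CREATE DATABASE NewUserDatabase;

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'NewUserDatabase';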

Database Restore or Attach

If you restore a full database backup that was taken on an older version of SQL Server to an instance that is running a newer version of SQL Server, then the database compatibility level will stay the same as it was on the older version of SQL Server, unless the old database compatibility level is lower than the minimum supported database compatibility level for the newer version of SQL Server. In that case, the database compatibility level will be changed to the lowest supported version for the newer version of SQL Server.

For example, if you were to restore a SQL Server 2005 database backup to a SQL Server 2017 instance, the database compatibility level for that restored database would be changed to 100. You will get the same behavior if you detach a database from an older version of SQL Server, and then attach it to a newer version of SQL Server.

This general behavior is not new, but what is new and important is what else happens when you change a user database to database compatibility level 120 or newer. These additional changes, which can have a huge impact on performance, don't seem to be widely known and understood in the wider SQL Server community. I still see many database professionals and their organizations doing what I call "blind upgrades", where they go from SQL Server 2012 or older to SQL Server 2014 or newer (especially SQL Server 2016 and SQL Server 2017) without doing any serious performance regression testing to understand how their workload will behave on the new native compatibility level, or whether the additional configuration options that are available will have a positive effect.

Database Compatibility Level 120

This was when the “new” cardinality estimator (CE) was introduced. In many cases, most of your queries ran faster when using the new cardinality estimator, but it was fairly common to run into some queries that had major performance regressions with the new cardinality estimator. Using database compatibility level 120 means that you will be using the “new” CE unless you use an instance-wide trace flag or a query-level query hint to override it.

Joe Sack wrote the classic whitepaper "Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator" back in April of 2014, which explains the background and behavior of this change. If you saw performance regressions on some queries with the new CE, SQL Server 2014 did not have that many options for alleviating the performance issues it caused. Joe's whitepaper covers those options in great detail, but essentially, you were limited to instance-level trace flags or query-level query hints to control which cardinality estimator was used by the query optimizer, unless you wanted to revert back to database compatibility level 110 or lower.
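For reference, here is a hedged sketch of those SQL Server 2014-era options, using a hypothetical table name. Both the global trace flag and QUERYTRACEON approaches require elevated permissions (or a plan guide for the query-level hint):

-- Instance-wide: use the legacy CE for everything (trace flag 9481)
DBCC TRACEON (9481, -1);

-- Query-level: use the legacy CE for just this query
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
OPTION (QUERYTRACEON 9481);

-- Query-level: use the new CE for just this query while the database
-- is at a lower compatibility level (trace flag 2312)
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
OPTION (QUERYTRACEON 2312);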

The reason I put the "new" CE in quotes is that there is no longer a single "new" CE. Each new version of SQL Server since SQL Server 2014 has CE and query optimizer changes tied to the database compatibility level. The newer, more accurate terminology that is relevant on SQL Server 2016 and newer is CE120 for compatibility level 120, CE130 for compatibility level 130, CE140 for compatibility level 140, and CE150 for compatibility level 150.

Database Compatibility Level 130

When you are on SQL Server 2016 or newer, using database compatibility level 130 will use CE130 by default, and will enable a number of other performance-related changes. The effects of global trace flags 1117, 1118, and 2371 are enabled with database compatibility level 130. You will also get the effect of global trace flag 4199 for all query optimizer hotfixes that were released before SQL Server 2016 RTM.

SQL Server 2016 also introduced database scoped configuration options, which give you the ability to control some behaviors that were formerly configured at the instance level, using an ALTER DATABASE SCOPED CONFIGURATION command. The two most relevant database scoped configuration options for this discussion are LEGACY_CARDINALITY_ESTIMATION and QUERY_OPTIMIZER_HOTFIXES.

LEGACY_CARDINALITY_ESTIMATION enables the legacy CE (CE70) regardless of the database compatibility level setting. It is equivalent to trace flag 9481, but it only affects the database in question, not the entire instance. It allows you to set the database compatibility level to 130 in order to get the other functional and performance benefits, but use the legacy CE database-wide (unless overridden by a query-level query hint).

The QUERY_OPTIMIZER_HOTFIXES option is equivalent to trace flag 4199 at the database level. SQL Server 2016 will enable all query optimizer hotfixes that were released before SQL Server 2016 RTM when you use the 130 database compatibility level (without enabling trace flag 4199). If you do enable TF 4199 or enable QUERY_OPTIMIZER_HOTFIXES, you will also get all of the query optimizer hotfixes that were released after SQL Server 2016 RTM.

SQL Server 2016 SP1 also introduced the USE HINT query hints that are easier to use and understand than the older QUERYTRACEON query hints that you had to use in SQL Server 2014 and older. This gives you even more fine-grained control over optimizer behavior that is related to database compatibility level and the version of the cardinality estimator that is being used. You can query sys.dm_exec_valid_use_hints to get a list of valid USE HINT names for the exact build of SQL Server that you are running.
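For example, here is a hedged sketch using a hypothetical table name. Because the list of valid hint names differs by build, query the DMV first:

-- List the USE HINT names that are valid on this build of SQL Server
SELECT name
FROM sys.dm_exec_valid_use_hints;

-- Use the legacy CE for a single query, without trace flags or QUERYTRACEON
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));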

Database Compatibility Level 140

When you are on SQL Server 2017 or newer, using database compatibility level 140 will use CE140 by default. You also get all of the other performance-related changes from 130, plus new ones as detailed here. SQL Server 2017 introduced the new adaptive query processing features, and they are enabled by default when you are using database compatibility level 140. These include batch mode memory grant feedback, batch mode adaptive joins, and interleaved execution.

Database Compatibility Level 150

When you are on SQL Server 2019 or newer, using database compatibility level 150 will use CE150 by default. You also get all of the other performance-related changes from 130 and 140, plus new ones as detailed here. SQL Server 2019 is adding even more performance improvements and behavior changes that are enabled by default when a database is using compatibility level 150. A prime example is scalar UDF inlining, which automatically inlines many scalar UDFs in your user databases. This may be one of the most important performance improvements for some workloads.
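As a rough illustration with a hypothetical scalar UDF and table, a query like the one below is the kind of pattern that scalar UDF inlining can rewrite under compatibility level 150. If needed, you can turn inlining off for the whole database with a database scoped configuration option (this sketch assumes SQL Server 2019 behavior; verify the option name on your build):

USE YourDatabase;
GO

-- A simple scalar UDF of the kind that scalar UDF inlining can fold into the calling query
CREATE FUNCTION dbo.GetOrderTotal (@Quantity int, @UnitPrice money)
RETURNS money
AS
BEGIN
    RETURN @Quantity * @UnitPrice;
END;
GO

-- Under compatibility level 150, this call can be inlined instead of executed row by row
SELECT OrderID, dbo.GetOrderTotal(Quantity, UnitPrice) AS OrderTotal
FROM dbo.OrderDetails;

-- If needed, disable scalar UDF inlining for the whole database
ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = OFF;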

Another example is the intelligent query processing feature, which is a superset of the adaptive query processing feature in SQL Server 2017. New features include table variable deferred compilation, approximate query processing, and batch mode on rowstore.

There are also sixteen new database scoped configuration options (as of CTP 2.2) that give you database-level control of more items that are also affected by trace flags or the database compatibility level. This gives you even more fine-grained control of the higher-level changes that are enabled by default with database compatibility level 150.
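You can see which database scoped configuration options exist on your build, along with their current values, with a simple query. A couple of the newer SQL Server 2019 options are shown being changed below; option names can change between CTPs, so verify them on your build (the database name is a hypothetical placeholder):

-- Run in the context of the user database you care about
USE YourDatabase;

-- List every database scoped configuration option available on this build, with current values
SELECT configuration_id, name, value, value_for_secondary
FROM sys.database_scoped_configurations;

-- Examples of the newer SQL Server 2019 options (verify the names on your build)
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET DEFERRED_COMPILATION_TV = OFF;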

Conclusion

Migrating to a modern version of SQL Server (meaning SQL Server 2016 or newer) is significantly more complicated than it was with legacy versions of SQL Server. Because of the changes associated with the various database compatibility levels and various cardinality estimator versions, it is actually very important to put some thought, planning, and actual testing into what database compatibility level you want to use on the new version of SQL Server that you are migrating your existing databases to.

Microsoft’s recommended upgrade process is to upgrade to the latest SQL Server version, but keep the source database compatibility level. Then, enable Query Store on each database and collect baseline data on the workload. Next, you set the database compatibility level to the latest version, and then use Query Store to fix your performance regressions by forcing the last known good plan.

You really want to avoid a haphazard “blind” migration where you are blissfully unaware of how this works and how your workload will react to these changes. Changing the database compatibility level to an appropriate version and using the appropriate database scoped configuration options, along with appropriate query hints where absolutely necessary, is extremely important with modern versions of SQL Server.

Another thing to consider (especially for ISVs) is that Microsoft is starting to really push the idea that you should think about testing and certifying your databases and applications against a particular database compatibility level rather than a particular version of SQL Server. Microsoft provides query plan shape protection when the new SQL Server version (target) runs on hardware that is comparable to the hardware where the previous SQL Server version (source) was running, and the same supported database compatibility level is used on both the target and source SQL Server instances.

The idea here is that once you have tested and certified your applications on a particular database compatibility level, such as 130, you will get the same behavior and performance if you move that database to a newer version of SQL Server (such as SQL Server 2017 or SQL Server 2019) as long as you are using the same database compatibility level and you are running on equivalent hardware.