Database Compatibility Level and Query Store

A question I’ve gotten a few times when teaching relates to database compatibility level and Query Store. I was talking to a client yesterday about post-upgrade plans and implementing Query Store, and the topic came up again. They wanted to know what compatibility level the database needed in order to use Query Store.

The quick answer: it doesn’t matter.

Query Store is an engine feature that was introduced in SQL Server 2016 (and available in Azure SQL Database since Q4 of 2015) and continues to evolve with each release. It is a feature that has to be enabled at the database level, using ALTER DATABASE SET, and it is compatibility level agnostic.

However, Query Store does track both engine version and compatibility level at the plan level in sys.query_store_plan. This allows you to use Query Store data for comparing plan performance for a query before and after upgrades, both at the engine level and database compatibility level.
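
As a quick illustration, here’s a minimal sketch of pulling those two columns straight from sys.query_store_plan (both are documented columns in the view):

SELECT 
	[qsp].[query_id],
	[qsp].[plan_id],
	[qsp].[compatibility_level],
	[qsp].[engine_version]
FROM [sys].[query_store_plan] [qsp]
ORDER BY [qsp].[query_id], [qsp].[plan_id];
GO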

Query Store Demo Setup

I have a copy of WideWorldImporters restored on a SQL Server 2019 instance. Note that it’s not the standard copy you get from GitHub; it’s one I’ve run the DataLoadSimulation.DailyProcessToCreateHistory stored procedure against to add more data and introduce some skew. If you recreate this against your own copy of WideWorldImporters, you may not see the same results.

The first step after restoring is to enable Query Store, and then clear out any old QS data. This is something I do only for demos; I don’t recommend it for a production database unless you’re ok with removing all Query Store data.

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE = ON;
GO
ALTER DATABASE [WideWorldImporters] SET QUERY_STORE (
	OPERATION_MODE = READ_WRITE, INTERVAL_LENGTH_MINUTES = 10
	);
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE CLEAR;
GO

Next, set the compatibility level to 110, and set the database to use the legacy Cardinality Estimator, just as if we had upgraded this database from SQL Server 2012 to SQL Server 2019.

USE [master];
GO
ALTER DATABASE [WideWorldImporters] 
	SET COMPATIBILITY_LEVEL = 110;
GO

USE [WideWorldImporters];
GO
ALTER DATABASE SCOPED CONFIGURATION 
	SET LEGACY_CARDINALITY_ESTIMATION = ON;
GO

Compatibility Level 110

We have a very simple query that we’ll execute 10 times against WideWorldImporters, and display the actual execution plan.

USE [WideWorldImporters];
GO

SELECT 
	[ol].[StockItemID], 
	[ol].[Description], 
	[ol].[UnitPrice],
	[o].[CustomerID], 
	[o].[SalespersonPersonID],
	[o].[OrderDate]
FROM [Sales].[OrderLines] [ol]
JOIN [Sales].[Orders] [o]
	ON [ol].[OrderID] = [o].[OrderID]
WHERE [ol].[UnitPrice] = 240.00
AND [o].[OrderDate] = '2016-04-16';
GO 10

Execution Plan with Compatibility Level = 110

The plan has two Nested Loops joins, with a clustered index scan against the Orders table and an index seek against the FK_Sales_OrderLines_OrderID nonclustered index. Within Query Store, the sys.query_store_plan table shows a compatibility_level of 110, and an engine_version of 15.0.1900.25.

Output from sys.query_store_plan

Compatibility Level 150

Without making any changes to the Query Store configuration, change the compatibility level to 150. The Cardinality Estimator version stays set to Legacy.

USE [master];
GO
ALTER DATABASE [WideWorldImporters] 
	SET COMPATIBILITY_LEVEL = 150;
GO

Next, re-run the same query against Orders and OrderLines and note the plan change:

Execution Plan with Compatibility Level = 150

We still have the clustered index scan against Orders, but it feeds into a Hash Match that uses a columnstore index scan against OrderLines to probe for matching rows – it’s a different plan. We see this in Query Store as a different plan_id (4), with a compatibility_level of 150.

SELECT 
	[qst].[query_text_id],
	[qsq].[query_id],  
	[qst].[query_sql_text], 
	[rs].[count_executions],
	[qsp].[plan_id], 
	[qsp].[compatibility_level],
	[qsp].[engine_version]
FROM [sys].[query_store_query] [qsq] 
JOIN [sys].[query_store_query_text] [qst]
	ON [qsq].[query_text_id] = [qst].[query_text_id]
JOIN [sys].[query_store_plan] [qsp] 
	ON [qsq].[query_id] = [qsp].[query_id]
JOIN [sys].[query_store_runtime_stats] [rs] 
	ON [qsp].[plan_id] = [rs].[plan_id] 
WHERE [qst].[query_text_id] = 1; 
GO

Output from sys.query_store_plan after change to compatibility level 150

Leveraging Query Store

One of the many ways you can take advantage of the data in Query Store, beyond just identifying problematic queries, is to use it for A/B testing. This is entirely the purpose of the Query Tuning Assistant, which will compare plan performance for you, as well as stabilize query performance. If you are unable to run a representative workload against your database, or just want to investigate specific, critical queries, you have all the data at your disposal in Query Store. You simply need to be running SQL Server 2016 or higher, or Azure SQL Database, and have Query Store enabled, with the compatibility level set to any supported value (100 or higher).
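
As a sketch of that kind of manual A/B comparison, the query below aggregates runtime statistics by plan and compatibility level for a single query; the query_id value is a placeholder, so substitute the query you care about (durations in sys.query_store_runtime_stats are in microseconds):

SELECT 
	[qsp].[query_id],
	[qsp].[plan_id],
	[qsp].[compatibility_level],
	[qsp].[engine_version],
	SUM([rs].[count_executions]) AS [total_executions],
	AVG([rs].[avg_duration]) AS [avg_duration_us],
	AVG([rs].[avg_cpu_time]) AS [avg_cpu_time_us]
FROM [sys].[query_store_plan] [qsp]
JOIN [sys].[query_store_runtime_stats] [rs]
	ON [qsp].[plan_id] = [rs].[plan_id]
WHERE [qsp].[query_id] = 1	/* placeholder query_id */
GROUP BY [qsp].[query_id], [qsp].[plan_id],
	[qsp].[compatibility_level], [qsp].[engine_version]
ORDER BY [qsp].[compatibility_level];
GO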

Query Store Options in SSMS 18.4

Last week the SQL Server Tools team released a new version of Management Studio, 18.4, and it dropped in the middle of my Query Store full day pre-con on Monday. Those two are related. There were some changes in SSMS 18.4 specific to Query Store, and for a few seconds over lunch I thought about downloading the new version and running it for my afternoon demos. But…I didn’t want to anger the demo Gods, so I stuck with 18.3.

What’s New?

The list of features and fixes for each release of SSMS can be found here, and for this post I’m interested in the features related to Query Store. Within Management Studio we’ll create a new database named QS_Settings and turn on Query Store with the default options:

/*
     Of note: I NEVER recommend creating a database
     with this syntax, I am being lazy and just accepting all
     defaults, JUST for demo purposes
*/
USE [master];
GO
CREATE DATABASE [QS_Settings];
GO
ALTER DATABASE [QS_Settings] SET QUERY_STORE = ON;
GO
ALTER DATABASE [QS_Settings] SET QUERY_STORE (
     OPERATION_MODE = READ_WRITE
     );
GO

If you look at the Query Store page in the Database Properties window, you’ll see some additional entries:

Query Store options in SSMS 18.4

For comparison, here is a screen shot of the same window in SSMS 18.3:

Query Store options in SSMS 18.3 and below

There are two options that, previously, you could only modify using T-SQL:

  • MAX_PLANS_PER_QUERY
  • WAIT_STATS_CAPTURE_MODE

And there is an entirely new section specific to the CUSTOM value for QUERY_CAPTURE_MODE, which was added in SQL Server 2019. In addition, the default values for some settings changed in SQL Server 2019.
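
Whichever version of the tools you use, you can always confirm the values in effect with sys.database_query_store_options. A minimal sketch (wait_stats_capture_mode_desc exists in SQL Server 2017 and later):

USE [QS_Settings];
GO
SELECT 
	[actual_state_desc],
	[max_storage_size_mb],
	[max_plans_per_query],
	[query_capture_mode_desc],
	[wait_stats_capture_mode_desc]
FROM [sys].[database_query_store_options];
GO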

Setting MAX_PLANS_PER_QUERY and WAIT_STATS_CAPTURE_MODE

When you enable Query Store in any version of SQL Server, by default the value for MAX_PLANS_PER_QUERY is 200. As mentioned in my Query Store Settings post, I recommend leaving this setting at the default value. The only case where I would change it is if I find that I have queries with more than 200 plans. If you are running SSMS 18.3 or below, or Azure Data Studio (ADS), you can use:

USE [master];
GO
ALTER DATABASE [QS_Settings] SET QUERY_STORE (
     MAX_PLANS_PER_QUERY = 500
     );
GO
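
To check whether you actually have any queries past that limit, a simple count of plans per query in sys.query_store_plan works; a minimal sketch:

SELECT 
	[qsp].[query_id],
	COUNT([qsp].[plan_id]) AS [plan_count]
FROM [sys].[query_store_plan] [qsp]
GROUP BY [qsp].[query_id]
HAVING COUNT([qsp].[plan_id]) > 200
ORDER BY [plan_count] DESC;
GO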

When you enable Query Store in SQL Server 2017 or 2019, WAIT_STATS_CAPTURE_MODE is enabled by default (remember that query-level wait statistics were not available in Query Store in SQL Server 2016). When you upgrade to SQL Server 2017 or 2019 from SQL Server 2016 and you have Query Store enabled, WAIT_STATS_CAPTURE_MODE is automatically enabled. If you want to turn it off for some reason, and you’re using SSMS 18.3 or below, or ADS, you can run:

USE [master];
GO
ALTER DATABASE [WideWorldImporters] SET QUERY_STORE (
     WAIT_STATS_CAPTURE_MODE = OFF
     );
GO
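
Once wait statistics are being captured, they show up per plan and wait category in sys.query_store_wait_stats; a minimal sketch to see the top waits (values in this view are in milliseconds):

SELECT 
	[ws].[plan_id],
	[ws].[wait_category_desc],
	SUM([ws].[total_query_wait_time_ms]) AS [total_wait_ms]
FROM [sys].[query_store_wait_stats] [ws]
GROUP BY [ws].[plan_id], [ws].[wait_category_desc]
ORDER BY [total_wait_ms] DESC;
GO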

With SSMS 18.4, you now have the ability to change both options via the UI. In general, I recommend using the T-SQL because then it can go into change control, and you don’t have to worry about altering another setting by mistake.

CUSTOM Options for QUERY_CAPTURE_MODE

In SQL Server 2019 we now have the ability to customize which queries are captured, based on execution count, total compile CPU time (ms), OR total execution CPU time (ms), within a defined window of time. This provides much more control than the AUTO option, which excludes queries based on internal thresholds determined by Microsoft that are not documented. The CUSTOM option is a game-changer, as it allows those with ad hoc workloads to use Query Store and minimize the performance overhead.

Changing to CUSTOM

When upgrading a database from SQL Server 2016 or 2017 to SQL Server 2019, with Query Store already enabled, the value for QUERY_CAPTURE_MODE will not change. To enable and customize the options if you are running SSMS 18.3 or lower, or ADS, you can use:

USE [master];
GO
ALTER DATABASE [QS_Settings] SET QUERY_STORE (
     QUERY_CAPTURE_MODE = CUSTOM,
     QUERY_CAPTURE_POLICY = (
          STALE_CAPTURE_POLICY_THRESHOLD = 1 HOURS, 
          EXECUTION_COUNT = 30,
          TOTAL_COMPILE_CPU_TIME_MS = 1000, 
          TOTAL_EXECUTION_CPU_TIME_MS = 100
          )
     );
GO

Within SSMS 18.4, when you change QUERY_CAPTURE_MODE to CUSTOM in the UI you will see:

Options for CUSTOM capture policy

At this time, I do not have recommendations, based on data, for these options. I have one customer running SQL Server 2019, and I haven’t heard from many people who are running SQL Server 2019 and have Query Store enabled. Yet. Over time, I am hoping to get better information, but my initial suggestion is that you make a best estimate and then monitor. The default values are a good starting point. The minimum window of time (STALE_CAPTURE_POLICY_THRESHOLD) is one hour, which is also the default. Note that the drop-down for this option contains a list of values:

STALE_CAPTURE_POLICY_THRESHOLD drop-down options

Unlike INTERVAL_LENGTH_MINUTES, these are not the only values that you can use. For example, you can set STALE_CAPTURE_POLICY_THRESHOLD to a value not in the list, such as two hours, using T-SQL:

USE [master];
GO
ALTER DATABASE [QS_Settings] SET QUERY_STORE (
     QUERY_CAPTURE_MODE = CUSTOM, 
     QUERY_CAPTURE_POLICY = (
          STALE_CAPTURE_POLICY_THRESHOLD = 2 HOURS)
     );
GO

The value of 2 HOURS will then appear in the drop-down list going forward, which I find interesting, but that’s another rabbit hole to explore. Acceptable values for this option range from 1 hour to 7 days. While you can be clever and choose something like 42 hours, think about what window of time is appropriate for a business day, whether it’s an 8-hour day or a 24-hour day because you’re 24×7.

Thresholds

For EXECUTION_COUNT, the default value is 30, which is appropriate for an ad hoc workload, as most queries in that type of workload only execute once. Determining values for TOTAL_COMPILE_CPU_TIME_MS and TOTAL_EXECUTION_CPU_TIME_MS is a bit trickier. The defaults are 1000ms and 100ms, respectively. The settings are in milliseconds, but the data in the Query Store catalog views is stored in microseconds; remember this when you’re comparing your settings to what is in Query Store.
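
For example, to compare your TOTAL_EXECUTION_CPU_TIME_MS setting against what is already in Query Store, divide avg_cpu_time (microseconds) by 1000; a minimal sketch:

SELECT 
	[rs].[plan_id],
	[rs].[count_executions],
	[rs].[avg_cpu_time] / 1000.0 AS [avg_cpu_time_ms]
FROM [sys].[query_store_runtime_stats] [rs]
ORDER BY [rs].[avg_cpu_time] DESC;
GO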

Think about the execution time first. Do you have a threshold where if a query requires more than X amount of CPU time, you’re concerned? Maybe 100ms is too low, and 1000ms (1 second) is more appropriate for your environment.

Compilation time is the hardest. I would not recommend setting it below 1000ms until you can look at captured data in Query Store. Understand that both compile and execution CPU time are cumulative for these thresholds. If a query executes five times (below your EXECUTION_COUNT threshold) but takes 25ms of execution CPU time each time, then on that fifth execution it will exceed the 100ms TOTAL_EXECUTION_CPU_TIME_MS default, and subsequent executions will be captured.
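
If you want data to inform the compile threshold, sys.query_store_query tracks compile counts and average compile duration (also in microseconds); a minimal sketch that estimates cumulative compile time per query:

SELECT 
	[qsq].[query_id],
	[qsq].[count_compiles],
	[qsq].[avg_compile_duration] / 1000.0 AS [avg_compile_ms],
	([qsq].[count_compiles] * [qsq].[avg_compile_duration]) / 1000.0 AS [total_compile_ms]
FROM [sys].[query_store_query] [qsq]
ORDER BY [total_compile_ms] DESC;
GO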

Summary

For anyone using SQL Server 2019, I highly recommend using SSMS 18.4, particularly if you use Query Store, or plan on using it. If you’re running SQL Server 2016 or 2017, it’s nice to see those additional settings in the UI, and in general I recommend staying current with releases for all the tools, assuming you don’t have business rules that limit or control what you can install.

Monitoring Space Used by Query Store

Last week I presented a session on Query Store, and when talking about the settings I mentioned that monitoring the space used by Query Store is extremely important when you first enable it for a database. Someone asked me how I would do that, and as I provided an explanation I realized that I should document my method…because I give the same example every time and it would be nice to have the code.

For those of you not familiar with the Query Store settings, please check out my post which lists each one, the defaults, and what I would recommend for values and why. When discussing MAX_STORAGE_SIZE_MB, I mention monitoring via sys.database_query_store_options or Extended Events. As much as I love Extended Events, there isn’t an event that fires based on a threshold being exceeded. The event related to size is query_store_disk_size_over_limit, and it fires when the space used exceeds the value for MAX_STORAGE_SIZE_MB, which is too late. I want to take action before the maximum storage size is hit.
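
For completeness, here’s a minimal sketch of an event session that captures it, assuming the event resolves under the qds package on your build; just remember that by the time it fires, you are already over the limit:

CREATE EVENT SESSION [QS_SizeOverLimit] ON SERVER
ADD EVENT qds.query_store_disk_size_over_limit
ADD TARGET package0.event_file (SET filename = N'QS_SizeOverLimit');
GO
ALTER EVENT SESSION [QS_SizeOverLimit] ON SERVER STATE = START;
GO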

Therefore, the best option I’ve found is to create an Agent job which runs on a regular basis (maybe every four or six hours initially) that checks current_storage_size_mb in sys.database_query_store_options and calculates the space used by Query Store as a percentage of the total allocated, and then, if that exceeds the threshold you set, sends an email. The code that you can put into an Agent job is below. Please note you want to make sure the job runs in the context of the user database with Query Store enabled (as sys.database_query_store_options is a database-scoped view), and configure the threshold to a value that makes sense for your MAX_STORAGE_SIZE_MB. In my experience, 80% has been a good starting point, but feel free to adjust as you see fit!

Once your Query Store size has been tweaked and stabilized, I would leave this job in place as a safety to alert you should anything change (e.g. someone else changes a Query Store setting which indirectly affects the storage used).

/* Change DBNameHere as appropriate */
USE [DBNameHere];

/* Change Threshold as appropriate */
DECLARE @Threshold DECIMAL(4,2) = 80.00;
DECLARE @CurrentStorage INT;
DECLARE @MaxStorage INT;

/* sys.database_query_store_options is database-scoped */
SELECT @CurrentStorage = current_storage_size_mb, 
     @MaxStorage = max_storage_size_mb
FROM sys.database_query_store_options;

/* Compare space used as a percentage of the space allocated */
IF (CAST(@CurrentStorage AS DECIMAL(21,2)) / CAST(@MaxStorage AS DECIMAL(21,2)) * 100.00) >= @Threshold
BEGIN

     DECLARE @EmailText NVARCHAR(MAX) = N'The Query Store current space used is ' + CAST(@CurrentStorage AS NVARCHAR(19)) + N'MB
     and the max space configured is ' + CAST(@MaxStorage AS NVARCHAR(19)) + N'MB,
     which exceeds the threshold of ' + CAST(@Threshold AS NVARCHAR(19)) + N'%.
     Please allocate more space to Query Store or decrease the amount of data retained (stale_query_threshold_days).';

     /* Edit profile_name and recipients as appropriate */
     EXEC msdb.dbo.sp_send_dbmail
          @profile_name = 'SQL DBAs',
          @recipients = 'DBAs@yourcompany.com',
          @body = @EmailText,
          @subject = 'Storage Threshold for Query Store Exceeded';
END;
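
To schedule the check, here’s a minimal sketch using the msdb job procedures; the job, step, and schedule names are placeholders, and the @command value is where the threshold check above goes:

USE [msdb];
GO
EXEC dbo.sp_add_job
     @job_name = N'Query Store - Space Used Check';	/* placeholder name */

EXEC dbo.sp_add_jobstep
     @job_name = N'Query Store - Space Used Check',
     @step_name = N'Check space and email if over threshold',
     @subsystem = N'TSQL',
     @database_name = N'DBNameHere',	/* the database with Query Store enabled */
     @command = N'/* paste the threshold check from above here */';

EXEC dbo.sp_add_schedule
     @schedule_name = N'Every 6 hours',
     @freq_type = 4,			/* daily */
     @freq_interval = 1,
     @freq_subday_type = 8,		/* units of hours */
     @freq_subday_interval = 6;

EXEC dbo.sp_attach_schedule
     @job_name = N'Query Store - Space Used Check',
     @schedule_name = N'Every 6 hours';

EXEC dbo.sp_add_jobserver
     @job_name = N'Query Store - Space Used Check';	/* defaults to the local server */
GO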