Query Store Settings

In SQL Server 2017 there are nine (9) settings related to Query Store.  While these are documented in sys.database_query_store_options, I often get asked what the value for each setting “should” be.  I’ve listed out each setting below, along with the default value and considerations around changing the setting.

 

OPERATION_MODE

The default value for a new or upgraded database in SQL Server 2016 or SQL Server 2017 is OFF.  For Azure SQL Database, the default value is READ_WRITE.

If you want to enable Query Store, this needs to be set to READ_WRITE, which is the desired state.

You also have the option of READ_ONLY, whereby new queries, new plans, runtime statistics, and wait statistics (in SQL Server 2017) will not be captured, but any forced plans will still be forced.  This state can occur if you reach the MAX_STORAGE_SIZE_MB limit (see below).  You can check actual state against desired state using the query below:

SELECT [actual_state_desc], [desired_state_desc]
FROM [sys].[database_query_store_options];
GO

It’s recommended to always run in a READ_WRITE state.  I have heard of some environments which switch between READ_WRITE and READ_ONLY.  If you want to understand your workload and have the data needed to troubleshoot performance issues, you need to be capturing information on a continual basis.
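If Query Store is not yet enabled, or you need to move a database back to the READ_WRITE state, the commands look like this (a minimal sketch, where [YourDatabase] is a placeholder for your database name):

```sql
-- Enable Query Store for the database (no effect if already enabled)
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
GO
-- Set the desired state to READ_WRITE
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO
```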

 

QUERY_CAPTURE_MODE

The default value for SQL Server 2016 and SQL Server 2017 is ALL.  For Azure SQL Database, the default value is AUTO.

With AUTO, queries that are insignificant from a resource utilization perspective, or executed infrequently, are not captured.  If you need to capture queries that may only execute a few times, or those that use very few resources, then use ALL.  Otherwise, use AUTO, as this will capture the relevant majority of your workload.

There is a third option, NONE, where no new queries are captured.  Runtime and wait statistics will continue to be captured for queries that are already in Query Store.

I recommend setting this option to AUTO, as the number of queries in your environment that need tuning/your attention is a small percentage of the total number of queries that execute.  You won’t miss out on important data if you exclude queries that don’t use a lot of resources or don’t execute very often.
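To set the capture mode to AUTO and confirm the value afterward ([YourDatabase] is a placeholder; run the SELECT in the context of that database):

```sql
ALTER DATABASE [YourDatabase] SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);
GO
-- Verify the change
SELECT [query_capture_mode_desc]
FROM [sys].[database_query_store_options];
GO
```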

 

MAX_PLANS_PER_QUERY

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 200.

This setting is an integer, so theoretically you can set it to 2,147,483,647!  If you don’t know how many distinct plans you might have for a query, you can use sys.dm_exec_query_stats and get a count of distinct query_plan_hash values for a given query_hash:

SELECT [query_hash], COUNT(DISTINCT [query_plan_hash]) AS [distinct_plans]
FROM [sys].[dm_exec_query_stats]
GROUP BY [query_hash]
ORDER BY [distinct_plans] DESC;
GO

While I would like to believe that 200 distinct plans for a query is really high, I’ve talked to several DBAs who confirmed they had counts in the thousands.  Thus, you may need to increase this setting if you have queries that are unstable and generate a lot of different plans, and you want to capture each different plan.  Understand that a workload with a large number of plans for a query will require more space, hence the limitation.  You can set the limit lower than the possible number of plans to control the size, with the understanding that you won’t capture every plan variation.  The value of 200 is a good starting point for most environments.
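If you do decide to adjust the limit, the syntax is a one-liner ([YourDatabase] is a placeholder; choose a value appropriate for your workload):

```sql
ALTER DATABASE [YourDatabase] SET QUERY_STORE (MAX_PLANS_PER_QUERY = 200);
GO
```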

 

MAX_STORAGE_SIZE_MB

For SQL Server 2016 and SQL Server 2017 the default value is 100MB.  For Azure SQL Database, the default value is specific to the tier (Basic = 10MB, Standard = 100MB, Premium = 1GB).

The Query Store data is stored in internal tables in the user database (in the PRIMARY filegroup, like other system tables) and exposed through catalog views.  You can configure how much disk space can be used by Query Store.

This setting should be increased for an on-premises solution, and it may need to be increased for Azure SQL Database.  There are multiple factors that affect how much space you will need for Query Store data.  These factors are:

  • The value for QUERY_CAPTURE_MODE; if you’re capturing ALL queries, you will have more information than if using AUTO.  The amount of data is difficult to predict – it depends on your workload (Do you have a lot of queries that run just one time?  Do you have a lot of queries that use very little resources?).
  • The length of time you retain data in Query Store (CLEANUP_POLICY).  The more data you keep, the more space you will need.
  • Whether you’re running SQL Server 2017 and capturing wait statistics information (WAIT_STATS_CAPTURE_MODE).  The wait statistics information is extremely valuable, but it is more data to keep and retain.
  • The value for INTERVAL_LENGTH_MINUTES.  The lower this value, the more runtime statistics data you will have and thus you will need more space.
  • Type of workload.  If you have an ad-hoc workload with high variation in query text, then you will have more individual queries stored, and thus more plans and more runtime and wait statistics for them.  If you have a stable workload without ad-hoc queries or queries generated by dynamic strings or ORM tools (like NHibernate or Entity Framework), then you will have fewer queries and less data overall.

As you can see, there is no “answer” for what the value for MAX_STORAGE_SIZE_MB should be.  I recommend starting with 2GB allocated, and then monitor via sys.database_query_store_options and Extended Events.  For some solutions, 1GB is plenty.  For other solutions, you may need 5GB or more.
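To increase the limit and then keep an eye on actual usage, you can run something like the following ([YourDatabase] is a placeholder, and 2048MB is just the starting point suggested above, not a definitive recommendation):

```sql
ALTER DATABASE [YourDatabase] SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 2048);
GO
-- Monitor how much of the allocated space is in use
SELECT [current_storage_size_mb], [max_storage_size_mb]
FROM [sys].[database_query_store_options];
GO
```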

Update May 30, 2019: There is still no documentation from Microsoft that lists a recommendation for MAX_STORAGE_SIZE_MB; however, the max value to which you can set this option in Azure SQL Database is 10GB…suggesting that Microsoft might think that anything larger than 10GB is too large. Why would it matter? A larger Query Store can take longer to load, and creates more overhead. You may have to decrease how long you’re keeping data to get within a 10GB size.

 

CLEANUP_POLICY (STALE_QUERY_THRESHOLD_DAYS)

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 30, with the exception of the Basic tier for Azure SQL Database, which defaults to 7 days.

How much historical data do you want to keep?  If you’re a shop that develops in production, you might want to keep more history.  If your workload is pretty stable and you only roll out changes quarterly or less frequently, 30 days may be enough information for you.  The more data that you retain, the more disk space you will need.  If you’re not certain about your workload, I recommend starting with at least 30 days for this setting, and over the first couple months of use you’ll figure out if you want to keep older data.
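The syntax for this option is a bit unusual, as the threshold is nested inside CLEANUP_POLICY ([YourDatabase] is a placeholder):

```sql
ALTER DATABASE [YourDatabase] SET QUERY_STORE
    (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30));
GO
```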

 

SIZE_BASED_CLEANUP_MODE

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is AUTO, and I recommend leaving it as such.

With a value of AUTO, as Query Store gets close to the storage size allocated by MAX_STORAGE_SIZE_MB it will automatically purge out the oldest data to make sure there is enough space for new data.  There is a possibility for data that has not reached the CLEANUP_POLICY to be removed (e.g. if MAX_STORAGE_SIZE_MB is 2GB and CLEANUP_POLICY is 30 days, and you reach 2GB in 15 days, data will start to be removed).

You can set this to OFF, but in that scenario, if the MAX_STORAGE_SIZE_MB is reached the OPERATION_MODE will change to READ_ONLY and you will no longer capture new data.  It is recommended to leave this set to AUTO and adjust MAX_STORAGE_SIZE_MB as appropriate.

 

DATA_FLUSH_INTERVAL_SECONDS

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 900 (15 minutes).

It is recommended to leave this value at the default.

 

INTERVAL_LENGTH_MINUTES

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 60.

This is a critical setting, as it determines the window of time across which runtime statistics will be aggregated.  You can only select fixed values for this setting (1, 5, 10, 15, 30, 60, 1440).  The smaller this value, the smaller the window of time for which you will have runtime stats.  This will allow you to look at data at a more granular level.  However, the smaller the value the more data you will capture and thus the more space that is needed.

For the client environments that I support, I’ve set this to 30, as I like a smaller window of time for analysis and based on the performance issues I’ve had to troubleshoot thus far, that’s been a good window.  If you have space constraints or concerns, then leave it at the default of 60.
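For example, to lower the aggregation window to 30 minutes ([YourDatabase] is a placeholder; remember that only the fixed values listed above are valid):

```sql
ALTER DATABASE [YourDatabase] SET QUERY_STORE (INTERVAL_LENGTH_MINUTES = 30);
GO
```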

 

WAIT_STATS_CAPTURE_MODE

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is ON.

If you upgrade a database which has Query Store enabled from SQL Server 2016 to SQL Server 2017, the WAIT_STATS_CAPTURE_MODE will be enabled on upgrade.  If you have a database on SQL Server 2017 and enable Query Store, this option will be enabled.

I recommend enabling this option if you’re on SQL Server 2017 as this information can be extremely valuable when troubleshooting query performance.  Note that you may need to increase MAX_STORAGE_SIZE_MB to accommodate this additional data.
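To enable wait statistics capture on SQL Server 2017 and confirm the setting ([YourDatabase] is a placeholder; run the SELECT in the context of that database):

```sql
ALTER DATABASE [YourDatabase] SET QUERY_STORE (WAIT_STATS_CAPTURE_MODE = ON);
GO
-- Verify the change
SELECT [wait_stats_capture_mode_desc]
FROM [sys].[database_query_store_options];
GO
```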

54 thoughts on “Query Store Settings”

  1. Great post, with lots of insight!

    I believe it should be WAIT_STATS_CAPTURE_MODE instead of WAIT_STATISTICS_CAPTURE_MODE, but on the flip side this was the top ranked result for “Query Store settings” Web search, so I might as well fix the SQL parser code and post the inverse comment in the other, lower ranked MSDN articles… 🙂

  2. I want to omit all sys.dm queries, including (and especially) queries generated by Query Store’s GUI and the new TSQL I’m discovering. Said another way: I want to isolate user ad-hoc and stored procedure executions. How do I configure QS to do that? I’d also like to separate or partition ad-hoc and stored procedures within the query store. Is there a way to do this?

    1. Geoff-

      There is no way to specify what queries you do NOT want Query Store to capture. The only setting that affects what queries are captured is QUERY_CAPTURE_MODE. If that’s a feature you’d like to see, you can create a request in UserVoice (and let me know if you do, happy to add it to my list to try and get people to up-vote it).

      Hope that helps,

      Erin

  3. my google-fu is failing me. is there a way to isolate Query Store data to a separate volume? I have a thousand or so databases that we’ve gotten pretty good at predicting growth for. I’m unsure of how much storage Query Store will need, but to be valuable across various scheduled workflows, I need 30 days. I know I can specify storage limits for QS, but wondering if I can just add a large volume of cheaper storage and dump Query Store there?

        1. Ahh that was you! I added my vote yesterday and linked it to the StackOverflow answer so hopefully others can find it.

  4. Hi Erin, thanks for the article. From what I’ve seen, when I enable Query Store for a database, the plan cache for that database is flushed. Is this what you’ve experienced? And if so, is there any way of avoiding it? It seems OK when switching from read to read-write, but that initial enablement of Query Store appears to be flushing the DB’s plan cache.

    1. Hi Simon-

      I am guessing that you are on SQL Server 2016 and running into this bug:

      https://support.microsoft.com/en-us/help/3212523/fix-a-memory-leak-occurs-when-sql-server-procedure-cache-consumes-too

      It’s fixed in SQL 2016 SP1 CU2 and SQL 2016 CU4. I highly recommend upgrading to the latest release (SQL 2016 SP2 CU1), but if not, at least get to the version in which the issue is fixed. Hope this helps, let me know if you have additional questions,

      Erin

      1. Hi Erin. I am indeed on SQL2016 SP1, but I’ve upgraded a local environment to SQL2016 SP1 CU9 and I still witness the exact same behaviour – the procedure/plan cache for the database that has Query Store enabled is flushed. I posed the question also on MSDN and Erland Sommarskog said that he’d seen the same behaviour in SQL2017.

  5. Hi Erin.

    One other thing I’ve noticed from my testing is that at 11:30 (GMT) each day there is high activity on the perfmon Query Store counters. CPU is up at an impressive 372% one day (I can see the actual server CPU is higher but still less than 15% at the time, so I’m not overly trusting that counter), but the other Query Store perfmon counters for reads/writes are also way higher than at other times of day. Is there some sort of daily flush that happens, or is it perhaps the stale queries being removed? I want to work out what process is occurring and if it’s something configurable, to ensure this happens at a quieter time of day.

    1. Simon-

      Query Store is going to flush data to disk on a regular basis based on the DATA_FLUSH_INTERVAL_SECONDS value (e.g. every 15 minutes), and it can flush data more frequently if it deems necessary. But there is no other “schedule” that it follows in terms of flushing.
      With regard to cleanup, that’s going to occur automatically based on SIZE_BASED_CLEANUP_MODE and CLEANUP_POLICY (STALE_QUERY_THRESHOLD_DAYS). If you have the cleanup policy at 30 days, it’s regularly going to look for data to clear out that’s older than 30 days, but I’m pretty certain that’s not at a specific time. It will also clear out data if it approaches the max size if you have SIZE_BASED_CLEANUP_MODE enabled. So it could be that. But it could be a result of a change in workload – is there anything going on in the system at that time that changes the volume of queries executed, maybe more ad-hoc queries, maybe more recompiles?

      Erin

  6. I have started to learn and implement Query Store on a limited basis and am in the middle of your course via Pluralsight. I have initially noticed some clutter with SQL being stored from maintenance jobs, Update Statistics in particular. Is there any way to exclude certain maintenance queries?

    1. Hi Jim-

      The only way to “restrict” what queries are stored in Query Store is via the QUERY_CAPTURE_MODE setting. If you have it set to AUTO and those internal queries are getting captured, then it’s because they execute frequently and/or exceed a threshold set internally, there’s no way to exclude them.

      Thanks for reading and watching!

      Erin

  7. Is there a way to control what queries are reported in the standard SSMS QueryStore reports? I don’t need to see system operations like StatMan because I can’t tune them directly. I may want the overall picture at some points but at others I want to be more restrictive. I can go to code, certainly, but the UI has some advantages in a training scenario.

    1. Hi Bryant-

      There is not a way to control what is reported in the reports – that would be a User Voice request for the tools team, right now it doesn’t filter out system statements or anything else.

      Erin

  8. While it does not filter by user or type of operation, you can filter some things with sp_BlitzQueryStore. Per the site below, you can provide the following parameters:

    @DatabaseName NVARCHAR(128) — Database you want to look at Query Store for
    @Top INT — How many plans per metric you want to bring back
    @StartDate DATETIME2 — Start range of data to examine (if NULL, will go back seven days)
    @EndDate DATETIME2 — End range of data to examine (if NULL, will default to today’s date, though this changes if @StartDate isn’t NULL)
    @MinimumExecutionCount INT — Minimum number of executions a query must have before being analyzed
    @DurationFilter DECIMAL(38,4) — Minimum length in seconds a query has to run for before being analyzed
    @StoredProcName NVARCHAR(128) — If you want to look for a particular stored procedure
    @Failed BIT — If you want to look for only failed queries
    @ExportToExcel BIT — Backwards compatibility, skips the generalized warnings, doesn’t display query plan xml, and cleans/truncates query text
    @HideSummary BIT — Hides the general warnings table
    @SkipXML BIT — Skips XML analysis entirely, just returns unanalyzed plans
    @Debug BIT — Prints out any dynamic SQL used for debugging

    https://www.brentozar.com/archive/2017/06/introducing-sp_blitzquerystore/

  10. Hi, I was running Query Store on a SQL Server 2017 (14.0.2002.14) with some databases in a mirroring scenario (with a witness). Query Store was READ_WRITE enabled and all was fine.
    One ESX host went down, and a failover on the mirror SQL Server occurred. After rebooting the ESX and falling back to the original mirror scenario, the Query Store does not collect any data.
    With
    SELECT TOP 10 qt.query_sql_text, q.query_id,
    qt.query_text_id, p.plan_id, rs.last_execution_time
    FROM sys.query_store_query_text AS qt
    JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
    JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
    JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
    ORDER BY rs.last_execution_time DESC;
    The last_execution_time field is old, just before the failover. With
    SELECT actual_state, actual_state_desc, readonly_reason,
    current_storage_size_mb, max_storage_size_mb
    FROM sys.database_query_store_options;
    I can see that the query store is in READ_WRITE mode and the current_storage_size_mb is less than max_storage_size_mb.
    In the SQL Server Error Logs I can see start, stop and configuration changes of the Query store, but no errors are logged. Also no errors in the Windows Eventlog.
    Any ideas, which could help me to find the reason for the problem? Thanks for help
    Best Regards from Austria
    Peter

    1. Hi Peter-

      I have never tested Query Store with a database mirroring failure so I am not sure of the behavior. Did you try turning Query Store off, then turning it back on?

      Erin

      1. Peter,
        I am also facing the same issue. I am using RDS Multi-AZ on AWS and after failover of the instance, I see that on the new primary, Query Store stops collecting SQL even when its status is READ_WRITE. Did you find any solution?

        Thanks
        Sukh

  11. Hi Erin, this is a great article, and thanks for keeping it up to date. Hopefully, the next release of SQL Server (the one after v2019) will provide us with the option to enable Query Store on an AlwaysOn read-only replica. As it stands now, the Query Store repository is contained at the database level, and while QS is available for reporting on the secondary, the statistics we see there are actually replicated from the primary. Unfortunately, the most interesting database applications I’m responsible for monitoring use the primary only for ETL and all application queries are directed to the secondary, which makes QS not useful for troubleshooting anything other than ETL.
    OpenQueryStore (an entirely different topic) approaches this by providing an option to contain statistics in a designated and centralized database, so maybe we’ll eventually have a similar solution for standard Query Store.
    – Eric

    1. Hi Eric-

      The behavior for QS and read-only replicas has been this way since the beginning (https://www.sqlskills.com/blogs/erin/query-store-and-availability-groups/) and I definitely recommend up-voting the User Voice item for this request:

      https://feedback.azure.com/forums/908035-sql-server/suggestions/32899126-enable-query-store-for-collection-on-a-read-only-r

      It has a large number of votes, so hopefully it will get into a future release, but it will require a major architecture change.

      Erin

  12. hi Erin
    Thanks for your insight in to QS.
    I’ve been heavily using QS on my very busy OLTP system and recently noticed an issue which I can’t explain.
    I have had a query forced on my AG environment and was using the “forced” plan for a while.
    Recently I noticed the query stopped using the “forced” plan but started using the same plan with a different plan ID, and performance was not as good as with the “forced” plan. I have compared the two plans and the below properties are different:

    compileCPU
    CompileMemory
    CompileTime
    Estimated SUbtree cost
    QueryHash
    QueryPlanCache

    However, StatementSqlHandle is the same (and everything else).

    I’m on Microsoft SQL Server 2016 (SP2-CU3) (KB4458871) – 13.0.5216.0 (X64) Sep 13 2018 22:16:01 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2012 R2 Datacenter 6.3 (Build 9600: ) (Hypervisor)

  13. Hi-

    Thank you for comment. Based on the information provided my guess is that you are running into the scenario where the plans are “morally equivalent”. My friend Kendra Little blogged about it here: https://sqlworkbooks.com/2018/03/forced-plan-confusion-is_forced-vs-use-plan-true/ and there is a User Voice item for it: https://feedback.azure.com/forums/908035-sql-server/suggestions/33562136-indicate-morally-equivalent-forced-plan-where-us but there is no update on it.

    Hope that helps,

    Erin

  14. Super useful, great job! TY.
    I think adding an example with all the recommended values applied to query store properties might help as some syntax is quite painful (cleanup policy, I’m looking at you).

    SELECT name, 'USE [master] '
    + ' ALTER DATABASE ' + QUOTENAME(name) + ' SET QUERY_STORE = ON '
    + ' ALTER DATABASE ' + QUOTENAME(name) + ' SET QUERY_STORE ('
    + ' OPERATION_MODE = READ_WRITE'
    + ', QUERY_CAPTURE_MODE = AUTO'
    + ', MAX_PLANS_PER_QUERY = 200'
    + ', MAX_STORAGE_SIZE_MB = 2000'
    + ', CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)'
    + ', SIZE_BASED_CLEANUP_MODE = AUTO'
    + ', DATA_FLUSH_INTERVAL_SECONDS = 900'
    + ', INTERVAL_LENGTH_MINUTES = 60'
    + ', WAIT_STATS_CAPTURE_MODE = ON '
    + ') '
    FROM sys.databases
    WHERE database_id > 4;
