Why aren’t you using Query Store?

Way back in 2016 I wrote a post trying to find out why people were not using Extended Events.  See, I really like Extended Events, and I’d been presenting on the topic for several years, but I kept running into people who wouldn’t even try XE.  There were also a lot of individuals who declared they were “Profiler for Life”.  Ok, but I wanted to understand why.

I find myself asking people the same question now about Query Store. I’ve presented a lot of sessions on the topic, with the primary goal of teaching people how to use Query Store.  As such, most of the people in my sessions are not using Query Store.  As an example, if I have 30 people in a session, typically only 25-30% are running SQL Server 2016 or higher.  And of those 7 to 10 people, maybe 2 to 5 are using Query Store.  Does that mean that only 5%-15% of SQL Server users are using Query Store?  No.  I have no idea how many people who are running SQL Server 2016 or higher are using Query Store.  I’m pretty sure Microsoft doesn’t have an accurate number either, except for Azure SQL Database.  Query Store is turned on for nearly all 2 million user databases on Azure SQL Database.

So the primary purpose of this post is to provide a place (the comments) for people to share why they are not using Query Store.  I’m very interested in hearing this information, so please share!  And please comment even if it’s the same as what someone else has written.

In addition, if you don’t know much about Query Store but want to get started because you’re running SQL Server 2016 or higher (or are looking to upgrade), I’ve included some links below to help you get started.  Some links are for documentation or blog posts, but if you’re not a believer of RTFM :) , there are also links to my Pluralsight courses (requires a subscription) and an introductory session I’ve given (free).  If you have questions not answered in a post or video, let me know this in the comments as well!

Lastly, I’ve had multiple questions related to the performance impact of enabling Query Store.  This is a great question – look for a post next week that provides more detail.  But until then, please let me know why you’re not using Query Store, and thanks for reading!

 

Do you need to update statistics after an upgrade?

This post originally went live on May 11, 2018, but modifications were made on May 14, 2018 after some additional internal discussions with Microsoft.  Changes made on May 14, 2018 are in blue.  

There are a variety of methods we use for helping customers upgrade to a new SQL Server version, and one question we often get asked is whether or not statistics need to be updated as part of the upgrade process.

tl;dr

Yes.  Update statistics after an upgrade.  Further, if you’re upgrading to 2012 or higher from an earlier version, you should rebuild your indexes (which will update index statistics, so then you just need to update column statistics).

History

Some of you may remember that the stats blob changed between SQL Server 2000 and SQL Server 2005, and Microsoft specifically recommended updating statistics after upgrading from SQL Server 2000.  Official Microsoft documentation about the stats blob change in SQL Server 2005 is difficult to find, but this article includes the following paragraph:

After you upgrade from SQL Server 2000, update statistics on all databases. Use the sp_updatestats stored procedure to update statistics in user-defined tables in SQL Server databases. This step is not necessary for upgrading from SQL Server 2005.

Current Microsoft documentation related to upgrading does not state anything specific about updating statistics, but people continue to ask and if you peruse forums, blog posts, and other social media options, you’ll see recommendations to update statistics.  Further, the documentation that Microsoft provides about when to update statistics does not mention anything about upgrades.

Side bar: I don’t recommend using sp_updatestats, and here’s why: Understanding What sp_updatestats Really Updates.

Today

The statistics blob has not changed since SQL Server 2000 to my knowledge, but I thought I would ask someone from Microsoft for an official recommendation to share publicly.  Here you go:

Microsoft suggests that customers test the need for a full update of statistics after a major version change and/or a database compatibility level change.

Further items to note:

  1. If Microsoft updates the format of statistics (e.g. the stats blob), customers will be expected to update statistics after an upgrade.
    1. Microsoft does not always upgrade the statistics format as part of a major version upgrade.
  2. There are occasions where Microsoft does not change the format of statistics, but they do change the algorithm for creating statistics as part of a major version upgrade or database compatibility level change.

In addition, there was a change in the nonclustered leaf level internals in SQL Server 2012, so if you are upgrading to 2012 or higher from an earlier version (e.g. 2008, 2008R2), rebuild your nonclustered indexes.  And remember, rebuilding indexes updates the statistics for those indexes with a fullscan, so you do not need to update them again.
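
Putting that together, here is a minimal T-SQL sketch of the post-upgrade work; the database, table, and index scope are placeholders, and in practice you would drive this from sys.indexes/sys.stats or your regular index maintenance solution rather than table by table:

USE [YourDatabase];
GO

/* Rebuild the indexes (ALL also rebuilds the clustered index);
   index statistics are updated with a fullscan as part of the rebuild */
ALTER INDEX ALL ON [dbo].[YourTable] REBUILD;
GO

/* Then update just the column statistics; FULLSCAN is shown here,
   but a sampled update may be sufficient for very large tables */
UPDATE STATISTICS [dbo].[YourTable] WITH FULLSCAN, COLUMNS;
GO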

Conclusion

As part of your upgrade methodology, it is recommended (by me, based on experience with a lot of customer upgrades) to build in time to update statistics.  I’ve gotten some pushback from customers who don’t want to update statistics after upgrade because it takes too long.  Some kind reminders:

  • Updating statistics is an online operation, therefore, the database and related applications are accessible and usable. A statistics update does take a schema modification lock so you’re not allowed to make any changes to a table while its stats are updating.  Therefore, if you decide to change your schema after upgrading your SQL Server version (not something I would typically recommend), do that before you update stats.
  • You need to update statistics regularly to provide the optimizer with current information about your data, so at some point it needs to be done. Immediately after an upgrade is a really good time, considering the aforementioned items.

If you’re not comfortable upgrading to a newer version of SQL Server, we can help!  I’m in the process of helping a customer migrate from SQL Server 2012 to SQL Server 2017, and I’m so excited to get them up to the latest version so they can start using some new features…like Query Store 😉

Query Store Examples: Stories from customers

In the past week I’ve used Query Store to troubleshoot performance issues for two different customers running SQL Server 2016 and higher.  I’ve presented a lot of sessions about what Query Store is, how to configure it, and basic use cases, but we really take it to the next level when we can provide Query Store examples from the real world.  If you are running SQL Server 2016 or SQL Server 2017, I hope you’re using this feature!

Scenario 1

A long-time customer reached out for some help after adding 50% more memory to their VM and seeing no improvement in performance.  Sometimes you can’t throw hardware at an issue, and they wanted to look at using In-Memory OLTP to help improve performance.  After a 10-minute discussion on In-Memory OLTP, where I explained why it wouldn’t magically solve their performance issues (and also explained the amount of testing that would need to be done prior to implementation), I took a look at the system.

I started with a health audit and a review of wait statistics.  There were a couple of settings they could change, but nothing was horribly misconfigured.  Wait stats showed nothing remarkable.  The majority of waits were due to CXPACKET, but the average duration was extremely low.  They had some WRITELOG and PAGEIOLATCH waits, but these were also low in average duration.  I asked if they had Query Store enabled…they did not.  I requested that they enable it to capture query information while I started looking at plan cache data.  With Query Store collecting information, I used Glenn’s DMV queries to dig into the plan cache to look at queries that were executing most frequently, taking the longest, and consuming the most resources.

Now, while the plan cache has some fantastic information, it’s transitory.  Plans can fall out of cache because they aren’t used, or because they get recompiled, so truly tracking information for execution frequency over time can be a bit tricky.  Further, there are some plans that never make it into the plan cache (e.g. trivial plans or those with the RECOMPILE hint).  This is where Query Store shines.  EVERY query that executes can be captured in Query Store, even if it’s trivial or has the RECOMPILE hint.  Further, execution frequency and resource use is aggregated over defined intervals (e.g. 15 minutes, 30 minutes), as determined by the INTERVAL_LENGTH_MINUTES setting.  You can look back over a small or large period of time to see more than just what query performance looked like, but also how many times queries executed.
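
As a rough illustration, this is the kind of query I run against the Query Store catalog views to surface the most frequently executed queries over a window of time; the TOP value and the 24-hour window are arbitrary, and avg_duration is reported in microseconds:

SELECT TOP (25)
   [qsq].[query_id],
   [qst].[query_sql_text],
   SUM([rs].[count_executions]) AS [total_executions],
   AVG([rs].[avg_duration]) AS [avg_duration_us]  /* simple average across intervals */
FROM [sys].[query_store_query] [qsq]
JOIN [sys].[query_store_query_text] [qst]
   ON [qsq].[query_text_id] = [qst].[query_text_id]
JOIN [sys].[query_store_plan] [qsp]
   ON [qsq].[query_id] = [qsp].[query_id]
JOIN [sys].[query_store_runtime_stats] [rs]
   ON [qsp].[plan_id] = [rs].[plan_id]
JOIN [sys].[query_store_runtime_stats_interval] [rsi]
   ON [rs].[runtime_stats_interval_id] = [rsi].[runtime_stats_interval_id]
WHERE [rsi].[start_time] >= DATEADD(HOUR, -24, GETUTCDATE())
GROUP BY [qsq].[query_id], [qst].[query_sql_text]
ORDER BY [total_executions] DESC;
GO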

Several hours later, after a typical workload where users said performance was “slow”, we looked at the Query Store data.  In terms of high resource queries, there were a few (e.g. lots of reads, or lots of CPU), but the execution frequency was low for a lot of those heavy hitters.  Then we looked at queries executing most frequently – the death by a thousand cuts scenario – and here we could see that there were thousands of queries executing in an hour; while an individual query might not take long to execute or use a lot of resources, cumulatively it all added up.

Queries with high execution count in Query Store

In digging into some of those queries I quickly noticed that almost every plan had a missing index recommendation.  I queried sys.indexes and found that almost 95% of the tables in the database had two indexes or fewer.  It’s rare to find a database that is under-indexed, but this was one of those cases.  Their problem?  Their workload volume and their data are slowly but steadily increasing.  They have some big tables that are missing indexes and causing scans, and even though all that data is sitting in memory (because they have enough server memory to hold the entire database), they are using a lot of CPU and time to roll through that data.  I could have determined this with the information in the plan cache, but I would have had to set up a job to capture it on a regular basis and then write some queries to do analysis against it.  With Query Store, I just had to enable it, let it capture the data, then use the UI to look at performance.

Scenario 2

A new customer engaged with us after upgrading to SQL Server 2017.  They had some extreme cases of variability in query performance – the system would be running fine and then all of a sudden performance would tank.  They would knee-jerk and free the plan cache, then suddenly things would be great again.  A couple hours later, the same problem.  Rinse and repeat, rinse and repeat.  We had them enable Query Store and within a couple hours we took a look at the data (letting them continue with the practice of freeing the procedure cache when there was a problem).  Within the Query Store UI I looked at CPU use for that window of time, and used the data grid to sort the output to view queries with multiple plans first.  There it was…they had a select number of queries that had multiple plans with huge variations due to different input parameters – queries that were parameter sensitive.

Viewing multiple plans for the same query in Query Store

We used Query Store to force the optimal plan and thus stabilize performance, then we looked at the query itself and the different plans being generated.  Within an hour we determined one code change and an index that would generate a consistent plan.  After testing these changes and implementing them, we unforced the plan, confirmed that the new plan we wanted was consistently used, and moved on to the next challenge.
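
If you haven’t used plan forcing before, the calls themselves are simple; a minimal sketch, where the query_id and plan_id values are illustrative:

/* Force the plan we want while the code/index fix is developed and tested */
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 112;
GO

/* Once the fix is deployed and the desired plan is used consistently, remove the force */
EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 112;
GO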

Summary

If you’re running SQL Server 2016 or higher, I highly recommend enabling Query Store…even if you have a third party monitoring tool.  Don’t get me wrong, those tools are great and track a ton of information that Query Store doesn’t.  But those tools don’t capture EVERY query, nor do they capture query metrics all the time.  Lastly, they don’t provide the ability to force plans.  You get all that with Query Store, and more :)

If you’re interested in learning more about Query Store from the ground up, you’re in luck!  In two weeks I’m hosting a live, online Immersion Event for Query Store that you can attend from the comfort of your home, or your desk at work :)  You can find a course description and registration details here.  I hope you’re able to attend!

Query Store Requests

In early January Microsoft announced that Connect, the method for filing SQL Server bugs and feature requests, was being retired.  It was replaced by UserVoice, and any bugs/requests were ported over.  Sadly, the votes from Connect did not come across to UserVoice, so I went through and found all the Query Store requests, which are listed below.  If you could please take the time to up-vote them, that would be fantastic.  If you could also take time to write about why this would help your business, help you upgrade, or help you purchase more SQL Server licenses, that is even better.  It helps the product team immensely to understand how this feature/fix/functionality helps you and your company, so taking 5 minutes to write about that is important.

Two things: first, I’ve listed these in order of priority to me, but of course your priorities may be different!  :)  Second, if I have missed a bug/request, please feel free to email me or comment so I can add it to this list.  Thanks!

  1. Enable Query Store for collection on a read-only replica in an Availability Group
  2. Indicate “morally equivalent” forced plan where use_plan = true but is_forced = 0
    1. Kendra Little wrote a post explaining this behavior: Forced Plan Confusion: Is_Forced vs. Use Plan = True
  3. Query store create database
    1. Check out Andy Mallon’s blog post about discovering this issue: Deadlock when creating a database – A Query Store bug
  4. Option to store query store data in a filegroup other than PRIMARY
  5. Add or link views’ execution plan to a query execution plan
  6. SQL Server Management Studio 17.3 (14.0.17199.0) returns incorrect syntax on WAIT_STATS_CAPTURE_MODE
  7. QDS_LOADDB Wait type blocks all queries
    1. You can use trace flag 7752 to get around this, see Query Store Trace Flags
  8. Query Store SELECT Performance
  9. “Edit Query Text” query text doesn’t match the actual query in 2017 RTM
  10. Add an Extended Events action for query_id
  11. Max Plan per query missing from Query Store properties in SSMS 2016
    1. Still an issue in SSMS 17.5
  12. [SQL Server 2016 CTP 2.4 – Query Store] Functionality to collect the information of queries which are not finished
  13. Query Store – Add Actual memory Grant Used From sys.dm_exec_query_stats
    1. In SQL Server 2017, the data captured in sys.query_store_runtime_stats includes:
      1. avg_query_max_used_memory
      2. last_query_max_used_memory
      3. min_query_max_used_memory
      4. max_query_max_used_memory
      5. stdev_query_max_used_memory

      This data represents memory used for the query, not the memory grant, but if I were interested in using Resource Governor to create pools with memory limits for queries, this is pretty good information from which to work.

  14. Bugbash: Enable Query Store for tempdb
  15. Add Query Store to Model for Auto Configuration on New Databases
    1. As far as I know, this works.  If you enable Query Store for model (as shown in the sketch below), it will be enabled by default for new databases you create (unless you otherwise specify), see SQL Server Query Store: Default Settings.
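
For reference, a minimal sketch of enabling it on model with the default options; new databases created afterwards inherit the setting unless the CREATE DATABASE statement specifies otherwise:

ALTER DATABASE [model] SET QUERY_STORE = ON;
GO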

 

Query Store Fix in SQL Server 2017

There is a Query Store fix in the latest Cumulative Updates of SQL Server 2017, and if you are keeping current with the latest cumulative updates, there’s a script you need to run if you’ve installed CU2 at any point.

First, if you are still on CU2, you might want to consider upgrading to CU4 (released February 17, 2018).

Second, if you previously had CU2 installed (and are now on CU3 or CU4), you want to make sure that you’ve run the script included with CU3 and CU4 release notes.

The script removes plans from Query Store that were captured when running CU2.  If you want to know for certain whether your database is affected before you execute the script, you can run the following query against the databases with Query Store enabled:

/* execute against EACH database that has Query Store enabled */
SELECT COUNT([plan_id])
FROM [sys].[query_store_plan]
WHERE [engine_version] = '14.0.3008.27';
GO

As an aside, if you aren’t using Query Store, notice that one of the things it captures for the plan is SQL Server version…how cool is that in terms of testing, upgrading, and troubleshooting?

If you have plans from CU2, they need to be removed from Query Store, and this is done in the script using the sys.sp_query_store_remove_plan stored procedure.  Understand that in addition to removing the query plan, it will also remove the runtime stats tied to that plan from Query Store.  But, more important than that, if that plan was forced for a query, it will be un-forced before it is removed.
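
For reference, removing a single plan manually is one call (the plan_id value here is illustrative); this is what the script from Microsoft does for each CU2-era plan it finds, with the same side effects described above:

EXEC sys.sp_query_store_remove_plan @plan_id = 3;
GO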

Therefore, before you run the script from Microsoft, I strongly recommend you not just check to see if you have plans from CU2, but you also look to see if any of those are forced:

/* execute against EACH database that has Query Store enabled */
SELECT
   [p].[query_id],
   [p].[plan_id],
   CASE
      WHEN [q].[object_id] = 0 THEN 'Ad-hoc'
      ELSE OBJECT_NAME([q].[object_id])
   END AS [Object],
   [qt].[query_sql_text],
   [q].*, TRY_CONVERT(XML, [p].[query_plan]) AS [QueryPlan_XML]
FROM [sys].[query_store_plan]  [p]
JOIN [sys].[query_store_query] [q]
   ON [p].[query_id] = [q].[query_id]
JOIN [sys].[query_store_query_text] [qt]
   ON [q].[query_text_id] = [qt].[query_text_id]
WHERE  [p].[engine_version] = '14.0.3008.27'
   AND [p].[is_forced_plan] = 1;
GO

This script will list any queries that have forced plans, and if they are part of an object (e.g. stored procedure) it will also list the object name.  If no rows return from this query, then you don’t have any forced plans which are affected and you can run the script from Microsoft.

If you do have any queries with forced plans, I recommend that you save a copy of the forced plan as a .sqlplan file, so that you have documentation of what plan was forced.  There are two ways to do this:

  1. Within the Query Store page for the user database in SSMS, run the Queries with Forced Plans report. In the grid, sort the list by query_id.  For each query_id identified by the query above, select the forced plan, then right-click on it and save.
  2. Within the Query Store page for the user database in SSMS, open the Tracked Queries report. Individually enter each query_id identified by the query above, select the forced plan, then right-click on it and save.

Once you have saved off a copy of every forced plan, then you can execute the script from Microsoft.

Note: If you are using Automatic Plan Correction, this will also un-force those plans (which would have been forced automatically).

At this point, queries that previously had a forced plan may have performance issues.  It’s possible that changes in your data distribution, changes in statistics, or perhaps even changes in the optimizer have caused a different plan to be generated that might be acceptable performance-wise.  If that is not the case, understand that there is no ability to import plans into Query Store.  Each query that had a forced plan and doesn’t get a new, “good” plan may need to be run again to get the same plan back into Query Store.  One of the easiest things to do is to find the compiled values within the plan using the ParameterCompiledValue attribute, then re-run the query using those values.  You can then use data from Query Store, and compare against the saved plan, to verify it’s a consistent/stable plan.
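
If you would rather not open each saved .sqlplan file by hand, here is a minimal sketch that pulls the compiled parameter values out of the plan XML in Query Store; run it before you execute the script from Microsoft (the plan_id value is illustrative):

;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
   [p].[plan_id],
   [prm].[n].value('@Column', 'NVARCHAR(128)') AS [ParameterName],
   [prm].[n].value('@ParameterCompiledValue', 'NVARCHAR(MAX)') AS [CompiledValue]
FROM [sys].[query_store_plan] [p]
CROSS APPLY (SELECT TRY_CONVERT(XML, [p].[query_plan]) AS [plan_xml]) [x]
CROSS APPLY [x].[plan_xml].nodes('//ParameterList/ColumnReference') [prm]([n])
WHERE [p].[plan_id] = 3;  /* the forced plan you saved */
GO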

Finding input parameters from the query plan

I definitely recommend updating to the current Cumulative Update (and this is a general recommendation, not just because of the issue I’ve discussed here), and part of preparing for that upgrade means checking to see if you’re affected by this issue, and addressing it as part of the upgrade process, rather than down the road.

T-SQL Tuesday #99: Life Outside SQL Server

T-SQL Tuesday

T-SQL Tuesday is a blog party held the second Tuesday of each month.

I am late to the party (I am often late to the party, just ask Aaron Bertrand, this month’s host).  I chose door #1 for this month’s T-SQL Tuesday Dealer’s Choice party.

If you follow me on Twitter, you know I’m a runner.  You might also know that I like spin.  And maybe you know that I’m a spin instructor at Psycle, a studio here in Ohio?

The bike...

My regular class is on Tuesday mornings, but I sub for other instructors when my schedule allows.  Teaching spin is big in some places – like Soul Cycle which some say started the whole spin craze – to the point where instructing is a full-time job and it includes benefits.

I’m not in it for the money.

The extra cash is nice, but what I love is watching people evolve, and get better.

I’m a coach at heart.

I have found that I love not thinking about how I’m doing, but rather thinking about how to get someone else to do their best.

The studio...

This is why I teach

Both spin, and SQL Server.  Off the bike my favorite course is IE0, for the Accidental/Junior DBA.  The first day everyone is pretty quiet – they don’t want to share what they don’t know.  By the last day everyone has realized that no one is an expert, and they will tell stories and ask the fun “why” questions.

This happens in spin.

People show up the first ride and they are afraid they’ll be the worst rider in the room.  A few rides later and they’re asking questions about their bike set up, or sharing how they’re feeling about class.  Weeks, months, even years go by and you see those individuals get stronger, faster, more efficient…and in some cases become instructors themselves.

Rock Hall Ride (thanks Colleen!)

That happened to me.

I started riding to become a stronger runner, but I found that I loved the dark room, loud music, and sanctity of the spin studio just as much as I loved the fresh air, sunshine, and the sound of my feet running.  It is, at our studio, therapy on a bike.  Some may scoff at the notion, others attend and find it isn’t their thing.  And that’s cool.  You do you, you find your thing and go all in, and spread that light wherever you can.  But if you find yourself in Cleveland and you’re up for a ride, let me know :)

Query Store and “in memory”

If you’ve ever been to one of my sessions you know that I really like demos.  I find they can illustrate how things work extremely well and I use them to complement an explanation.  I’m a very visual learner so it also helps me to understand and then explain a concept.  The drawback of demos is that they sometimes fail, partly or entirely.  I really work to make mine as infallible as possible, but every so often I run into a problem.

When I was working on Query Store demos, one thing I noticed is that sometimes the data from the runtime stats system view seems to be duplicated.  And this happens in my Pluralsight course on Query Store and Automatic Tuning, funny enough.  After restoring a database and stepping through a blocking scenario, when I query the runtime and wait stats with the query below, I get the following output:

SELECT
   [rs].[runtime_stats_interval_id],
   [rsi].[start_time] AS [IntervalStartTime],
   [rsi].[end_time] AS [IntervalEndTime],
   [qsq].[query_id],
   [qst].[query_sql_text],
   [qsp].[plan_id],
   [rs].[count_executions],
   [rs].[avg_duration],
   [rs].[avg_logical_io_reads],
   [ws].[wait_category_desc],
   [ws].[total_query_wait_time_ms]
FROM [sys].[query_store_query] [qsq]
JOIN [sys].[query_store_query_text] [qst]
   ON [qsq].[query_text_id] = [qst].[query_text_id]
JOIN [sys].[query_store_plan] [qsp]
   ON [qsq].[query_id] = [qsp].[query_id]
JOIN [sys].[query_store_runtime_stats] [rs]
   ON [qsp].[plan_id] = [rs].[plan_id]
JOIN [sys].[query_store_runtime_stats_interval] [rsi]
   ON [rs].[runtime_stats_interval_id] = [rsi].[runtime_stats_interval_id]
JOIN [sys].[query_store_wait_stats] [ws]
   ON [qsp].[plan_id] = [ws].[plan_id]
   AND [rsi].[runtime_stats_interval_id] = [ws].[runtime_stats_interval_id]
WHERE [qst].[query_sql_text] LIKE '%UPDATE%'
   AND [rs].[execution_type] = 0
ORDER BY [rsi].[start_time] ASC, [ws].[total_query_wait_time_ms] DESC;
GO

 

Query Store Runtime and Wait Statistics

You can see that there are two rows for the same query_id and plan_id, but the count_executions is different, as are the avg_duration and avg_logical_io_reads values.  The data is not truly a duplicate.  This behavior occurs because the Query Store data is stored in memory before it is flushed to disk, and when you query the data SQL Server pulls it from both locations (and does a UNION ALL) to display it in the output.  If I waited a bit and ran the query again, the two rows for that interval would probably be consolidated into one, most likely because the in-memory data had been flushed to disk.  You can force the data to be flushed manually using sp_query_store_flush_db:

EXEC sp_query_store_flush_db;
GO

After I execute that stored procedure, when I query the runtime and wait stats data again, the output is only one line:

Query Store Runtime and Wait Statistics After SP Flush

The Query Store data is held in memory as an optimization – if the runtime stats had to be updated on disk every time a query executed, the amount of I/O could easily overload a high-volume system.  Therefore, data is stored in memory and flushed to disk based on the DATA_FLUSH_INTERVAL_SECONDS setting for Query Store, which defaults to 15 minutes.  I recommend reading How Query Store Collects Data for more detail (and a nice visual).  It’s important to understand that even though I keep writing “in memory” and the documentation states that the sp_query_store_flush_db procedure “flushes the in-memory portion of the Query Store data to disk”, this data does not reside in In-Memory OLTP structures.

Note: You can use Query Store to capture data for Natively Compiled stored procedures that access In-Memory OLTP objects, but you need to enable the collection of execution statistics using sys.sp_xtp_control_proc_exec_stats, as described in Microsoft Docs.
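
For reference, a minimal sketch of turning that collection on (it is off by default because of the overhead it adds):

EXEC [sys].[sp_xtp_control_proc_exec_stats] @new_collection_value = 1;
GO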

If you’re using Query Store and you’ve run into similar behavior, hopefully this explains what you’ve seen and why, and you know how to address it going forward, if it presents an issue for some reason.

Lastly, just yesterday the Tiger Team released some code you may need to run if you’re using Query Store and you’ve upgraded to SQL Server 2017 CU3.  Please take a minute to review in case it applies to you!

Query Store Settings

In SQL Server 2017 there are nine (9) settings related to Query Store.  While these are documented in sys.database_query_store_options, I often get asked what the value for each setting “should” be.  I’ve listed out each setting below, along with the default value and considerations around changing the setting.

 

OPERATION_MODE

The default value for a new or upgraded database in SQL Server 2016 or SQL Server 2017 is OFF.  For Azure SQL Database, the default value is READ_WRITE.

If you want to enable Query Store, this needs to be set to READ_WRITE, which is the desired state.

You also have the option of READ_ONLY, whereby new queries, new plans, runtime statistics, and wait statistics (in SQL Server 2017) will not be captured, but any forced plans will still be forced.  This state can occur if you reach the MAX_STORAGE_SIZE_MB limit (see below).  You can check actual state against desired state using the query below:

SELECT [actual_state_desc], [desired_state_desc]
FROM [sys].[database_query_store_options];
GO

It’s recommended to always run in a READ_WRITE state.  I have heard of some environments which switch between READ_WRITE and READ_ONLY.  If you want to understand your workload and have the data needed to troubleshoot performance issues, you need to be capturing information on a continual basis.
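
Enabling Query Store, or getting back to READ_WRITE if it has flipped to READ_ONLY, is a single statement; a minimal sketch with a placeholder database name:

ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
GO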

 

QUERY_CAPTURE_MODE

The default value for SQL Server 2016 and SQL Server 2017 is ALL.  For Azure SQL Database, the default value is AUTO.

With AUTO, queries that are insignificant from a resource utilization perspective, or executed infrequently, are not captured.  If you need to capture queries that may only execute a few times, or those that use very few resources, then use ALL.  Otherwise, use AUTO, as this will capture the relevant majority of your workload.

There is a third option, NONE, where no new queries are captured.  Runtime and wait statistics will continue to be captured for queries that are already in Query Store.

I recommend setting this option to AUTO, as the number of queries in your environment that need tuning/your attention is a small percentage of the total number of queries that execute.  You won’t miss out on important data if you exclude queries that don’t use a lot of resources or don’t execute very often.

 

MAX_PLANS_PER_QUERY

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 200.

This setting is an integer, so theoretically you can set it to 2,147,483,647!  If you don’t know how many distinct plans you might have for a query, you can use sys.dm_exec_query_stats and get a count of distinct query_plan_hash values for a given query_hash:

SELECT [query_hash], COUNT (DISTINCT [query_plan_hash])
FROM [sys].[dm_exec_query_stats]
GROUP BY [query_hash]
ORDER BY 2 DESC;
GO

While I would like to believe that 200 distinct plans for a query is really high, I’ve talked to several DBAs who confirmed they had counts in the thousands.  Thus, you may need to increase this setting if you have queries that are unstable and generate a lot of different plans, and you want to capture each different plan.  Understand that a workload with a large number of plans for a query will require more space, hence the limitation.  You can set the limit lower than the possible number of plans to control the size, with the understanding that you won’t capture every plan variation.  The value of 200 is a good starting point for most environments.

 

MAX_STORAGE_SIZE_MB

For SQL Server 2016 and SQL Server 2017 the default value is 100MB.  For Azure SQL Database, the default value is specific to the tier (Basic = 10MB, Standard = 100MB, Premium = 1GB).

The Query Store data is stored in internal tables in the user database (in the PRIMARY filegroup, like other system tables) and exposed through catalog views.  You can configure how much disk space can be used by Query Store.

This setting should be increased for an on-premises solution.  It may need to be increased for Azure SQL Database as well; there are multiple factors that affect how much space you will need for Query Store data.  These factors are:

  • The value for QUERY_CAPTURE_MODE; if you’re capturing ALL queries, you will have more information than if using AUTO.  The amount of data is difficult to predict – it depends on your workload (Do you have a lot of queries that run just one time?  Do you have a lot of queries that use very little resources?).
  • The length of time you retain data in Query Store (CLEANUP_POLICY).  The more data you keep, the more space you will need.
  • Whether you’re running SQL Server 2017 and capturing wait statistics information (WAIT_STATS_CAPTURE_MODE).  The wait statistics information is extremely valuable, but it is more data to keep and retain.
  • The value for INTERVAL_LENGTH_MINUTES.  The lower this value, the more runtime statistics data you will have and thus you will need more space.
  • Type of workload.  If you have an ad-hoc workload with high variation in query text, then you will have more individual queries stored, and thus more plans and more runtime and wait statistics to go with them.  If you have a stable workload that does not have ad-hoc queries (or queries generated by dynamic strings or ORM tools like NHibernate or Entity Framework), then you will have fewer queries and less data overall.

As you can see, there is no “answer” for what the value for MAX_STORAGE_SIZE_MB should be.  I recommend starting with 2GB allocated, and then monitor via sys.database_query_store_options and Extended Events.  For some solutions, 1GB is plenty.  For other solutions, you may need 5GB or more.
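
A minimal sketch of the kind of check I mean, comparing space used against the configured limit:

SELECT
   [actual_state_desc],
   [current_storage_size_mb],
   [max_storage_size_mb]
FROM [sys].[database_query_store_options];
GO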

 

CLEANUP_POLICY (STALE_QUERY_THRESHOLD_DAYS)

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 30, with the exception of the Basic tier for Azure SQL Database, which defaults to 7 days.

How much historical data do you want to keep?  If you’re a shop that develops in production, you might want to keep more history.  If your workload is pretty stable and you only roll out changes quarterly or less frequently, 30 days may be enough information for you.  The more data that you retain, the more disk space you will need.  If you’re not certain about your workload, I recommend starting with at least 30 days for this setting, and over the first couple months of use you’ll figure out if you want to keep older data.

 

SIZE_BASED_CLEANUP_MODE

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is AUTO, and I recommend leaving it as such.

With a value of AUTO, as Query Store gets close to the storage size allocated by MAX_STORAGE_SIZE_MB it will automatically purge out the oldest data to make sure there is enough space for new data.  There is a possibility for data that has not reached the CLEANUP_POLICY to be removed (e.g. if MAX_STORAGE_SIZE_MB is 2GB and CLEANUP_POLICY is 30 days, and you reach 2GB in 15 days, data will start to be removed).

You can set this to OFF, but in that scenario, if the MAX_STORAGE_SIZE_MB is reached the OPERATION_MODE will change to READ_ONLY and you will no longer capture new data.  It is recommended to leave this set to AUTO and adjust MAX_STORAGE_SIZE_MB as appropriate.

 

DATA_FLUSH_INTERVAL_SECONDS

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 900 (15 minutes).

It is recommended to leave this value at the default.

 

INTERVAL_LENGTH_MINUTES

The default value for SQL Server 2016, SQL Server 2017, and Azure SQL Database is 60.

This is a critical setting, as it determines the window of time across which runtime statistics will be aggregated.  You can only select fixed values for this setting (1, 5, 10, 15, 30, 60, 1440).  The smaller this value, the smaller the window of time for which you will have runtime stats.  This will allow you to look at data at a more granular level.  However, the smaller the value the more data you will capture and thus the more space that is needed.

For the client environments that I support, I’ve set this to 30, as I like a smaller window of time for analysis and based on the performance issues I’ve had to troubleshoot thus far, that’s been a good window.  If you have space constraints or concerns, then leave it at the default of 60.

 

WAIT_STATS_CAPTURE_MODE

The default value for SQL Server 2017 and Azure SQL Database is ON (this option was added in SQL Server 2017, so it does not exist in SQL Server 2016).

If you upgrade a database which has Query Store enabled from SQL Server 2016 to SQL Server 2017, the WAIT_STATS_CAPTURE_MODE will be enabled on upgrade.  If you have a database on SQL Server 2017 and enable Query Store, this option will be enabled.

I recommend enabling this option if you’re on SQL Server 2017 as this information can be extremely valuable when troubleshooting query performance.  Note that you may need to increase MAX_STORAGE_SIZE_MB to accommodate this additional data.
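
Pulling the recommendations above together, here is a minimal sketch of a full configuration; the database name is a placeholder, the values are starting points to adjust for your environment, and WAIT_STATS_CAPTURE_MODE requires SQL Server 2017:

ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
   (
      OPERATION_MODE = READ_WRITE,
      QUERY_CAPTURE_MODE = AUTO,
      MAX_PLANS_PER_QUERY = 200,
      MAX_STORAGE_SIZE_MB = 2048,
      CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
      SIZE_BASED_CLEANUP_MODE = AUTO,
      DATA_FLUSH_INTERVAL_SECONDS = 900,
      INTERVAL_LENGTH_MINUTES = 30,
      WAIT_STATS_CAPTURE_MODE = ON
   );
GO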

Query Store and Availability Groups

Last week at the PASS Summit I presented a pre-con and general session on Query Store. I had several questions related to Query Store and Availability Groups, so I thought I’d pull them all together and answer them here.

 

Q: What happens to the Query Store data when a database is in an Availability Group?

A: You can enable Query Store for any user database, including one in an AG, and since that data is stored in internal tables in the database, it exists in the replica copies as well (just like other system tables and user tables).

 

Q: Can you enable Query Store for a read-only replica?

A: No.  Because the replica is read-only, and Query Store inherently writes data TO the database, you cannot enable it to capture queries that are executed against that read-only copy.  I did create a Connect item (now on UserVoice) for this request.  If this is of interest to you, please up-vote it: Enable Query Store for collection on a read-only replica in an Availability Group.  The more votes this has, the better the possibility that Microsoft will implement it, so feel free to share with your friends and have them vote too!

*Update March 8, 2018: Connect has ended and all requests were ported to UserVoice…and your votes are still needed.  Please vote when you have a chance!

 

Q: Can you force a plan for a query on a read-only replica if you force it first for that query on the primary?

A: No.  With a read-only database this is not possible.

 

Q: Can you create a separate user database on the secondary, enable Query Store for it, and then issue queries against the read-only replica through that user database and have them captured in Query Store?

A: No.  (But it’s a really creative idea.)

 

Have other questions related to Query Store and AGs?  Let me know!  Happy to answer them.

PASS Summit 2017: Day 2

Day 2 is starting here at PASS Summit in Seattle – watch this space for updates the next hour and a half!

Grant Fritchey is kicking off day 2!  I’m a big fan of Grant.  He talks about the community and how people should take advantage of what’s available here at Summit.  I couldn’t agree more – I wouldn’t be where I am in my career without this conference, nor would I have the friends I do all over the world.  Grant hands it over to Denise McInerney, VP of Marketing, and Denise shares her journey within PASS (she’s a 6 year Board Member).  Denise continues driving the point about the value of this conference and PASS.  She then talks about the volunteers for PASS, and announces the winner of the PASSion Award.  This year’s winner is Roberto Fonseca.

Denise is talking about the survey feedback from the past couple years, and session evaluations at PASS.  *PLEASE* complete these by the end of next week (if not earlier) – both PASS and the speakers truly value this feedback.  If you want to provide additional feedback, there is a Board Q&A tomorrow (Friday) at 2 PM.  Today is the WIT lunch, and Denise announces that next year’s Summit is November 6 – November 9, 2018.  Denise introduces Rimma Nehme, a Group Product Manager/Architect for Azure Cosmos DB and Azure HDInsight at Microsoft.  Today’s keynote is going to uncover CosmosDB.  It will be technical!  Let’s go!!

Do we need another database?

This is the wrong question to ask.  If you look at the rising data problems and needs, most of the production systems in use today (particularly the ones designed in the 70s and 80s) weren’t built to address them, and modern databases need to.  90% of the world’s data was created in the last 2 years alone.  The estimated growth in the next 3-5 years is 50x.  Another trend is that data is global, and another is that data is big.  Not just a few TB, but companies processing hundreds of TB to petabytes.  Every 60 seconds 200 million emails are generated.  Rimma is throwing out ridiculous numbers about the amount of data being generated.  I can’t keep up!

Data is also interconnected.  What you do in Seattle can be connected to another location in the world.  This is the butterfly effect.  We are experiencing about 125 exabytes of data (that’s a lot of zeroes).  Companies are looking at ways of extracting and monetizing that information.  Another trend is that the balance of power continues to shift from structured to unstructured data.  About 80% of data originates as unstructured data.  Never push the data to computation – push the computation to the data.

When dealing with distributed data, you need to deal with a lot of differences.  For example, different architectures.  In 2010 an engineer inside Microsoft observed this and identified that we need a different architecture to deal with these fundamental differences in data at scale.  This is how Project Florence was born, which is the basis of what is now CosmosDB.  It was one of the exemplary partnerships between Microsoft Research and the Azure Data team.

At the time they were working to address the data problems of large-scale applications within Microsoft (e.g. Xbox).  They tried the “earthly databases”, building something on their own, and these options weren’t working.  Hence Project Florence to meet the internal needs.  A basic set of requirements was laid out:

  1. Turnkey global distribution
  2. Guaranteed low latency at the 99th percentile, worldwide
  3. Guaranteed HA within region and globally
  4. Guaranteed consistency
  5. Elastically scale throughput and storage, at any time, on demand, and globally
  6. Comprehensive SLAs (availability, latency, throughput, consistency)
  7. Operate at low cost (this is crucial and important! – first customers were Microsoft departments)
  8. Iterate and query without worrying about schemas and index management (applications evolve over time and rapidly)
  9. Provide a variety of data model and API choices

This manifests into three principles that have evolved:

  1. Global distribution from the ground up
  2. Fully resource-governed stack
  3. Schema-agnostic service

It is very hard to build any service (particularly with such requirements).

If it was easy, everyone would do it (via NASA).  So this is how CosmosDB was built.  This is used internally by Microsoft.  It is one of the fastest services in the cloud.  It is a ring-0 service, meaning it is available in all regions by default.  It is millions of lines of C++ code.  It is 7 years in the making; it is truly not a new service.  Here is what it looks like (a bit of marketing here).

The foundations of a globally distributed, massively scalable, multi-model database service are:

  1. comprehensive SLAs
  2. five well-defined consistency models
  3. guaranteed low latency at the 99th percentile
  4. elastic scale-out of storage and throughput
  5. and…

Fine-grained multi-tenancy.  This cannot be an afterthought.  From left to right, you can take a physical resource like a cluster and dedicate it to a single tenant (e.g. customer or database).  You can take an entire machine and dedicate it.  You can go another step and dedicate a machine to homogeneous tenants.  The final level of granularity is taking that machine and dividing it between heterogeneous tenants while still providing performance and scalability.

In terms of global distribution, Azure has 42 regions worldwide…36 are available, 6 are still being built out.  You can span your CosmosDB across all of those regions.

Within a database account you have a database.  Within that you have users and permissions.  Within that, CosmosDB has containers: a container holds data with a particular data model.  Below that is other user-defined code.  The database may span multiple clusters and regions and you can scale it in terms of these containers.  It is designed to scale throughput and storage INDEPENDENTLY.  How is the system designed behind the scenes (10K-foot view)?  Within regions there are data centers, within data centers there are stamps, within those there are fault domains, within those there are containers, and within those the replicas.  Within the replicas is the data.  In the database engine is where the secret sauce comes in – Bw-tree indexes, resource manager, log manager, IO manager, etc.  On any cluster you will see hundreds or thousands of tenants.  They need to make sure that none of the tenants are noisy.

Another tenet that is important is the concept of partitioning.  How does CosmosDB solve this?  The tenants create containers of data and behind the scenes these are partitions.  Each partition is comprised of 4 replicas.  This is consistent and reliable.  Each one is a smart construct.  Out of those partitions, you can create partition sets.  These can then span clusters, federations, data centers, regions.  You can overlay topologies to implement solutions that span multiple regions across the planet.  You need to make sure that the applications then work really well (particularly when partition sets are merged or split).  You have the partition, which is a data block, and then you can build partition sets of various topologies.

What are some of the best practices?

  1. Always select a partition key that provides even distribution
  2. Use a location-aware partition key for local access
  3. Select a partition key that can be a transaction scope
  4. Don’t use a timestamp as the partition key for write-heavy workloads

The resource model summary: resources, resource model, partitioning model.

Core capability: turnkey global distribution – this is adding regions with a click.  You can come to the Azure portal, see a map of the entire world, and pick the regions where you want your data to be.  The data is replicated behind the scenes and then it’s available for access.  You’re not dealing with VMs or cores.  You can add and remove regions at any time and the application does not need to be re-deployed.  The connection between application and database is logical, enabled by a multi-homing capability in the API, rather than a physical connection to a specific endpoint.  Another capability is that you can associate priorities with each of the regions.  If there is an outage or failover in a region, the failover will occur in the order of priority, and that can be changed at any time.  Something added for customers is the ability to simulate a regional outage (but don’t go crazy with this, says Rimma!).  You can test HA of the entire application stack.

Another capability is being able to provide geo-fencing.  In other parts of the world there can be regulations where data has to be present in particular regions, so if data needs to stay within a location for compliance, that capability is required.

How does AlwaysOn work?  By virtue of partitioning you have multiple locations.  If one replica goes down, the application will be unaffected.  If a partition goes down, the application will go to a partition in another region.  If an entire region goes down, the application will go to another region.  The data is always available.

Another area of interest is active writers and active readers in any region.  Right now this is provided turnkey at the database level, but they are looking to push it down to the partition key level (a true active-active topology).  Online backups are available; they are stored in Azure blob storage.  The key point is that backups are intended for “oops, I deleted my data” scenarios, not for a data center going down (that’s why you have multiple regions).

The second capability is elastic scale-out.  You scale data size and throughput independently.  You could start out with a small amount of data and keep adding more and more; the back end will seamlessly scale.  Transactional data tends to be small, web and content data is medium sized, and social/machine-generated data is much larger.  As data size or throughput grows, scaling occurs seamlessly behind the scenes.  This is done with SSDs behind the scenes.

Resource governance is the next component.  As operations occur, they consume RUs (Request Units).  You provision the RUs that you need (how many transactions/sec do you need?).  All replicas (just a partitioning of data) get a certain budget of RUs.  If you exceed it, you’ll get rate limited.  At any time you can increase provisioned throughput and then support more transactions/sec.  You can also decrease it at any time.

An RU is a rate-based currency, partitioned at a granularity of 1 second and normalized across DB operations.  The operations (e.g. scans, lookups, etc.) are costed via machine learning pipelines: they have run machine learning models on telemetry data and then calibrated the RU cost model accordingly (DATA DRIVEN).  Back to the partitioning model: at any time you can specify the throughput (RUs) you want.  Behind the scenes re-partitioning will occur, and each partition will get more RUs to provide the throughput asked for.  This is where splitting/merging partitions matters, but it happens behind the scenes and you don’t have to worry about it.

What about when you add regions?  You want to add more RUs so you don’t starve existing regions.  Those RUs are spread across all partitions and regions.  Rimma shows how one customer elastically provisioned resources during the holiday season to size up to meet demand, and then size down when no longer needed.  Rimma shows a graph of RUs over a 3-day period.  At the top end there are 3 trillion RUs.  (THREE TRILLION IN THREE DAYS, PEOPLE)

Various nodes have varying numbers of RUs serving different workloads, and you can look at the different tenants and partitions in there.  Multi-tenancy and global distribution at that level is incredible.

Another tenet: guaranteed low latency at the 99th percentile.  This was a core requirement because time is money – from the business point of view, twice as much revenue is lost to slowdowns.  So the system is designed for it.  At the 99th percentile, less than 10ms for reads measured at a 1KB document (which is 80-90% of the workload).  On average, you will observe lower latency (less than 2ms for reads and 6ms for writes).  How is this accomplished?  Reads and writes are done from the local region and from SSDs.  The database is designed to be write optimized, using a latch-free database engine.  All data is indexed by default.  This is a fundamental difference from relational databases; here we have automatically indexed SSD storage.  Customer example: application in California and data in the Far East.  They added another region and latency dropped.  Over Black Friday/Cyber Monday, latencies were less than 10ms for reads and 15ms for writes.  Azure Cosmos DB allows you to beat the speed of light.  Say you have a database in Chicago and friends in Paris who want to read your data.  If this was a central database they would request the data from Paris, and getting that data from Chicago to Paris takes 80-100ms.  With CosmosDB you get it in less than 10ms because of those regional locations.

The last tenet here is the consistency model in CosmosDB.  How does it go?  When you are dealing with any distributed system, whether databases or other systems, you are typically faced with the fundamental trade-off of latency, availability, consistency, and throughput.  With a centralized database, all requests go against the primary copy.  With global distribution and geo-replication, you get HA and low latency.  But what happens if one replica can’t communicate with the others and updates are being made?  What kind of consistency guarantees are made?  This can be a problem!  Do you need to wait for data to be synchronized before you serve it?  Do you want strong consistency or eventual consistency?  Do you want the red pill or the blue pill?  With a relational database you get perfect consistency; the data won’t be served until a quorum is in agreement.  The price there is latency.  On the other hand, the whole movement toward no consistency guarantees means low latency.  But real-world consistency is not a binary choice as just described.

What about something in between?  The research literature talks about the wild west of consistency models (not one or the other).  A recommended paper is Replicated Data Consistency Explained Through Baseball by Doug Terry, a Microsoft Research individual.  It uses real-world examples from baseball.  Depending on who you are in the game, you might get value out of different consistency models.  The engineers asked the question: can we pick out intermediate consistency models that are easy to configure, programmable, and present clear trade-offs?  Most real-life applications don’t fall into those two extremes.  Bounded staleness, session (monotonic reads and writes local to a geo location), and consistent prefix (when updates are applied, the prefixes are guaranteed to be consistent).

How is this accomplished?  They use TLA+ specifications to specify the consistency models.  If you don’t know TLA+, check out the video by Leslie Lamport, who is an important figure in how the system was designed.  Leslie is a Turing Award winner for the Paxos algorithm and a founding father of what is used in the stack.

They operationalized the five different consistency models and use telemetry to see how those models are used.  Only 3% use strong consistency, 4% use eventual, and 93% are using the three models in between.  Depending on the consistency model specified, more computational work might be needed, which requires more RUs.  You have to make trade-offs accordingly, and you can put a cost on this and decide what’s best.

Comprehensive SLAs…high availability SLAs are not enough.  Microsoft is in the service business, not just the software business.  Typically services don’t give any SLA, or only give one for HA.  When they approached this problem, they asked “What are all the things that developers and customers really care about?”  They care about performance, throughput, consistency, availability, and latency.  So this is the first service in the market that has published comprehensive SLAs backed by Microsoft financially.  If you come in and run a workload, you will get guaranteed performance, throughput, consistency, availability, and latency.  Availability is tracked at the tenant and partition level at 5-minute granularity.  Customers can see their runtime statistics against their SLA.

The service is also multi-model.  They wanted to enable native integration with different data models.  Behind the scenes it’s just the ARS model (atom-record-sequence); all models get translated to the ARS model.  That makes it very easy for the service to on-board other data models now and in the future.  If you want document and graph, you do not need two copies of the data; it can be handled by the same set of data.  This is a powerful combination: looking at data through different lenses.

Why a schema-agnostic approach?  Modern applications that are built in the cloud are not static.  You can start with one schema, then add more tables/new columns…you need a robust approach to handle these scenarios.  The object model is schema-free.  The data gets stored as-is.  How do you query this data?  Behind the scenes the data is ARS.  At global scale, dealing with index and schema management is a nonstarter.  In CosmosDB there is schema-agnostic indexing.  The index is a union of all the document trees, consolidated into one, keeping only the unique values.  All of this structure info is then normalized.  It is an inverted index, which gives optimal write performance.  It can identify where documents are located and then serve them up.  The index overhead in practice is pretty small.  There is a published paper, Schema-Agnostic Indexing with Azure Cosmos DB, go read it!

The Bw-tree in Azure Cosmos DB is highly concurrent, non-blocking, and optimized for SSDs.  It avoids random writes.  There are three layers: the B-tree, the cache, and the log-structured store (see the paper).  Rimma is going faster now.  I’m not kidding.  The Bw-tree is implemented as delta updates; there is a mapping table to the updates and updates are stored as deltas (sounds like an in-memory index structure?).

Rimma shows the architecture of query processing; there are different semantics but they share the same underlying components (compiler, optimizer, etc.).  The engine is very flexible and they expect that in the future it will host other run-times.  The multi-API approach allows native support for multiple APIs.  If you want to store data in the cloud but not re-write your app, you can.  There are more APIs coming in the future.  What does this strategy enable?

  • No recompilation needed
  • Better SLAs, lower TCO
  • Leverage the existing OSS tool-chain, ecosystem, and development/IT expertise
  • Lift and shift from on-premises to cloud
  • No vendor lock-in
  • Symmetric on-premises and cloud database development

All happy databases are alike, each unhappy database is unhappy in its own way (Kyle Kingsbury via Leo Tolstoy).

How do they run the service?  Weekly deployments worldwide.  16 hours of stress testing every day.  It’s like a magic factory across the globe.  They also run variant and invariant checks, and bots fix nodes that might have issues.  Consistency checking and reporting run over the data continually.

In conclusion…Rimma wanted to put herself in our shoes.  It’s a lot of information to digest…particularly if you’re not invested in this technology.  Why should you care?  Rimma brings up a quote from Darwin:

It is not the strongest species that survive, nor the most intelligent, but the ones most responsive to change.

You can try CosmosDB for free, no credit card info needed, etc.  Your childhood dream of going to the cosmos (space) will be fulfilled.

Key points to remember:

  • global distribution, horizontal partitioning and fine grained multi-tenancy cannot be an afterthought
  • schema agnostic database engine design is crucial for a globally distributed database
  • intermediate consistency models are extremely useful
  • globally distributed database must provide comprehensive SLAs beyond just HA

This is a hidden gem, but the bigger message: remember, the entire NoSQL movement is a counter-culture movement.  But the question is, how would we build databases if we started today?  Without the legacy that we know, would we look at things differently?  Would we focus on limitations or focus on the bigger picture?  Sometimes it is ok to break the rules and try different things.  Life is too short to build something that nobody wants.  If we focus on real pain points, not the “nice to have” things, but really look at the issues and abandon our constraints and self-imposed limits, we can create a modern SQL.  Rimma ends by thanking Dharma Shukla and the entire CosmosDB team.

<collapses>