Troubleshooting a Change in Query Performance

This is a tale of troubleshooting…

When you unexpectedly or intermittently encounter a change in query performance, it can be extremely frustrating for a DBA or developer. If you’re not using Query Store, a third-party application, or your own method to capture query data, then you probably don’t have information about what query performance looked like when things were good…you just know how it’s running now. I was working with a customer of Tim’s last week who had been chasing this exact problem for, in their words, years. They had recently upgraded to SQL Server 2016, and the problem was still occurring.

First Steps When There’s a Change in Query Performance

If there is a change in query performance, we typically look at differences in the execution statistics, as well as differences in the plan’s shape and operators. The execution statistics confirm that performance is worse: duration is higher, and CPU and/or I/O is higher. With this customer, after we verified the execution data, we used Query Store to compare the plans, and they were different. When I see different plans, I immediately check the compiled values (input parameters) for each plan. Typically, I expect that the different plans are a result of different values, because the query is sensitive to different input parameters. Interestingly enough, in the case of this customer, the different plans had the exact same parameters. My next step? Statistics.
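If you prefer T-SQL over the Query Store reports for this comparison, a query along these lines is a starting point; the query_id value here is hypothetical (you would find yours via sys.query_store_query_text), and the column choices are just the ones I tend to look at first:

```sql
-- Compare plans and runtime stats for one query in Query Store.
-- The query_id is a placeholder; locate yours via sys.query_store_query_text.
SELECT
    p.plan_id,
    p.query_plan,            -- inspect ParameterCompiledValue in the plan XML
    rs.avg_duration,
    rs.avg_cpu_time,
    rs.avg_logical_io_reads
FROM sys.query_store_plan AS p
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
WHERE p.query_id = 42        -- hypothetical query_id
ORDER BY p.plan_id;
```

The compiled parameter values are in the ParameterList element of each plan’s XML, which is how we confirmed the plans were compiled with the same inputs.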

The customer had told me that the only way they could consistently resolve the issue was to rebuild indexes for a few of the tables that were involved in the query. Rebuilding indexes causes an update to statistics with a FULLSCAN. I asked if they had ever tried just updating statistics with FULLSCAN to fix the issue, and they reported that it caused problems when they did. I tabled that issue, as I had a theory about what was happening and wanted to prove that out before digging into a secondary issue.

Collecting Query-Related Data

I set up an Extended Events session to capture the auto_stats event (which fires when statistics automatically update) and filtered on the objects in question.
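As a sketch, a session like the following captures the auto_stats event for a specific table. The session name, file path, and the database/object ID values are placeholders you would swap for your own (via DB_ID() and OBJECT_ID()):

```sql
-- Hypothetical session: capture automatic statistics updates for one table.
-- Replace the IDs with your own values from DB_ID() and OBJECT_ID().
CREATE EVENT SESSION [TrackAutoStats] ON SERVER
ADD EVENT sqlserver.auto_stats (
    ACTION (sqlserver.database_name, sqlserver.sql_text)
    WHERE (sqlserver.database_id = 5 AND object_id = 1045578763)
)
ADD TARGET package0.event_file (SET filename = N'C:\Temp\TrackAutoStats.xel')
WITH (STARTUP_STATE = OFF);

ALTER EVENT SESSION [TrackAutoStats] ON SERVER STATE = START;
```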

I also walked them through the statistics information within SQL Server. This was extremely fun. I love statistics, and I love explaining the output of DBCC SHOW_STATISTICS to people. When I’m teaching, I can typically see the light bulbs come on for people as I talk through the stats header, the density vector, and the histogram (especially the histogram!). On the call, I could hear them say “oh!” and “that makes so much more sense!” as I was talking…so fun. But I digress. So, we looked at the statistics for the tables in question, and we could see that because they had rebuilt indexes earlier in the day to resolve the issue, the sample was 100% (from the FULLSCAN). Query performance was good at this point, so we had our baseline.
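If you want to look at the same output yourself, DBCC SHOW_STATISTICS is the place to start; the table and statistic names here are illustrative:

```sql
-- Full output: stats header, density vector, and histogram.
DBCC SHOW_STATISTICS ('dbo.OrderDetail', 'IX_OrderDetail_OrderDate');

-- Just the header: compare Rows Sampled to Rows to see the sample rate.
DBCC SHOW_STATISTICS ('dbo.OrderDetail', 'IX_OrderDetail_OrderDate') WITH STAT_HEADER;
```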

Then I showed them how to view the stats header information for all statistics for the tables in question using the sys.dm_db_stats_properties DMV and requested that they query the DMV if they had a change in query performance.  I explained that they should look to see if statistics had updated, and if so, take note of the sample size.
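A query along these lines (table name illustrative) returns the header information for every statistic on a table, including when it was last updated and what sample was used:

```sql
-- Stats header info for all statistics on a table.
-- After a performance change, check last_updated and the sample percentage.
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.rows_sampled,
    CAST(100.0 * sp.rows_sampled / sp.rows AS DECIMAL(5, 2)) AS sample_pct,
    sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.OrderDetail');
```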


My hypothesis was that stats were getting automatically updated because Auto Update Statistics was enabled for the database, and enough data had changed to trigger the update (remember, they are on SQL Server 2016 with compatibility level 130, which means the threshold is lower than the old 20% + 500 rows). After statistics updated, all queries that used those statistics were recompiled, and because the sample was the default, it wasn’t an accurate representation of the data in the table, so a different plan was generated, even though the same input parameters were used.

Back to the story

I took a few moments to explain my hypothesis, and then we looked at the data in one of the tables. We looked at the distribution of data in each of the leading columns in the indexes, and in one of them, there was extreme skew. I requested that if query performance degraded and statistics had updated with the default sample, they run UPDATE STATISTICS with FULLSCAN to see if that resolved the issue.


Sure enough, within the next 24 hours, query performance changed, and statistics for one of the indexes had been updated automatically with the default sample. I correlated this finding with the Extended Events output as an extra validation step. When they updated statistics with the FULLSCAN, the “good” query plan was used again, query performance improved, and CPU dropped immediately.

A Better Solution

Now that we knew the issue, we had to decide how to address it. I recommended that they manually update statistics again with the FULLSCAN, but this time with the PERSIST_SAMPLE_PERCENT option set to ON. With this option enabled, statistics will retain the sample rate used with the manual update of statistics, even if it’s an automatic update. In this scenario, we wanted to ensure that statistics would always be updated with a FULLSCAN.
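The command looks like this (table and statistic names illustrative); note that PERSIST_SAMPLE_PERCENT requires SQL Server 2016 SP1 CU4 (or SQL Server 2017 CU1) or higher:

```sql
-- Update with FULLSCAN and persist that sample rate for future updates,
-- including automatic ones. Names are illustrative.
UPDATE STATISTICS dbo.OrderDetail IX_OrderDetail_OrderDate
WITH FULLSCAN, PERSIST_SAMPLE_PERCENT = ON;
```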

The customer was slightly concerned about the implications of the FULLSCAN update running during the day, because the tables were large and the update would generate overhead. This is absolutely a valid concern for tables that are extremely large.  To address this, we included the NORECOMPUTE option in the UPDATE STATISTICS command for the statistic in question, ensuring that the statistic would not automatically update even if the number of modifications crossed the threshold.
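For the statistic in question, the update looked something like this (names again illustrative):

```sql
-- Update with FULLSCAN and disable automatic updates for this statistic;
-- a scheduled job must now keep it current. Names are illustrative.
UPDATE STATISTICS dbo.OrderDetail IX_OrderDetail_OrderDate
WITH FULLSCAN, NORECOMPUTE;
```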

I explained to the customer that they would definitely need to run a SQL Agent job on a regular basis to update statistics for those statistics that use the NORECOMPUTE option, so that the optimizer has current information about the distribution of data. Using NORECOMPUTE is rare, I find, but it definitely has its use cases. It’s essential to have a job that manually updates those stats if you use that option for any of your statistics.

The Take Away

I haven’t heard from the customer since we determined the problem and created a solution, and I take that as a good sign. It’s ironic, because so often when I’m talking about changes in query performance, I mention that updating statistics is not the solution. It can just look like the solution because it causes a recompile and for a parameter-sensitive query, it often “fixes” the issue. But it hides the root problem in that scenario, and updating statistics (via the index rebuild) was hiding the root problem here as well. It wasn’t that the query was parameter sensitive, it’s that the optimizer needed a histogram from a FULLSCAN of the column, due to the skew in the data.

Most of the time, the default sample for a stats update is good enough. But when it isn’t, then you either need a FULLSCAN, or if that doesn’t do it, you need to look at filtered statistics across ranges of the data to give the optimizer better information.

Updating Statistics with Ola Hallengren’s Script

I am a HUGE fan of updating statistics as part of regular maintenance.  In fact, if you don’t know whether you have a step or job that updates out-of-date statistics on a regular basis, go check now!  This post will still be here when you get back 😊

At any rate, for a long time the default options for updating statistics were pretty much a sledgehammer.  Within the maintenance plan options, the Update Statistics Task only provides the option to update Index statistics, Column statistics, or both.  You can also specify whether it is a full scan or a sample for the update, but that’s about it:

Update Statistics Task (Maintenance Plan)

I don’t like this option because it means that statistics that have had little or no change will be updated.  I could have a 10 million row table where only 1000 rows change, and yet the statistics for that table will update.  This is a waste of resources.  For a small database, or system that’s not 24×7, that isn’t such a big deal.  But in a database with multiple 10 million row tables, it is a big deal.

The sp_updatestats command isn’t a favorite of mine either.  I’ve written about that here, so I won’t re-hash it.

If you have used Ola Hallengren’s scripts for maintenance, you hopefully know that they will also update statistics via the @UpdateStatistics parameter.  The default value for this is NULL, which means do not update statistics.  To be clear, if you drop in Ola’s scripts, have them create the jobs for you, and then start running the “IndexOptimize – USER_DATABASES” job, by default you’re not updating statistics.  By default, the IndexOptimize – USER_DATABASES job runs:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@LogToTable = 'Y'

If you want to have the job also update statistics, you need:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@UpdateStatistics = 'ALL',
@LogToTable = 'Y'

With this variation, we are updating index and column statistics, which is great.  But…we are updating them regardless of whether it’s needed.  Statistic with no rows modified? Update it.  Statistic with 10 rows modified? Update it.

There has always been an option to update only the statistics that have changed: the @OnlyModifiedStatistics option, which gets us behavior just like sp_updatestats.

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y',
@LogToTable = 'Y'

With this option, if no rows have changed, the statistic will not be updated.  If one or more rows have changed, the statistic will be updated.

Since the release of SP1 for SQL Server 2012, this has been my only challenge with Ola’s scripts.  SQL Server 2008R2 SP2 and SQL Server 2012 SP1 introduced the sys.dm_db_stats_properties DMV, which tracks modifications for each statistic.  I have written custom scripts that use this information to determine if stats should be updated, which I’ve talked about here.  Jonathan has also modified Ola’s script for a few of our customers to look at sys.dm_db_stats_properties to determine if enough data had changed to update stats, and a long time ago we emailed Ola to ask if he could include an option to set a threshold. Good news: that option now exists!

Using Ola’s script to update statistics based on a threshold of change

With the IndexOptimize stored procedure Ola now includes the option of @StatisticsModificationLevel.  You can use this to set a threshold for modifications, so that only statistics with a specific volume of change are updated.  For example, if I want statistics updated if 5% of the data has changed, use:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@UpdateStatistics = 'ALL',
@StatisticsModificationLevel= '5',
@LogToTable = 'Y'

Take note: the @OnlyModifiedStatistics option is not included here…you cannot use both options; it has to be one or the other.

This is great!  I can further customize this for different tables.  Consider a database that has a very volatile table, maybe dbo.OrderStatus, where auto-update may or may not kick in during the day, so I want to make sure stats are updated nightly:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@Indexes = 'ALL_INDEXES, -SalesDB.dbo.OrderStatus',
@UpdateStatistics = 'ALL',
@StatisticsModificationLevel= '10',
@LogToTable = 'Y'

This will address fragmentation and update statistics for all tables in the SalesDB database except dbo.OrderStatus, and it will update statistics if 10% or more of the rows have changed.

I would then have a second job to address fragmentation and stats for OrderStatus:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@Indexes = 'SalesDB.dbo.OrderStatus',
@UpdateStatistics = 'ALL',
@StatisticsModificationLevel= '1',
@LogToTable = 'Y'

For the dbo.OrderStatus table, statistics would be updated when only 1% of the data had changed.

I love the flexibility this provides!

You might be wondering why I chose 1%…take a close look at this important note which is included in Ola’s documentation:

Statistics will also be updated when the number of modified rows has reached a decreasing, dynamic threshold, SQRT(number of rows * 1000)

This is critical to understand because if the threshold I have set for @StatisticsModificationLevel ends up having a number of rows HIGHER than the formula above, statistics will update sooner than I expect.

For example, if I have 1 million rows in a table and I have @StatisticsModificationLevel = 10, then 10% of the rows, or 100,000, have to change in order to update statistics.  HOWEVER, if you plug 1 million into SQRT(1,000,000 * 1000), you get 31,623, which means Ola’s script will update statistics after 31,623 rows have changed…well before 100,000.
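To see where the crossover happens, you can compute both thresholds for a few table sizes; this is just arithmetic, not anything taken from Ola’s script:

```sql
-- Percentage threshold (at 10%) vs. the SQRT threshold for various table sizes.
-- For small tables the percentage fires first; as tables grow, SQRT wins.
SELECT
    t.row_count,
    CAST(t.row_count * 0.10 AS BIGINT)          AS pct_10_threshold,
    CAST(SQRT(t.row_count * 1000.0) AS BIGINT)  AS sqrt_threshold
FROM (VALUES (10000), (100000), (1000000), (10000000)) AS t(row_count);
```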

This may be important for some of you to understand in terms of these thresholds, so I dropped the information into a table to make it easier to comprehend (at least, it’s easier for me!).

Thresholds for Statistics Updates (percentage and SQRT algorithm)

Using my original example, if dbo.OrderStatus has about one million rows, then with 1% as the threshold, only 10,000 rows need to change before stats are updated.  If the SQRT algorithm were used, over 30,000 rows would need to change before stats were updated, and depending on the data skew, that might be too high.

Understand that as tables get larger, statistics will likely be updated before the set percentage value is reached because the SQRT algorithm has a lower threshold.  (Yes, I’m driving this point home.)  Consider a table with 10 million rows.  If I set the threshold to 5%, I would expect statistics to update after 500,000 modifications, but in fact they will update after 100,000.

If you’re wondering where the SQRT algorithm comes from, please review Microsoft’s Statistics documentation.  This threshold was originally introduced with trace flag 2371 to lower the threshold for automatic updates.  It is applied by default starting in SQL Server 2016 when using compatibility level 130.  My assumption is that Ola determined this was a good threshold to use as a fail-safe/catch-all for his script, and I think it was a smart move on his part.  In general, I’d rather have statistics update too often, rather than not often enough.  However, using the new @StatisticsModificationLevel option gives us better control than we’ve had previously, unless we write a custom script (which is still an option…do what works best for you!).

Do you need to update statistics after an upgrade?

This post originally went live on May 11, 2018, but modifications were made on May 14, 2018 after some additional internal discussions with Microsoft.

There are a variety of methods we use for helping customers upgrade to a new SQL Server version, and one question we often get asked is whether or not statistics need to be updated as part of the upgrade process.


Yes.  Update statistics after an upgrade. Further, if you’re upgrading to 2012 or higher from an earlier version, you should rebuild your indexes (which will update index statistics, so then you just need to update column statistics).


Some of you may remember that the stats blob changed between SQL Server 2000 and SQL Server 2005, and Microsoft specifically recommended updating statistics after upgrading from SQL Server 2000.  Official Microsoft documentation about the stats blob change in SQL Server 2005 is difficult to find, but this article includes the following paragraph:

After you upgrade from SQL Server 2000, update statistics on all databases. Use the sp_updatestats stored procedure to update statistics in user-defined tables in SQL Server databases. This step is not necessary for upgrading from SQL Server 2005.

Current Microsoft documentation related to upgrading does not state anything specific about updating statistics, but people continue to ask and if you peruse forums, blog posts, and other social media options, you’ll see recommendations to update statistics. Further, the documentation that Microsoft provides about when to update statistics does not mention anything about upgrades.

Side bar: I don’t recommend using sp_updatestats, and here’s why: Understanding What sp_updatestats Really Updates.


The statistics blob has not changed since SQL Server 2000 to my knowledge, but I thought I would ask someone from Microsoft for an official recommendation to share publicly.  Here you go:

Microsoft suggests that customers test the need for a full update of statistics after a major version change and/or a database compatibility level change.

Further items to note:

  1. If Microsoft updates the format of statistics (e.g. the stats blob), customers will be expected to update statistics after an upgrade.
    1. Microsoft does not always upgrade the statistics format as part of a major version upgrade.
  2. There are occasions where Microsoft does not change the format of statistics, but they do change the algorithm for creating statistics as part of a major version upgrade or database compatibility level change.

In addition, there was a change in the nonclustered leaf level internals in SQL Server 2012, so if you are upgrading to 2012 or higher from an earlier version (e.g. 2008, 2008R2), rebuild your nonclustered indexes.  And remember, rebuilding indexes updates the statistics for those indexes with a fullscan, so you do not need to update them again.
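For a single index, that rebuild is one statement (table and index names illustrative):

```sql
-- Rebuild one nonclustered index; this also updates its statistics
-- with the equivalent of a FULLSCAN. Names are illustrative.
ALTER INDEX IX_OrderDetail_OrderDate ON dbo.OrderDetail REBUILD;
```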


As part of your upgrade methodology, it is recommended (by me, based on experience with a lot of customer upgrades) to build in time to update statistics.  I’ve gotten some pushback from customers who don’t want to update statistics after upgrade because it takes too long.  Some kind reminders:

  • Updating statistics is an online operation; therefore, the database and related applications remain accessible and usable. A statistics update does take a schema modification lock, so you’re not allowed to make any changes to a table while its stats are updating.  Therefore, if you decide to change your schema after upgrading your SQL Server version (not something I would typically recommend), do that before you update stats.
  • You need to update statistics regularly to provide the optimizer with current information about your data, so at some point it needs to be done. Immediately after an upgrade is a really good time, considering the aforementioned items.

If you’re not comfortable upgrading to a newer version of SQL Server, we can help!  I’m in the process of helping a customer migrate from SQL Server 2012 to SQL Server 2017, and I’m so excited to get them up to the latest version so they can start using some new features…like Query Store 😉