I’ve been seeing this over the past few years. Imagine this scenario:

You have a stored procedure that runs well most of the time but sometimes it’s WAYYYYY off. It’s almost as though its performance went from great to horrible in a split second (like falling off a cliff). You don’t know why, but someone says – it’s got to be the statistics. In fact, if you have the luxury of time (which most folks don’t have), you execute it yourself and you check the plan – WOW, the estimated number of rows is WAY off from the actual rows. OK, it’s confirmed (you think); it’s statistics.
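By the way, if you do have that luxury of time, here’s a minimal sketch of one way to capture the actual plan from a query window and compare the estimates to the actuals (the procedure name and parameter are just placeholders):

    -- Return the actual execution plan (as XML) along with the results.
    SET STATISTICS XML ON;
    EXEC dbo.YourProcedure @SomeParameter = 42;
    SET STATISTICS XML OFF;
    -- In the plan, compare estimated vs. actual row counts on the expensive operators
    -- (EstimateRows vs. ActualRows in the XML, or the Estimated/Actual Number of Rows
    -- properties in the graphical plan).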

But, maybe it’s not…

See, a stored procedure, a parameterized statement executed with sp_executesql, and a prepared statement submitted by a client ALL reuse cached plans. These plans were created using something called parameter sniffing. Parameter sniffing is not a problem itself – but, it can become a problem for later executions of that same statement/procedure. If the plan for one of these statements was created for parameters that only return 1 row then the plan might be simple and straightforward – use a nonclustered index and then do a bookmark lookup (that’s about as simple as it can get). But, if that same sp_executesql statement/procedure/prepared statement runs again later with a parameter that returns thousands of rows, then reusing the plan that was created by sniffing that earlier parameter might not be good at all. And, that bad combination might be a rare execution.

OR, it could be even more strange. These plans are not stored on disk; they are not permanent objects. They are created any time there is not already a plan in the cache, and there are a variety of reasons why a plan can fall out of cache (better described as “has been invalidated”). And, if it just so happens that an atypical set of parameters is the first one used after the plan has been invalidated, then a very poor plan could end up in cache and cause subsequent executions with typical parameters to be way off. Again, if you look at the actual plan you’ll probably see that the estimate is WAY off from the actual. But, it’s NOT likely to be a statistics problem.
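To make that concrete, here’s a minimal sketch of the kind of procedure that’s susceptible to this. The table, index and procedure are hypothetical (not from any real system), but the pattern is the classic one: a nonclustered index on a column with a very skewed distribution.

    -- Hypothetical objects, for illustration only.
    CREATE TABLE dbo.Sales
    (
        SalesID    int IDENTITY(1,1) PRIMARY KEY,
        CustomerID int      NOT NULL,
        SaleDate   datetime NOT NULL,
        Amount     money    NOT NULL
    );
    CREATE NONCLUSTERED INDEX IX_Sales_CustomerID ON dbo.Sales (CustomerID);
    GO

    CREATE PROCEDURE dbo.SalesByCustomer
        @CustomerID int
    AS
        SELECT SalesID, SaleDate, Amount
        FROM dbo.Sales
        WHERE CustomerID = @CustomerID;
    GO

    -- If the first execution after the plan leaves cache is for a customer with only a
    -- handful of rows, the sniffed plan (index seek + lookup) gets cached...
    EXEC dbo.SalesByCustomer @CustomerID = 42;

    -- ...and is then reused for a customer with hundreds of thousands of rows, where a
    -- scan (or a different join strategy) would have been far better.
    EXEC dbo.SalesByCustomer @CustomerID = 17;

The same applies to sp_executesql and prepared statements; the sniffed values simply come from whichever execution compiles the plan.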

But, let’s say that you think it is a statistics problem. What do you do?

You run UPDATE STATISTICS tablename, or UPDATE STATISTICS tablename indexname (for an index that you specifically suspect is out of date).
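For example, using the hypothetical table and index from the sketch above:

    UPDATE STATISTICS dbo.Sales;                         -- all statistics on the table
    UPDATE STATISTICS dbo.Sales IX_Sales_CustomerID;     -- just the statistics for one index
    UPDATE STATISTICS dbo.Sales WITH FULLSCAN;           -- optionally, scan every row instead of sampling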

And, then you execute the procedure again and yep, it runs correctly this time. So, you think, yes, it must have been the statistics!

However, what you may have seen is a side-effect of having updated statistics. When you update statistics, SQL Server usually* does plan invalidation. Therefore, the plan that was in cache was invalidated. When you executed again, you got a new plan. This new plan used parameter sniffing to see the parameters you used and then it came up with a more appropriate plan. So, it probably wasn’t the statistics – it was the plan all along.

So, what can you do?

First, do not use UPDATE STATISTICS as your first response. If you have a procedure that’s causing you grief, you should force it to recompile and see if you can get a better plan. How? Use sp_recompile procedurename. This will cause any plan for that procedure in cache to be invalidated, and it’s a quick and simple operation. It will also tell you whether you have a plan (parameter sniffing) problem rather than a statistics problem. If you get a good plan after the recompile, then what you know is that your stored procedure might need some “tweaking” to its code. I’ve outlined a few things that can help you here: Stored procedures, recompilation and .NetRocks.
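Here’s what that looks like as a quick sketch, using the hypothetical procedure from the earlier example:

    -- Marks the procedure's cached plan(s) as invalid; the NEXT execution compiles a
    -- fresh plan, sniffing the parameter values of that execution. Statistics are untouched.
    EXEC sp_recompile N'dbo.SalesByCustomer';

    -- Now execute with TYPICAL parameter values and look at the plan you get.
    EXEC dbo.SalesByCustomer @CustomerID = 42;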

If that doesn’t work, then you MIGHT need to update statistics. What you should really do first, though, is make sure that the compiled parameter values ARE the same as the execution (runtime) values. If you use “show actual plan” you can see this by highlighting the output/SELECT operator and checking the Properties window (F4).

[Screenshot: the Properties window showing the compiled and runtime parameter values]

This will confirm that the execution did (or did not) use those values to compile the plan. If the compiled values were representative of the runtime values (and the estimates are still way off), then you might actually have a statistics problem. But, statistics are often blamed when they’re not actually the problem. It’s the plan.
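If you’d rather check a cached plan without re-running the procedure, here’s a sketch (again with the hypothetical procedure name) that pulls the plan out of cache; in the plan XML, the ParameterList element shows the compiled value for each parameter (an actual plan also shows the runtime value):

    SELECT  OBJECT_NAME(ps.object_id) AS procedure_name,
            qp.query_plan        -- open the XML and look for ParameterCompiledValue
    FROM    sys.dm_exec_procedure_stats AS ps
    CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
    WHERE   ps.database_id = DB_ID()
      AND   ps.object_id   = OBJECT_ID(N'dbo.SalesByCustomer');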

OK, there’s a bit more to this…

*Do plans ALWAYS get invalidated when you update statistics? No…

Erin Stellato (blog | twitter) first blogged about this here: Statistics and Recompilation.

And, also here: Statistics and Recompilation, Part II.

Here’s a quick summary though because it looks like things have changed again in SQL Server 2012…

  • In SQL Server 2005, 2008 and 2008R2 – updating statistics only caused plan invalidation when the database option auto update statistics is on (you can check that option with the query shown after this list).
  • In SQL Server 2012 – updating statistics does not cause plan invalidation regardless of the database option.
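To check how a given database is configured (the ALTER DATABASE line is commented out and the database name is a placeholder):

    -- Is AUTO_UPDATE_STATISTICS on for the current database?
    SELECT name, is_auto_update_stats_on
    FROM   sys.databases
    WHERE  database_id = DB_ID();

    -- To turn it on:
    -- ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;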

So, what’s the problem? Ironically, I kind of like this. I think that statistics have been blamed all too often for statement/plan problems when it’s not the statistics, it’s the plan. So, I like that there will be fewer false positives. But, at the same time, if I update statistics off hours, I DEFINITELY want SQL Server to invalidate plans and re-sniff my parameters (especially if the data HAS changed) and possibly get new plans from my updated stats.

In the end, I did chat with some folks on the SQL team and yes, it looks like a bug. I filed a connect item on it here: https://connect.microsoft.com/SQLServer/feedback/details/769338/update-statistics-does-not-cause-plan-invalidation#.

UPDATE – 12:55 (yes, only 2 hours after I wrote this).

It’s NOT a bug, it’s BY DESIGN. And, it actually makes sense.

If the plan should NOT be invalidated (statistics were updated but the data has NOT changed) then it won’t be. But…
If the plan SHOULD be invalidated (statistics have been updated AND the data has changed) then it will be.

The key point is “data changed.” An UPDATE STATISTICS ALONE will not cause plan invalidation (which is STILL different behavior from 2005/2008/2008R2), but it’s the best of both worlds IMO. Only if at least ONE row has been modified will the UPDATE STATISTICS cause plan invalidation.
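If you want to see this for yourself, here’s a rough test sketch. It reuses the hypothetical objects from the earlier example and assumes the table actually has data; comparing cached_time in sys.dm_exec_procedure_stats before and after is just one way to spot that a new plan was compiled:

    -- 1. Execute so a plan is cached, and note when it was cached.
    EXEC dbo.SalesByCustomer @CustomerID = 42;
    SELECT cached_time
    FROM   sys.dm_exec_procedure_stats
    WHERE  database_id = DB_ID()
      AND  object_id   = OBJECT_ID(N'dbo.SalesByCustomer');

    -- 2a. Update statistics with NO data changes: on SQL Server 2012 the next
    --     execution should reuse the existing plan (cached_time unchanged).
    UPDATE STATISTICS dbo.Sales;

    -- 2b. Modify at least one row, then update statistics: now the next execution
    --     should compile a new plan (cached_time changes).
    UPDATE dbo.Sales SET Amount = Amount WHERE SalesID = 1;
    UPDATE STATISTICS dbo.Sales;

    -- 3. Re-execute and compare cached_time to what you saw in step 1.
    EXEC dbo.SalesByCustomer @CustomerID = 42;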

UPDATE 2: The key point is that there might still be some false positives, and I’d still rather that people try sp_recompile first, but it’s good that UPDATE STATISTICS will cause plan invalidation. It’s still a tad different than prior versions… interesting for sure.

A simple workaround is to run sp_recompile tablename at the end of your maintenance script, but be aware that running sp_recompile against a TABLE requires a schema modification lock (Sch-M). As a result, this can cause blocking. If you don’t have any long-running reports (or long-running transactions) at that time though, it should be quick and simple.
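Something along these lines at the end of the maintenance job (again with the hypothetical table name):

    -- Update statistics off hours, then explicitly invalidate the plans for everything
    -- that references the table. Remember: sp_recompile against a table needs a schema
    -- modification (Sch-M) lock, so it can block (and be blocked by) long-running queries.
    UPDATE STATISTICS dbo.Sales WITH FULLSCAN;
    EXEC sp_recompile N'dbo.Sales';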

And, stay tuned on this one. In a later CU you should be able to remove the sp_recompile AND you won’t need to worry about the database option either (yeah!).

Thanks for reading,

kt