SQLskills procs to analyze data skew and create filtered statistics

At PASS Summit 2013 I delivered a presentation titled: Skewed Data, Poor Cardinality Estimates, and Plans Gone Bad (you can watch this session on PASStv here). Note: please fast forward 4 minutes, as they didn't tell me they were recording / publishing my "chatting" before the session – which is weird IMO, but if you want to hear me talk about diving and sharks and octopus, there it is. As for the actual content of the session, it's all about understanding the limits of SQL Server's histogram (which has a maximum of 201 steps) and how estimates for skewed data (which fall back on step averages) can be highly incorrect – producing row estimates that are far off and query plans that perform poorly. As part of that session, I demoed some code to programmatically analyze histograms for skew, as well as code that will create filtered statistics for you, based on a few parameters. In this post, I have an update to those scripts. But there's also a bit of good news and bad news that goes along with filtered statistics. Let me start with the good news…
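If you want to eyeball a histogram (and its skew) yourself, DBCC SHOW_STATISTICS exposes it. Here's a minimal sketch; the table and index names (dbo.Sales, CL_Sales) are hypothetical:

-- Show just the histogram steps (up to 201) for a statistic/index:
DBCC SHOW_STATISTICS ('dbo.Sales', 'CL_Sales') WITH HISTOGRAM;
-- Within a step, the estimate for any non-boundary value is AVG_RANGE_ROWS
-- (RANGE_ROWS / DISTINCT_RANGE_ROWS). When the real per-value counts inside
-- a step vary wildly from that average, estimates for those values will be off.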

The Good News

  • Filtered statistics work really well in SQL Server 2008, SQL Server 2008 R2, and SQL Server 2012. (hmm… I bet you can already figure out the bad news)
  • The scripts have been updated to fix a couple of minor issues (one of which was a version check that prevented them from running on SQL Server 2014).
  • sp_SQLskills_DropAllColumnStats has been modified to ONLY drop filtered statistics that start with SQLskills_FS (which is the naming convention I use when I create filtered stats).
  • sp_SQLskills_CreateFilteredStats no longer requires a "max" value for tables; the final filtered statistic created will be unbounded. Also, depending on how many steps you had, there were a few odd situations where you could end up with a statistic defined as >= a value and < that same value. This would NOT hurt any estimates, but it also wasn't useful. Totally benign, but now fixed. (There's an illustration of what the generated statistics look like right after this list.)
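To illustrate the naming convention and the now-unbounded final statistic, here's roughly what a generated set looks like. The table, column, and boundary values are purely hypothetical; only the SQLskills_FS prefix is the real convention:

CREATE STATISTICS [SQLskills_FS1] ON dbo.Sales (CustomerID)
WHERE CustomerID >= 1000 AND CustomerID < 2000;
-- ... one statistic per range ...
CREATE STATISTICS [SQLskills_FS9] ON dbo.Sales (CustomerID)
WHERE CustomerID >= 9000;  -- the final statistic is unbounded (no "max" needed)

Because every name starts with SQLskills_FS, sp_SQLskills_DropAllColumnStats can safely drop only the statistics these procedures created.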

The Bad News

Filtered statistics are not yet used by the new SQL Server 2014 cardinality estimator (tested up to SQL Server 2014 CU3). The same is true for multi-column column statistics (where you need more than just the first column) and for hypothetical indexes created with auto pilot. But, there is good news here as well – you CAN still use SQL Server 2014 and take advantage of filtered statistics by accessing the legacy CE at the query level. In fact, if you run using the legacy CE in 2014, you can access all of these things (and, again, you can do this on a query-by-query basis if you'd like). So, let me give you a few options and a recommendation.
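For example, forcing the legacy CE for a single query is just a hint (the table here is hypothetical), and you can verify which model was used in the plan XML:

SELECT OrderID, OrderDate
FROM dbo.Sales
WHERE CustomerID = 1000
OPTION (QUERYTRACEON 9481);  -- legacy (7.0) CE model for just this query
-- In the actual plan XML, CardinalityEstimationModelVersion="70" means the
-- legacy CE was used; "120" means the new 2014 CE was used.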

Cardinality Estimator Options for SQL Server 2014

Option 1 – low impact (my recommendation)

  • Upgrade existing databases (through backup / restore)
  • Leave the existing compatibility level intact (the SQL Server 7.0 CE model is the default for compatibility levels of SQL Server 7.0 through SQL Server 2012). Remember, restoring / attaching does not "upgrade" the compatibility level; it remains whatever it was when the database was backed up / detached.
  • For the following types of queries, test performance under the new CE using trace flag 2312. If you see a benefit, use the new CE model for that query with OPTION (QUERYTRACEON 2312) – see the sketch after this list:
    • Queries that aren't performing well
    • Queries whose estimates are far off from actual
    • Queries where you want to use filtered statistics or auto pilot (these need the legacy CE, which remains the default here)
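A minimal way to A/B the two models on a problem query (database, table, and column names hypothetical):

-- Confirm the database is still at a pre-2014 compatibility level (<= 110):
SELECT name, compatibility_level FROM sys.databases WHERE name = N'YourDB';

-- Baseline run – uses the legacy CE because of the compatibility level:
SELECT OrderID, OrderDate
FROM dbo.Sales
WHERE CustomerID = 1000;

-- Same query under the new CE model, for comparison:
SELECT OrderID, OrderDate
FROM dbo.Sales
WHERE CustomerID = 1000
OPTION (QUERYTRACEON 2312);  -- new (2014) CE model for just this query

Compare estimated vs. actual rows (and overall performance) between the two plans.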

Option 2 – low impact

  • Change to compatibility level 120
  • Enable trace flag 9481 server-wide
  • Again, use query-level hints to access the new CE where it helps – see the sketch below.
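The server-wide flag and the per-query escape hatch look like this (note that DBCC TRACEON does not persist across restarts; the table name is hypothetical):

DBCC TRACEON (9481, -1);  -- legacy CE server-wide; add -T9481 as a startup
                          -- parameter to make it survive restarts
-- Individual queries can still opt in to the new CE:
SELECT OrderID, OrderDate
FROM dbo.Sales
WHERE CustomerID = 1000
OPTION (QUERYTRACEON 2312);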

Option 3 – potentially high impact (potentially good; potentially bad)

  • Change to compatibility level 120 (the new CE model becomes the default)
  • When troubleshooting, test trace flag 9481 against queries that have regressed
  • For queries that did regress under the new CE, fall back with OPTION (QUERYTRACEON 9481) – see the sketch below
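The compatibility-level change plus the per-query fallback (database and table names hypothetical):

ALTER DATABASE [YourDB] SET COMPATIBILITY_LEVEL = 120;  -- new CE is now the default

-- For a query that regressed under the new CE:
SELECT OrderID, OrderDate
FROM dbo.Sales
WHERE CustomerID = 1000
OPTION (QUERYTRACEON 9481);  -- legacy CE for just this query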

Playing with the skew-analysis and filtered-statistics scripts

If you want to determine whether your very large tables show signs of skew (and potentially poor estimates), check out my PASStv presentation from last year. Then, replace the SQLskills project with the scripts in this zip (SQLskillsProcs). If you keep your database compatibility level at 110 (SQL Server 2012), you won't need to make any changes to the other examples from that session. However, if you change to the SQL Server 2014 compatibility level (120), you'll need to add this to the statements where you want to leverage filtered statistics:

SELECT blah blah
...
OPTION (QUERYTRACEON 9481, QUERYTRACEON 3604, QUERYTRACEON 9204)

9481 = uses the legacy CE model (SQL Server 7.0 – SQL Server 2012)

3604 = directs trace output to the results window

9204 = displays the statistics used by the query (note: I can't seem to get this to work when using the new CE at all)
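Putting it all together, a statement testing filtered statistics under compatibility level 120 might look like this (table and column names hypothetical):

SELECT OrderID, OrderDate
FROM dbo.Sales
WHERE CustomerID BETWEEN 1000 AND 1999
OPTION (QUERYTRACEON 9481, QUERYTRACEON 3604, QUERYTRACEON 9204);
-- With 3604 + 9204, the statistics the optimizer actually used (including
-- any SQLskills_FS* filtered statistics) are listed on the Messages tab.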

Have fun and let me know what you find. Cheers, kt

15 thoughts on "SQLskills procs to analyze data skew and create filtered statistics"

  1. “a maximum of 201 steps”
    Hmmm, I wonder if more steps would help build more accurate QPs?
    Ya think?

    Also, I *think* SQL only supports compatibility levels going back 3 versions, i.e. a 2000 DB in 2014 would get bumped forward to 2005, right?

    1. More steps could be beneficial, but the secondary problem is that the statistic would have to be loaded / read for every use. If you really only want more detail for specific ranges (or specific values), then filtered stats are a better solution (albeit frustrating to have to create / manage). And, the irony is that SQL Server 7.0 actually had 300 steps, and they decreased it to 200 in SQL Server 2000 because 300 consumed a few too many resources back when machines just didn't have all that much memory. I do wish they'd increase it a bit, but I also think FS are better overall.

      And, as for compatibility modes, some versions actually support compat modes further back than what is possible for SQL 2014. But, yes, in SQL 2014 the only valid compat modes are 100, 110, and 120. Great point. Thanks!!

      Cheers,
      kt

    2. Actually, we have the limit of 201 buckets because a statistic is contained inside a single data page and is therefore limited to 8 KB.
      Having more buckets would mean having a larger object.
      A statistic that used more data pages would probably be more precise, but managing statistics would then mean reading and writing more data. Ideally, we would enlarge statistics only where we need to, for example when faced with data that is not evenly distributed.
      I've written about this on my blog at sqlserverperformace.blogspot.com
      Luca

      1. Actually, that's not true. It was 6.5 where statistics were limited to a single 2K page. As of SQL Server 7.0, pages moved to 8K, but statistics became limited by ROWS and not pages. Statistics can easily be more than one 8K page (and often are), but they're limited to 200 steps to keep the statistics objects *relatively* small (same reason, different mechanism). Additionally, SQL Server 7.0 allowed up to 300 steps. What they found (especially as memory was much more expensive, and therefore more scarce) is that 300 steps required more of it. So, in SQL Server 2000, steps were actually reduced to 200. To be honest, NOW, that doesn't make as much sense; there should definitely be some flexibility here. But that's where filtered statistics come in. They're not AS effective, but they can solve a lot of the problems that a single statistic of only 200 steps has!

        Hope that helps,
        k

  2. Wow – what a great video and script!

    We are currently experiencing bad estimates leading to bad plans leading to slow queries…and we’ve tracked it down to data skew and stat-imprecision. One thing that’s a bit different here is that our database is multi-tenant (partitioned by tenant) with tenants varying in size from 50 MB to 15 GB and we need to be able to accommodate up to 3k tenants.

    I’ve been toying with the idea of (and only for our larger tables) creating a separate filtered statistic for each partition (e.g. CREATE STATISTICS blah blah WHERE TenantId = 5). Is this a ridiculous idea?

    Furthermore, for some of our larger tenants I’m now considering taking it a step further and using filtered range stats to get even more resolution/granularity in the stats e.g.

    CREATE STATISTICS blah blah WHERE TenantId = 5 AND (StartDate >= 20140117 AND StartDate <= 20140307);

    Am I crazy? This feels weird.

    1. The only potential problem with the one that has a start date is that it can ONLY be used when your queries include BOTH a TenantID and a StartDate that's within that range. If ALL of your queries do – then it's fine.

      The general problem with this (which also applies to your idea of "partition-level" stats) is that SQL Server requires that the predicate of the query is a subset of a single filtered interval (= interval subsumption). So, as long as your queries are [at least] TenantID-specific, then the "partition-level" stats will help. But, you MIGHT be able to take them even further by adding date if you're mostly selecting specific subsets for a particular tenant. To make that concrete:
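      -- Given a statistic like the one you described:
      --   CREATE STATISTICS ... WHERE TenantId = 5
      --     AND StartDate >= '20140117' AND StartDate <= '20140307';
      -- (table/column names below are illustrative)

      -- CAN use it: the predicate is a subset of the filtered interval.
      SELECT OrderID FROM dbo.Orders
      WHERE TenantId = 5
        AND StartDate >= '20140201' AND StartDate < '20140301';

      -- CANNOT use it: no StartDate predicate, so the query isn't subsumed by
      -- the date-ranged interval (only a TenantId-only filtered stat could help).
      SELECT OrderID FROM dbo.Orders
      WHERE TenantId = 5;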

      The last potential problem is fast switching. If you create any indexes that aren't partition-aligned, then you CANNOT do fast switching. So, that might limit your ability to even create / use them. For these scenarios I usually recommend partitioned views over partitioned tables (also, there are no interval-subsumption issues between tables, so optimization is more flexible with separate tables than with partitions).

      Hope that helps!
      Kimberly

      PS – Thanks for the compliments on the scripts / video. I actually had a great time with both! ;-)

  3. Re: trace flag 9204 in SQL Server 2014 – I also noticed that trace flag 9204 only seems to work with the legacy CE.
    In the comments of the following blog post (which covers trace flags 9204 & 9292), Paul White explains that with the new 2014 CE, trace flag 2363 can be used to get information from the optimizer, such as which statistics were consulted.
    http://sqlblog.com/blogs/paul_white/archive/2011/09/21/how-to-find-the-statistics-used-to-compile-an-execution-plan.aspx
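    For reference, the new-CE equivalent would look something like this (table name hypothetical; trace flag 2363 per Paul White's comments):

    SELECT OrderID
    FROM dbo.Sales
    WHERE CustomerID = 1000
    OPTION (QUERYTRACEON 2363, QUERYTRACEON 3604);
    -- Under the new CE, 2363 (with 3604) prints the optimizer's statistics
    -- usage details, including which statistics were loaded.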

