New Course: “SQL Server: Common Query Tuning Problems and Solutions – Part 2”

Today, Pluralsight published my latest course, “SQL Server: Common Query Tuning Problems and Solutions – Part 2”.  Part 2 is 2 hours and 17 minutes long and covers the following:

Troubleshooting Inefficiency

  • Demo: One-Row-At-A-Time Processing
  • Demo: The Forgotten Trigger
  • Demo: Scalar Function Inhibited Parallelism
  • Demo: Scalar Function Row by Row Execution
  • Demo: Too Few Rows Per Batch
  • Demo: Not Using TOP When You Can
  • Demo: Unnecessary Many-To-Many
  • Demo: Excessive Fragmentation and Scans

Tuning OLTP vs. DSS Patterns

  • Defining OLTP and DSS
  • OLTP Characteristics
  • Demo: Stop-And-Go Operators
  • Demo: Finding Timeout-Prone Queries
  • Demo: Over-Indexing
  • Demo: OLTP and Parallelism
  • Demo: OLTP and Deep Trees
  • DSS Characteristics
  • Demo: Expanding the AdventureWorksDW2012 Database
  • Demo: Unwanted Serial Plans
  • Demo: Gatekeeper Row Issue – Part 1
  • Demo: Gatekeeper Row Issue – Part 2
  • Demo: Gatekeeper Row Issue – Part 3
  • Demo: Aggregation Queries and Columnstore Index Opportunities
  • Demo: Distinct Aggregation Issue

Advanced Query Tuning Patterns

  • Demo: New-Feature Abuse
  • Demo: Recursive CTE Estimate Skews
  • Demo: Seeks, Scans, Predicates, and Filters – Part 1
  • Demo: Seeks, Scans, Predicates, and Filters – Part 2
  • Demo: Seeks, Scans, Predicates, and Filters – Part 3
  • Demo: Unmatched Indexes
  • Demo: Disused Filtered Statistics
  • Demo: Interpreting RetrievedFromCache
  • Demo: Amplification and Distinct Value Counts
  • Demo: Join Hints and Forced Order
  • Demo: CTEs vs. Derived Tables
  • Demo: One Procedure With Multiple Optional Input Parameters
  • Demo: Procedure Sub-Branching
  • Demo: Verifying Instance Workload Complexity

And if you are new to Pluralsight or considering a subscription, be sure to review the ever-growing course catalog from the entire SQLskills team!

New Course: “SQL Server: Common Query Tuning Problems and Solutions – Part 1”

Pluralsight published my latest course, “SQL Server: Common Query Tuning Problems and Solutions – Part 1”.  Part 1 is 2 hours and 31 minutes long and covers the following:

Module: Validating Assumptions

  • Downstream Impact of Bad Assumptions
  • Cardinality Estimation and Predicates
  • Selectivity
  • Query Cardinality Questions
  • Cardinality Estimate Inputs
  • Statistics
  • Demo: DBCC SHOW_STATISTICS STAT_HEADER
  • Demo: DBCC SHOW_STATISTICS DENSITY_VECTOR
  • Demo: DBCC SHOW_STATISTICS HISTOGRAM
  • Demo: Histogram Direct Step Hit
  • Demo: Scaling the Estimate
  • Demo: Histogram Intra-Step Hit
  • Demo: Density Vector With an Unknown Runtime Value
  • Demo: Parameter Sniffed Value Estimation
  • Demo: Distinct Value Estimation
  • Impact to Query Execution and Performance
  • Identifying Bad Assumptions
  • Demo: Viewing Estimated Vs. Actual Row Counts
  • Demo: sys.dm_exec_query_profiles
  • Demo: Root-level Skews via sys.dm_exec_query_stats
  • What Variance is Problematic?

Module: Troubleshooting Bad Assumptions

  • Model Assumptions
  • Demo: Jagged Distributions
  • Demo: Correlated Predicates
  • Demo: Correlated Predicates in SQL Server 2014
  • Demo: Correlated Join Predicates, Independent Filter Predicates
  • Demo: Searching for Off-Histogram Values
  • Demo: Searching for Off-Histogram Values in SQL Server 2014
  • Troubleshooting Questions
  • Demo: Missing Statistics
  • Demo: Selectivity Guesses
  • Demo: Stale Statistics
  • Demo: Inadequate Sampling
  • Demo: Parameter Sensitivity
  • Demo: Table Variable Usage
  • Demo: MSTVF Usage – Part
  • Demo: Data Type Conversions
  • Demo: Buried Predicates

Module: Tuning Imprecision Problems

  • Demo: SELECT *
  • Demo: Unnecessary Sorts
  • Demo: Needless DISTINCT
  • Demo: UNION Vs. UNION ALL
  • Demo: Leading Wildcards
  • Demo: Hidden Cartesian Products
  • Demo: Data Type Conversion Issues
  • Demo: Redundant Logic and Overlapping Object References

And part 2 will be published soon!

SQLIntersection Post-Conference Session

I am really happy to announce that Kimberly Tripp and I will be delivering a post-conference session at November’s SQLIntersection conference at the MGM Grand. The session is “Queries Gone Wild 2: Statistics and Cardinality in Versions 2008, 2008R2, 2012, and 2014” and will be delivered on November 14th, from 9:00 AM to 4:30 PM.

You can find registration details for the main conference, pre-conference, and post-conference choices here.

It will be an incredibly fun day, and you can be sure to walk away with practical query tuning techniques and a strong foundation in statistics and cardinality estimation concepts (for both the new and legacy cardinality estimators).

Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator

I’ve been working since January on a new Microsoft white paper covering the main changes made in the SQL Server 2014 Cardinality Estimator, and I’m happy to announce that it was just published:

Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator

A big thanks to the contributor and reviewer team – including Yi Fang (Microsoft), Shep Sheppard (Microsoft), Mike Weiner (Microsoft), Paul White (SQL Kiwi Limited), Barbara Kess (Microsoft), Jimmy May (Microsoft), Sanjay Mishra (Microsoft), Vassilis Papadimos (Microsoft) and Jack Li (Microsoft).  And thanks to Kathy MacDonald (Microsoft) for managing the whole process!

Finding Timeout-Prone Code with max_elapsed_time

Consider a scenario where your application timeout setting is 30 seconds and you’ve been asked to proactively start tuning workloads that are at risk of timing out.  There are several different places you can start, and I want to demonstrate an additional option today.

To demonstrate, I’ll configure the timeout in SQL Server Management Studio via the “Execution time-out” setting:

[Screenshot: the “Execution time-out” setting in SSMS]

Next – let’s create two procedures with two different WAITFOR durations (one that doesn’t time out and one that will):

CREATE PROCEDURE [dbo].[TimeOutProne]
AS
WAITFOR DELAY '00:00:29';
GO

CREATE PROCEDURE [dbo].[TimeOutProneV2]
AS
WAITFOR DELAY '00:00:30';
GO

Execute both of these (the one that succeeds first, followed by the one that times out):

EXEC [dbo].[TimeOutProne];
GO

EXEC [dbo].[TimeOutProneV2];
GO

The first execution succeeds and the second times out with the following error message:

Msg -2, Level 11, State 0, Line 13
Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.

Now execute the following query against sys.dm_exec_procedure_stats:

SELECT [database_id],
       OBJECT_NAME([object_id]) AS [object_name],
       [max_elapsed_time],
       [plan_handle],
       [sql_handle]
FROM [sys].[dm_exec_procedure_stats]
WHERE [max_elapsed_time] > 28000000;  -- microseconds, "but only accurate to milliseconds"

Interestingly enough – even though the second procedure timed out – we still have accumulated max_elapsed_time stats for it prior to the timeout:

[Screenshot: query results from sys.dm_exec_procedure_stats]

I included plan_handle and sql_handle for later hook-ins – just keeping things minimal for demonstration purposes.
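
As a minimal sketch of those hook-ins (my own example, not part of the original demo), you can CROSS APPLY the handles into sys.dm_exec_sql_text and sys.dm_exec_query_plan to pull back the procedure text and cached plan:

SELECT OBJECT_NAME(ps.[object_id], ps.[database_id]) AS [object_name],
       ps.[max_elapsed_time],
       st.[text] AS [procedure_text],
       qp.[query_plan]
FROM [sys].[dm_exec_procedure_stats] AS ps
CROSS APPLY [sys].[dm_exec_sql_text](ps.[sql_handle]) AS st
CROSS APPLY [sys].[dm_exec_query_plan](ps.[plan_handle]) AS qp
WHERE ps.[max_elapsed_time] > 28000000;  -- microseconds; near the 30-second timeout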

So if you’re seeing timeouts and want to start narrowing down workloads that are getting near or exceeding the timeout threshold, max_elapsed_time in both sys.dm_exec_procedure_stats and sys.dm_exec_query_stats can be a helpful piece of information to work with.
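
As a rough sketch of the sys.dm_exec_query_stats variation (again my own example, assuming the same 30-second application timeout), the statement offsets let you recover the specific statement text for ad hoc and batch queries as well:

SELECT qs.[max_elapsed_time],
       qs.[execution_count],
       SUBSTRING(st.[text], (qs.[statement_start_offset] / 2) + 1,
           ((CASE qs.[statement_end_offset]
                 WHEN -1 THEN DATALENGTH(st.[text])
                 ELSE qs.[statement_end_offset]
             END - qs.[statement_start_offset]) / 2) + 1) AS [statement_text]
FROM [sys].[dm_exec_query_stats] AS qs
CROSS APPLY [sys].[dm_exec_sql_text](qs.[sql_handle]) AS st
WHERE qs.[max_elapsed_time] > 28000000  -- microseconds; adjust to your timeout threshold
ORDER BY qs.[max_elapsed_time] DESC;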

Deck from “Practical SQL Server Cardinality Estimation”

I uploaded the deck from my session for SQLSaturday #287 Madison 2014 and you can download it here.

It was an excellent event!  Great organizers, volunteers, venue, and speakers – and a big turnout with lots of first-time SQLSaturday attendees.

Grateful for the experience and hope to be back again next year.

MSTVF Fixed Cardinality Value in SQL Server 2014

On SQL Server 2014 CTP2, in the AdventureWorks2012 database, execute the following batch:

USE [AdventureWorks2012];
GO

SELECT [PersonID], [FirstName], [LastName], [JobTitle], [BusinessEntityType]
FROM [dbo].[ufnGetContactInformation](2)
OPTION (QUERYTRACEON 9481);  -- Legacy CE

SELECT [PersonID], [FirstName], [LastName], [JobTitle], [BusinessEntityType]
FROM [dbo].[ufnGetContactInformation](2)
OPTION (QUERYTRACEON 2312); -- New CE

The first query uses the legacy cardinality estimator and the second query uses the new cardinality estimator.  Both queries reference a multi-statement table-valued function (MSTVF).

Looking at the plan tree view in SQL Sentry Plan Explorer for the legacy CE plan, you’ll see the following (estimating 1 row for the function operators):

[Screenshot: Plan Explorer plan tree for the legacy CE plan, estimating 1 row for the function operators]

Looking at the new CE version of the plan tree, we see the following (estimating 100 rows for the function operators):

[Screenshot: Plan Explorer plan tree for the new CE plan, estimating 100 rows for the function operators]

SQL Server 2014 uses a new default fixed cardinality estimate of 100 rows for multi-statement table-valued functions.

A few thoughts:

  • Whether 1 row or 100 rows, we’re still using a fixed guess that may or may not reflect reality
  • I’m very wary of using MSTVFs in scenarios where the estimate is critical for plan quality (and oftentimes it is) – one common workaround is sketched below
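
As a minimal sketch of that workaround (my own example against AdventureWorks2012, not from the course demos), materializing the MSTVF output into a temp table gives the optimizer real row counts and statistics for the rest of the query:

-- Materialize the function output first (real cardinality, real statistics)
SELECT [PersonID], [FirstName], [LastName], [JobTitle], [BusinessEntityType]
INTO #ContactInformation
FROM [dbo].[ufnGetContactInformation](2);

-- Subsequent joins estimate off the temp table instead of a fixed 1- or 100-row guess
SELECT [ci].[PersonID], [ci].[LastName], [e].[HireDate]
FROM #ContactInformation AS [ci]
INNER JOIN [HumanResources].[Employee] AS [e]
    ON [e].[BusinessEntityID] = [ci].[PersonID];

DROP TABLE #ContactInformation;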

March Speaking Engagements

If you’re in the area, just a heads-up that I’ll be speaking at the following events:

PASSMN Minnesota SQL Server User Group
Location:
    3601 West 76th Street, Suite 600, Edina, MN 55437
Date:    March 18, 2014
Time:    4:00 PM – 6:00 PM

SQLSaturday #287
Location: 6000 American Parkway, Building A, Madison, WI 53783
Date: March 29, 2014
Time: My session is in the morning, but there is a ton of good content all day!

I’ll be delivering the presentation “Practical SQL Server Cardinality Estimation”.  The session summary is as follows:

What is cardinality estimation and why should you care? In this session we’ll cover the mechanics behind cardinality estimation and why it is so very critical to overall query performance. We’ll cover concepts applicable to the last few major versions of SQL Server and also preview cardinality estimator changes being introduced in SQL Server 2014.

Hope to see you there!

Using the SQL Server 2014 Database Compatibility Level without the New CE

Consider the following scenario:

  • You want to migrate all databases on a specific SQL Server instance to a new SQL Server 2014 instance
  • You want to leverage new SQL Server 2014 functionality and move to the latest database compatibility level
  • You don’t want to enable the new Cardinality Estimator right away (there are various reasons why this might be the case – perhaps you haven’t had time to fully test key workloads)
  • You don’t want to have to manually add QUERYTRACEON hints for specific queries

To accommodate this scenario, you can do the following:

  • Change the migrated databases to COMPATIBILITY_LEVEL = 120 in order to leverage the SQL Server 2014 database compatibility level
  • Enable trace flag 9481 at the server level as a startup trace flag (or via DBCC TRACEON – but remember this doesn’t persist across restarts unless you re-execute it) – see the sketch after this list
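
Here is a minimal sketch of those two steps (the database name [MyDatabase] is just a placeholder for one of your migrated databases):

USE [master];
GO

-- Move the migrated database to the SQL Server 2014 compatibility level
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 120;
GO

-- Keep the legacy CE behavior instance-wide; this doesn't persist across restarts,
-- so add -T9481 as a startup parameter to make it permanent
DBCC TRACEON (9481, -1);
GO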

Trace flag 9481 (for using the legacy CE behavior) and trace flag 2312 (for using the new CE behavior) are both fully supported and documented by Microsoft here:

Enable plan-affecting SQL Server query optimizer behavior that can be controlled by different trace flags on a specific-query level

That KB article focuses mostly on QUERYTRACEON – but the CE trace flags can apply at the server-level scope as well.

There are other combinations of CE enabling/disabling that you can use, depending on your requirements, but I just wanted to point out what I think will be a more common scenario.

For the New CE, Database Session Context Matters

Testing on SQL Server 2014 CTP2 (version 12.0.1524), imagine you have two databases – one called T1 and one called T2 – configured with database compatibility levels as follows:

USE [master];
GO

ALTER DATABASE [T1] SET COMPATIBILITY_LEVEL = 120;
GO

ALTER DATABASE [T2] SET COMPATIBILITY_LEVEL = 110;
GO

Now with database T1, we know that using a compatibility level of 120 means we’ll be using the new cardinality estimator (CE) – assuming we’re in the database session context of a new CE database and don’t have a trace flag disabling the new CE behavior.

Executing the following query in the context of T1 does indeed mean we use the new CE:

USE [T1];
GO

SELECT [member_no],
       [provider_no],
       [category_no]
FROM [dbo].[charge]
WHERE [charge_no] = 422;

<StmtSimple StatementCompId="1" StatementEstRows="1" StatementId="1" StatementOptmLevel="TRIVIAL" CardinalityEstimationModelVersion="120" StatementSubTreeCost="0.0032831" StatementText="SELECT [member_no],[provider_no],[category_no] FROM [dbo].[charge] WHERE [charge_no]=@1" StatementType="SELECT" QueryHash="0x274BD0D496403EEE" QueryPlanHash="0x6B5F27FE55FE8A5C" RetrievedFromCache="true">

But what if we change the query to be in the context of the T2 database (legacy CE) – but still access data from the T1 database?

USE [T2];
GO

SELECT [member_no],
       [provider_no],
       [category_no]
FROM [T1].[dbo].[charge]
WHERE [charge_no] = 422;

Now we see the query used a legacy CE:

<StmtSimple StatementCompId="1" StatementEstRows="1" StatementId="1" StatementOptmLevel="TRIVIAL" CardinalityEstimationModelVersion="70" StatementSubTreeCost="0.0032831" StatementText="SELECT [member_no],[provider_no],[category_no] FROM [T1].[dbo].[charge] WHERE [charge_no]=@1" StatementType="SELECT" QueryHash="0x274BD0D496403EEE" QueryPlanHash="0x6B5F27FE55FE8A5C" RetrievedFromCache="true">

What if the cross-database query is executed from a new CE session context – but the destination is the legacy CE?

USE [T1];
GO

SELECT [member_no],
       [provider_no],
       [category_no]
FROM [T2].[dbo].[charge]
WHERE [charge_no] = 422;

In this scenario, the query uses the new CE – based on the database session context – even though the destination database is set to compatibility level 110.

What about accessing data from two databases (rather than my previous example of just accessing data from one database)?  The following example results in a legacy CE plan:

USE [T2];
GO

SELECT [member_no],
       [provider_no],
       [category_no]
FROM [T1].[dbo].[charge]
WHERE [charge_no] = 422
UNION
SELECT [member_no],
       [provider_no],
       [category_no]
FROM [T2].[dbo].[charge]
WHERE [charge_no] = 422;
GO

And this query results in a new CE plan:

USE [T1];
GO

SELECT [member_no],
       [provider_no],
       [category_no]
FROM [T1].[dbo].[charge]
WHERE [charge_no] = 422
UNION
SELECT [member_no],
       [provider_no],
       [category_no]
FROM [T2].[dbo].[charge]
WHERE [charge_no] = 422;
GO

So – bottom line – using the new CE isn’t just a matter of changing the database compatibility level.  Database session context also matters.
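
As a closing sketch (my own helper query, not from the examples above), you can confirm which CE model version your cached plans actually used by pulling the CardinalityEstimationModelVersion attribute out of the plan XML:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (25)
       qs.[execution_count],
       qp.[query_plan].value('(//StmtSimple/@CardinalityEstimationModelVersion)[1]', 'int')
           AS [ce_model_version],  -- 70 = legacy CE, 120 = new CE
       st.[text]
FROM [sys].[dm_exec_query_stats] AS qs
CROSS APPLY [sys].[dm_exec_query_plan](qs.[plan_handle]) AS qp
CROSS APPLY [sys].[dm_exec_sql_text](qs.[sql_handle]) AS st
ORDER BY qs.[last_execution_time] DESC;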