SQLskills SQL101: Trace Flags

“You should always use trace flag X for a SQL Server install.”

“Have you tried trace flag Y?”

“We fixed the problem using an undocumented trace flag.”

If you’re new to SQL Server, you might have heard or read some of the above statements.  If you’ve never used a trace flag, you might wonder why you might need one, and how you would know if you did need it.  SQL Server trace flags are used to change the behavior of the engine in some way.  A trace flag is ideally used for improvement, but there can be situations where a trace flag doesn’t provide the intended benefit.  In some cases, it can adversely affect the problem you’re trying to fix, or create a different issue.  As such, trace flags in SQL Server are something to use with caution.  The number one recommendation I always make when someone asks about using a trace flag is to test it, ideally in an identical or comparable situation.  This isn’t always possible, which is why there’s always a slight risk with trace flags.  There are only three (3) trace flags that we at SQLskills recommend, by default, for a SQL Server installation:

  • 1118 (for versions prior to SQL Server 2016)
  • 3023 (for versions prior to SQL Server 2014)
  • 3226

Trace flag 1118 addresses contention that can exist on a particular type of page in a database, the SGAM page.  This trace flag typically provides benefit for customers that make heavy use of the tempdb system database.  In SQL Server 2016, uniform extent allocation (the behavior this trace flag provides) is the default, and you control it with the MIXED_PAGE_ALLOCATION database option, so there is no need for TF 1118.
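
For reference, this is roughly what that option looks like (a sketch for SQL Server 2016 or higher; [YourDatabase] is a placeholder):

USE [master];
GO

-- OFF (the SQL Server 2016 default) gives the uniform-extent allocation
-- behavior that TF 1118 provided in earlier versions
ALTER DATABASE [YourDatabase] SET MIXED_PAGE_ALLOCATION OFF;
GO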

Trace flag 3023 is used to enable the CHECKSUM option, by default, for all backups taken on an instance.  With this option enabled, page checksums are validated during a backup, and a checksum for the entire backup is generated.  Starting in SQL Server 2014, this option can be set instance-wide through sp_configure (‘backup checksum default’).
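
For example, to enable it instance-wide in SQL Server 2014 or higher (a minimal sketch):

-- Enable CHECKSUM by default for all backups on the instance
EXEC sp_configure 'backup checksum default', 1;
GO
RECONFIGURE;
GO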

The last trace flag, 3226, prevents the writing of successful backup messages to the SQL Server ERRORLOG.  Information about successful backups is still written to msdb and can be queried using T-SQL.  For servers with multiple databases and regular transaction log backups, enabling this option means the ERRORLOG is no longer bloated with BACKUP DATABASE and Database backed up messages.  As a DBA, this is a good thing because when I look in my ERRORLOG, I really only want to see errors, I don’t want to scroll through hundreds or thousands of entries about successful backups.
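
If you want to see that history, here’s one way to query it from msdb (a sketch; adjust the columns and filters as needed):

-- Most recent backups first; has_backup_checksums shows whether the
-- backup was taken WITH CHECKSUM
SELECT [database_name], [type], [backup_start_date], [backup_finish_date], [has_backup_checksums]
FROM [msdb].[dbo].[backupset]
ORDER BY [backup_finish_date] DESC;
GO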

You can find a list of supported trace flags on MSDN, and as I alluded to initially, there are undocumented trace flags.  An undocumented trace flag is one that is not supported by Microsoft.  If you ever use an undocumented trace flag and you have a problem, Microsoft will not provide support for that problem; if you decide to use an undocumented trace flag, tread carefully, particularly in production.

How will you know if you should use a trace flag?  Online you’ll typically come across a forum post, blog post, or article that describes a scenario that you might be having, with the recommendation that you fix it with a trace flag.  You could also attend a user group meeting, a SQLSaturday or conference session, and hear the same thing.  You may have it recommended to you by a consultant, or another DBA or developer.  In all cases, it’s important to first confirm that what you’re seeing in your environment matches the behavior described by the trace flag.  If you believe you should enable a trace flag, enable it in a test or development environment first where you can recreate the problem, and then test it thoroughly.  Finally, after it’s gone through rigorous testing, you can try it in production.  Notice I say “try” because even with all your testing, it may not be the right solution for your environment.

If you find you do want to give a trace flag a try, there are two ways to enable/disable them:

Enabling a trace flag with DBCC TRACEON is done using T-SQL, and you have the option to set the trace flag at the session or global level.  Typically you want the trace flag to be used by the entire instance, so you enable it globally.  For testing purposes, you may just enable it at the session level.  To enable trace flag 3226 globally you would run:

DBCC TRACEON (3226, -1);
GO

The use of -1 turns on the flag for the entire instance.  To disable the trace flag you run:

DBCC TRACEOFF (3226, -1);
GO

The advantage of using DBCC TRACEON and DBCC TRACEOFF is that you don’t have to restart the instance to use the trace flag.  The drawback is that it can be disabled by anyone who has sysadmin membership and runs DBCC TRACEOFF, and that it will not persist through a restart.  I recommend using this option when testing a trace flag.
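
For completeness, enabling at the session level just omits the -1:

-- Session scope only: affects the current session, not the instance
DBCC TRACEON (3226);
GO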

Once you’ve tested the trace flag and know that you want it enabled permanently, you want to add it to the SQL Server service as a startup parameter.  This requires using SQL Server Configuration Manager.  Once you have Configuration Manager open, select Services on the left side and then you’ll see all the services listed on the right.  Highlight the SQL Server service, right-click and select Properties, then select the Startup Parameters tab.  To add a startup parameter use the syntax –T followed by the trace flag, as shown below:

Adding TF 3226 as a startup parameter for the SQL Server service

Note: There should be no space between the –T and the trace flag (but if you try and put one there, SQL Server removes it for you).

Then select Add so it appears in the Existing parameters: window, and then OK, and you will be notified that the change will not take effect until you restart the instance.  If you are not able to restart the instance immediately, you can apply it using DBCC TRACEON, just be aware that someone could remove it.

Lastly, to check what trace flags, if any, are enabled for your instance, you can use DBCC TRACESTATUS.  In our case, the output shows that we have 3226 enabled globally:

DBCC TRACESTATUS;
GO
DBCC TRACESTATUS output showing TF 3226 enabled

As you can see, using trace flags is pretty straightforward.  However, deciding whether a trace flag is needed, and then testing to ensure it provides benefit and not detriment, is what requires real work.  Use trace flags wisely, and always test first!  And remember, if you want to find all of our SQLskills SQL101 blog posts, visit SQLskills.com/help/SQL101.

Additional reading:

Upcoming Query Store Sessions

This past weekend at SQLSaturday Cleveland I presented a new session related to Query Store, Ensuring Plan Stability with Query Store.  I went into detail on forcing plans with Query Store and how that compares to Plan Guides, and I had some great questions – it was a really fun presentation.  However, I know a lot of companies have not yet upgraded to SQL Server 2016, so many DBAs and developers are still figuring out what Query Store is, how it works, and whether they want to use it (quick answer: you do).  No worries, I’m here to help!  I’ve listed upcoming Query Store sessions below (they are all an introduction to QS) – hopefully I’ll see you at one?  Definitely let me know if you’re coming to the New England UG in April or SQLIntersection in May, it’s always nice to put a face to an email address or Twitter handle!

And lastly, HUGE thanks to the entire SQLSaturday Cleveland team – the organizers, volunteers, and speakers were amazing as usual.  This year I helped with registration in the morning and it was great to greet everyone as they walked in – even those that grumbled about being up so early on a Saturday!  And another shout out to all the speakers that traveled to attend our event.  We *love* having such a diverse group of individuals present on so many different SQL Server topics.  Thank you for making the time to submit, prepare, travel, and give us your best.  We truly appreciate it.

Ok, one more thing…the Patriots won the Super Bowl last night.  Tom Brady is the quarterback of the Patriots and now has won five (5!) Super Bowls.  Tom Brady went to the University of Michigan.  GO BLUE!!

Have a great Monday :)

Upcoming Query Store sessions (intro-level)

Tuesday, February 6, 2017 [Remote]: PASS DBA Fundamentals VC

Wednesday, April 12, 2017 [Burlington, MA]: New England SQL Server UG

Wednesday, May 24, 2017 [Orlando, FL]: SQLIntersection

Forced Plans and Compatibility Mode

A question came up recently about plan guides and compatibility mode, and it got me thinking about forced plans in Query Store and compatibility mode.  Imagine you upgraded to SQL Server 2016 and kept the compatibility mode for your database at 110 to use the legacy Cardinality Estimator.  At some point, you have a plan that you force for a specific query, and that works great.  As time goes on, you do testing with the new CE and eventually are ready to make the switch to compatibility mode 130.  When you do that, does the forced plan continue to use compatibility mode 110?  I had a guess at the answer but thought it was worth testing.

Setup

I restored a copy of WideWorldImporters to my SQL 2016 SP1 instance and set the compatibility mode to 110:

USE [master];
GO

RESTORE DATABASE [WideWorldImporters]
FROM  DISK = N'C:\Backups\WideWorldImporters-Full.bak'
WITH  FILE = 1,
MOVE N'WWI_Primary' TO N'C:\Databases\WideWorldImporters\WideWorldImporters.mdf',
MOVE N'WWI_UserData' TO N'C:\Databases\WideWorldImporters\WideWorldImporters_UserData.ndf',
MOVE N'WWI_Log' TO N'C:\Databases\WideWorldImporters\WideWorldImporters.ldf',
MOVE N'WWI_InMemory_Data_1' TO N'C:\Databases\WideWorldImporters\WideWorldImporters_InMemory_Data_1',
NOUNLOAD,
REPLACE,
STATS = 5;
GO

ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 110
GO

Then I enabled Query Store and cleared out any old data that might exist (remember that WideWorldImporters is a sample database so who knows what might exist in the Query Store views):

USE [master];
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE = ON
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE (
OPERATION_MODE = READ_WRITE,
CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
DATA_FLUSH_INTERVAL_SECONDS = 900,
INTERVAL_LENGTH_MINUTES = 60,
MAX_STORAGE_SIZE_MB = 512,
QUERY_CAPTURE_MODE = ALL,
SIZE_BASED_CLEANUP_MODE = AUTO,
MAX_PLANS_PER_QUERY = 200);
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE CLEAR;
GO

Next I’ll create a stored procedure to use for testing, and then I’ll run it twice with the RECOMPILE option, as this will generate two different plans.

USE [WideWorldImporters];
GO

DROP PROCEDURE IF EXISTS [Sales].[usp_GetFullProductInfo];
GO

CREATE PROCEDURE [Sales].[usp_GetFullProductInfo]
@StockItemID INT
AS
SELECT
[o].[CustomerID],
[o].[OrderDate],
[ol].[StockItemID],
[ol].[Quantity],
[ol].[UnitPrice]
FROM [Sales].[Orders] [o]
JOIN [Sales].[OrderLines] [ol] on [o].[OrderID] = [ol].[OrderID]
WHERE [StockItemID] = @StockItemID
ORDER BY [o].[OrderDate] DESC;
GO

EXEC [Sales].[usp_GetFullProductInfo] 220  WITH RECOMPILE;
GO

EXEC [Sales].[usp_GetFullProductInfo] 105  WITH RECOMPILE;
GO

Forcing a plan

We’ll start by looking at the two different plans in Query Store.  You can do this through the UI, or by using TSQL.  I’ll use both, just for fun, and we’ll start with TSQL.

SELECT
[q].[query_id],
[q].[object_id],
[o].[name],
[p].[compatibility_level],
[qt].[query_sql_text],
[p].[plan_id],
TRY_CONVERT(XML,[p].[query_plan]) AS [QueryPlan]
FROM [sys].[query_store_query] [q]
JOIN [sys].[query_store_query_text] [qt]
ON [q].[query_text_id] = [qt].[query_text_id]
JOIN [sys].[query_store_plan] [p]
ON [q].[query_id] = [p].[query_id]
JOIN [sys].[objects] [o]
ON [q].[object_id] = [o].[object_id]
WHERE [q].[object_id] = OBJECT_ID(N'Sales.usp_GetFullProductInfo');
GO

Query Store output – two different plans

You can see in the output that there are two different plans (plan_id 3 and plan_id 4) for this stored procedure query.  I can click on the XML link to see each plan and then compare them, or I can do this from within Query Store.  It’s easier within Query Store; I just need to know the query_id (3).  Within Management Studio, expand the WideWorldImporters database, expand Query Store, then double-click on Tracked Queries and enter the query_id in the Tracking Query box.

Two different plans for query_id 3

You’ll see that there are two plans, and to compare them you click on both plans in the plan id window (hold down the CTRL key to get them both) and then select Compare Plans.

Comparing both plans

In looking at the plans, you see that the shapes are similar, but Plan 3 has a Nested Loop, while Plan 4 has a Merge Join that’s fed by a Sort.  For this example, we’ll decide that the Nested Loop plan is “better” for this query, so that’s the one we will force.

However, before we make that change, let’s see if we get a different plan with compatibility mode 130.

USE [master];
GO

ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 130;
GO

USE [WideWorldImporters];
GO

EXEC [Sales].[usp_GetFullProductInfo] 105  WITH RECOMPILE;
GO

Check Query Store again…

SELECT
[q].[query_id],
[q].[object_id],
[o].[name],
[p].[compatibility_level],
[qt].[query_sql_text],
[p].[plan_id],
TRY_CONVERT(XML,[p].[query_plan]) AS [QueryPlan]
FROM [sys].[query_store_query] [q]
JOIN [sys].[query_store_query_text] [qt]
ON [q].[query_text_id] = [qt].[query_text_id]
JOIN [sys].[query_store_plan] [p]
ON [q].[query_id] = [p].[query_id]
JOIN [sys].[objects] [o]
ON [q].[object_id] = [o].[object_id]
WHERE [q].[object_id] = OBJECT_ID(N'Sales.usp_GetFullProductInfo');
GO
Query Store output – now three different plans

We DO have a different plan!  If we look at the plan, we see that the shape is still similar, but now we have a Hash Match with a Filter operator and a Clustered Index Scan.

Plan from compatibility mode 130

Now we want to force that Nested Loop plan.  First, change the compatibility mode back to 110:

USE [master];
GO

ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 110;
GO

Next, force the plan that has the Nested Loop, and we can do this in the UI, or with TSQL.  In the UI just go back to the Tracked Queries window, select the plan, and then Force Plan.  To force the plan with TSQL, you need to know the query_id and plan_id:

USE [WideWorldImporters];
GO

EXEC sp_query_store_force_plan @query_id = 3, @plan_id = 3;
GO
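
If you later change your mind, the counterpart procedure removes the forcing (same parameters):

EXEC sp_query_store_unforce_plan @query_id = 3, @plan_id = 3;
GO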

Now the plan is forced.  If we enable the actual execution plan and re-run our stored procedure without the RECOMPILE on it (because why would you use RECOMPILE on a query with a forced plan?) we see that the Nested Loop plan is used:

EXEC [Sales].[usp_GetFullProductInfo] 105;
GO
Stored procedure’s execution plan, after being forced

And here’s the big test…  Change compatibility mode to 130 again, free procedure cache just for fun (this does not matter – when a plan is forced, it doesn’t matter if the plan exists in cache or not), and then run the stored procedure and check the plan:

USE [master];
GO
ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 130;
GO

USE [WideWorldImporters];
GO

EXEC [Sales].[usp_GetFullProductInfo] 105  WITH RECOMPILE;
GO
Stored procedure’s execution plan, after compatibility mode changed to 130

Surprised?  The Nested Loop plan is still used.  This is expected!  It does not matter if the compatibility mode for the database is different than the compatibility mode for the plan.  The forced plan is what’s used.

Summary

In this example, even when the compatibility mode for the database changed, the forced plan was still used.  Thus, forced plans are not tied to compatibility mode.  This is a good thing.  If you’ve upgraded to SQL Server 2016 and you are working to fix query performance issues related to the new cardinality estimator, forcing plans can be incredibly helpful in stabilizing performance without changing code to include trace flags or hints.  However, do not assume that a forced plan will always be used.  If you look at the Best Practice with the Query Store guidelines, there’s a section titled “Check the Status of Forced Plans Regularly.”  Within that section is this note:

However, as with plan hints and plan guides, forcing a plan is not a guarantee that it will be used in future executions.

Therefore, while you force a plan because you want it to be used – to make query performance more stable – SQL Server does not guarantee it will always be used.  There are cases when it cannot, and should not, be used, hence the recommendation to check the status of forced plans in sys.query_store_plan.
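
As a starting point, here’s a simple check you can run in the context of the user database (a sketch):

-- List forced plans, how many times forcing has failed, and why
SELECT [query_id], [plan_id], [is_forced_plan], [force_failure_count], [last_force_failure_reason_desc]
FROM [sys].[query_store_plan]
WHERE [is_forced_plan] = 1;
GO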

SQLSaturday Cleveland 2017

Cleveland peeps – we are a week away from SQLSaturday Cleveland, are you registered?!  There’s still time if you’re not, AND there is still time to register for one of the fantastic pre-cons we’re hosting this year.  Your options:

Pre-con cost is $175, which is a deal compared to what it would cost if you attended the same pre-con at the PASS Summit (add in travel costs, and it’s a steal).  Then consider that the group will be much smaller than what it would be at Summit, so you’ll have plenty of opportunities to talk directly to Adam or Ginger to ask questions.  It’s a no-brainer…so much so that I’m attending Adam’s session.  I spend a fair bit of time tuning but there is always more to learn and different perspectives are great for improving troubleshooting skills.

So talk to your manager, re-work your schedule for next week, and sign up!  If you’ll be there, stop by and say hi, or say hi on Saturday where I’ll be at the registration desk (warning: I don’t do mornings well so forgive me if I’m in a daze!) and then I’ll be presenting my new Query Store session at 2:45 PM.  I hope to see you there, and happy Fri-yay!

p.s. Don’t forget to print your SpeedPass!  :)

T-SQL Tuesday #86: Storing and Accessing the Query Store Data

T-SQL Tuesday

The crew over at Brent Ozar Unlimited is hosting this month’s T-SQL Tuesday, and the request was to find an interesting Connect item (bug fix or enhancement request) and write about it.  I’ve been knee-deep in Query Store, and there are two Connect items that immediately came to mind.

One of the primary uses of Query Store is finding query regressions – that’s where I start with every presentation and that’s how it’s been pitched by Microsoft from the beginning.  And typically we talk about troubleshooting production issues that are happening right now, but Query Store is also great for finding regressions and problems before you upgrade or make changes.  If you run Query Store in production, and have some code changes you want to test, you can restore a backup of production, make changes, then run your queries and use Query Store to do comparisons.  But when I start talking about this, I can see people start to think about their own testing process, and they start asking questions….

Where does the Query Store data live?  In the user database.

Can I change the location of the Query Store data? No, it resides in the PRIMARY filegroup.

Can I export the Query Store data? Technically yes, but it’s not a supported method.

What if I have sensitive data in Query Store that can’t exist in a non-Production environment? Just like any restore of a database with sensitive data, you’re going to have to obfuscate it.
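
Related to the storage questions above, you can check the state of Query Store and how much of its allotted space it’s using from within the database, for example:

-- Run in the context of the user database
SELECT [actual_state_desc], [current_storage_size_mb], [max_storage_size_mb]
FROM [sys].[database_query_store_options];
GO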

Two of these requests have Connect items, which I’m listing below.  The first allows you to change the location of Query Store data to reside somewhere in the user database besides the PRIMARY filegroup:

Option to store query store data in a filegroup other than PRIMARY

The other request is related to exporting that data, which is technically possible now, but it’s not a supported method so it’s not something I really want to implement in a client environment.  I’ve had many people describe their testing process, which includes restoring databases nightly.  If they’re using Query Store as part of testing, that data is lost every night by the restore.

Export Query Store tables separately from the database tables

Find time to up-vote the Connect items you think are most valuable!  Haven’t used Connect before?  You just need to create a Microsoft account (it’s free) and then you can vote.  If you use SQL Server on a daily basis, I expect you’d like to provide input into the development of the product you’re supporting.  Have fun!

Trace Flag Information in Query Plans

I was perusing the release notes for SQL Server 2014 SP2 and found this gem:

Information about enabled trace flags is added to the showplan XML in SQL Server 2014 SP2

Ohhhhhh, very cool.  This is great information for troubleshooting!

I fired up my 2014 VM and applied SP2, then verified the trace flags I had enabled for the instance:

DBCC TRACESTATUS;
GO
Trace flags enabled

I have three trace flags enabled, which we typically recommend for all installations*.

So let’s run a few queries and then check the plan.  For the first query I’m not adding any other trace flags:

SELECT [p].[ProductID],[p].[StandardCost], [p].[ListPrice], [pi].[Quantity], [p].[Name]
FROM [Production].[Product] [p]
JOIN [Production].[ProductInventory] [pi] ON [p].[ProductID] = [pi].[ProductID]
WHERE [p].[SellEndDate] IS NOT NULL
AND [pi].[Quantity]  < 1000;
GO
Query Plan

The plan is nothing too crazy – it’s not great because it’s scanning the clustered indexes, but I’m not tuning here so we’ll ignore that for now.  But notice that when I click on the SELECT operator with the Properties window open, there’s an entry for TraceFlags at the bottom:

Properties for the SELECT operator

If we expand that, we can see the three trace flags that I have enabled, and it’s noted whether they are enabled globally or locally, and at compile time (IsCompileTime True) and run time (IsCompileTime False):

Trace flag information from the properties window

And I can also go directly to the XML to find the information:

Trace flag information from the plan XML

Cool.  What happens if I use a trace flag as a query hint – like 9130, which pushes residual predicates out into a FILTER operator?

SELECT [p].[ProductID],[p].[StandardCost], [p].[ListPrice], [pi].[Quantity], [p].[Name]
FROM [Production].[Product] [p]
JOIN [Production].[ProductInventory] [pi] ON [p].[ProductID] = [pi].[ProductID]
WHERE [p].[SellEndDate] IS NOT NULL
AND [pi].[Quantity]  < 1000
OPTION (QUERYTRACEON 9130);
GO

We see the FILTER operators in the plan:

Query plan with FILTER operator

And in the XML we see 9130 for the compilation as a session trace flag, but it doesn’t show up in the execution flags:

Trace flags (with hint) from query plan XML

Now, in the aforementioned examples, I’ve run the query in Management Studio and captured the actual plan.  Do I get all the same information from the plan cache?  To check, I’ll clear the plan cache, re-run the most recent query, and then interrogate the plan cache to see what I get:

DBCC FREEPROCCACHE; /* not for production use! */
GO

SELECT [p].[ProductID],[p].[StandardCost], [p].[ListPrice], [pi].[Quantity], [p].[Name]
FROM [Production].[Product] [p]
JOIN [Production].[ProductInventory] [pi] ON [p].[ProductID] = [pi].[ProductID]
WHERE [p].[SellEndDate] IS NOT NULL
AND [pi].[Quantity]  < 1000
OPTION (QUERYTRACEON 9130);
GO

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
GO

SELECT [s].[text], [qs].[last_execution_time], [qp].[query_plan]
FROM [sys].[dm_exec_query_stats] [qs]
CROSS APPLY [sys].[dm_exec_query_plan] ([qs].[plan_handle]) qp
CROSS APPLY [sys].[dm_exec_sql_text]([qs].[plan_handle]) [s]
WHERE [s].[text]
LIKE '%SellEndDate%';
GO

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
GO

When I open the query plan from the output and look at the XML, I see compilation information, but no runtime info:

Trace flag information from query plan XML retrieved from cache

Presumably, if I run the query again without clearing the plan cache, and take off the 9130 trace flag, I would get a new entry:

SELECT [p].[ProductID],[p].[StandardCost], [p].[ListPrice], [pi].[Quantity], [p].[Name]
FROM [Production].[Product] [p]
JOIN [Production].[ProductInventory] [pi] ON [p].[ProductID] = [pi].[ProductID]
WHERE [p].[SellEndDate] IS NOT NULL
AND [pi].[Quantity]  < 1000;
GO

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
GO

SELECT [s].[text], [qs].[last_execution_time], [qp].[query_plan]
FROM [sys].[dm_exec_query_stats] [qs]
CROSS APPLY [sys].[dm_exec_query_plan] ([qs].[plan_handle]) qp
CROSS APPLY [sys].[dm_exec_sql_text]([qs].[plan_handle]) [s]
WHERE [s].[text]
LIKE '%SellEndDate%';
GO

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
GO
Query information in the plan cache

True.  When I check the plan for the second entry, with the later execution time, in the XML I find:

Trace flags from query plan XML (no hint)

Ok, so I can find what trace flags are enabled when a query is compiled, and if anyone is using QUERYTRACEON to enable trace flags, I can see that here as well.  (If you’re wondering, yes, I could also see that from querying the plan cache because it’s in the text of the query.)  On a bigger scale, I can mine the plan cache for this information.  HUGE thanks to Jonathan for saving me from a losing battle with XQuery:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
GO

WITH XMLNAMESPACES
(DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
STUFF((SELECT ', ' +  tf.value('(./@Value)[1]', 'varchar(10)')
FROM stmt.nodes('./QueryPlan/TraceFlags[@IsCompileTime="1"]/TraceFlag[@Scope="Global"]') AS t(tf)
FOR XML PATH('')
), 1, 2, '') AS GlobalTraceFlags,
STUFF((SELECT ', ' +  tf.value('(./@Value)[1]', 'varchar(10)')
FROM stmt.nodes('./QueryPlan/TraceFlags[@IsCompileTime="1"]/TraceFlag[@Scope="Session"]') AS t(tf)
FOR XML PATH('')
), 1, 2, '') AS SessionTraceFlags,
stmt.query('.') as stmt_node,
cp.usecounts,
qp.query_plan,
cp.plan_handle,
qp.objectid,
cp.objtype,
cp.cacheobjtype,
cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp
CROSS APPLY query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS batch(stmt)
WHERE stmt.exist('./QueryPlan/TraceFlags[@IsCompileTime="1"]') = 1
OPTION(RECOMPILE, MAXDOP 1);
Output from query that checks the plan cache

Awesome…Jon’s code breaks out global and session trace flags from the query plan, which makes it easy to find any session-level flags in use.  Note that this query will return the entire plan cache, so I recommend including additional predicates in the WHERE clause; for example, you can tighten the final predicate to return only plans that used a session-level trace flag, as shown below.
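
A sketch of that change, using the same XPath as Jon’s query but scoped to session flags:

-- Replace the existing WHERE clause with this stricter predicate to
-- return only plans compiled with a session-level trace flag
WHERE stmt.exist('./QueryPlan/TraceFlags[@IsCompileTime="1"]/TraceFlag[@Scope="Session"]') = 1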

*Why we recommend 3226 and 1118:

Fed up with BACKUP success messages bloating your error logs?

Misconceptions around TF 1118

*Note that 4199 is no longer needed in SQL Server 2016 if you’re using database compatibility mode 130, see SQL Server query optimizer hotfix trace flag 4199 servicing model.

T-SQL Tuesday #84: Growing New Speakers

This week I’m in Chicago for our last SQLskills Immersion Event of the year, IE0, which is one of my favorite classes to teach.  Jon is on right now talking about security, and I’m up next to talk about backups.  There’s a bit of irony here in that today’s T-SQL Tuesday is hosted by Andy Yun, whom I was fortunate enough to see last night at our Happy Hour.  I got to catch up with Andy and hear about his first Summit presentation, and he was talking about his path to becoming a presenter (I’ll let him share that story).

So when I read about today’s TSQL Tuesday, I realized I should chime in :)

I’ve written about presenting before:

There’s a lot of information in those posts.  I re-read all of them, and everything still rings true.  What can I add that’s new?  A note about comfort.

For some of my presentations, I have gotten to the point where I am incredibly comfortable with the material.  This comes with having presented the content many, many times, and a lot of real-world experience.  But I don’t ever let the material, or my delivery of it, get stale.  I am constantly tweaking my demos and often my slides.  Rarely do I re-write something entirely, but I will change the order of slides because I think of a better way to present a concept.  I will add something to a demo based on a question, or because I think it will help solidify what I’m trying to teach.  In short: I don’t stop evolving a presentation.  The core content may be the same, but I can always do better.

Don’t settle, don’t get too comfortable.  It’s not about perfection, it’s about improving.

SQL Server 2016 Upgrade Testing with the New Cardinality Estimator: Context Matters

This week I’ve been working with SQL Server 2016 and it’s been a lot of fun – this is truly a great release from the SQL Server team.  Yesterday I was working on some upgrade testing with regard to the new Cardinality Estimator (CE), introduced in SQL Server 2014 and well explained in this whitepaper written by my friend Joe Sack.  If you are upgrading to SQL Server 2014 or 2016, you should test not just your upgrade process, but also the performance of your queries with the new CE.  In some cases we have seen good/great improvements in query performance with the new CE, but in other cases we have seen significant regressions which dramatically affect overall system performance.

There are a variety of things to look for when testing beyond just query duration and resource use – you should also be looking at estimates and plan shape, which means capturing the execution plan.  I was doing just that yesterday, and in my testing I was changing the compatibility mode for the user database and running my queries with the different CEs.  I was then confirming the CE version used for the query by looking at the plan and checking the CardinalityEstimationModelVersion attribute.  Remember that CE version is tied to the compatibility mode, and the CE that will be used is tied to the current database context.  So if you’re running cross database or distributed queries, this is something to which you need to pay attention.

Consider this scenario…

I have a new installation of SQL Server 2016, and I restore my user database to the instance.  By default the compatibility level is not changed unless the database you’re restoring has a compatibility level of 90 (remember you can upgrade from SQL Server 2005+ to any other version).  In that case, it will automatically be bumped up to 100 for both SQL Server 2014 and SQL Server 2016.  For more details on supported compatibility levels for versions and the differences between the levels, see ALTER Database Compatibility Level.
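
You can always confirm the current level for each database with a quick query:

SELECT [name], [compatibility_level]
FROM [sys].[databases];
GO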

Because this is a new installation of SQL Server 2016, the system databases have compatibility level of 130.  I restore two user databases to my instance (previously running on SQL Server 2012), call them AdventureWorks and WideWorldImporters.  I leave the compatibility level for AdventureWorks at 110, and set it to 130 for WideWorldImporters.  I then start running queries, some of which are specific to a database, others which query both databases.

In the case where I’m in the context of AdventureWorks, the old CE is used – even when I query the WideWorldImporters database.  And if I’m in the WideWorldImporters database, querying across to AdventureWorks, the new CE is used.  Here’s the setup code:

/*
	Restore the databases
	(change locations as appropriate)
*/
USE [master];
GO

RESTORE DATABASE [WideWorldImporters]
	FROM  DISK = N'C:\Backups\WideWorldImporters-Full.bak'
	WITH  FILE = 1,
	MOVE N'WWI_Primary' TO N'C:\Databases\WideWorldImporters\WideWorldImporters.mdf',
	MOVE N'WWI_UserData' TO N'C:\Databases\WideWorldImporters\WideWorldImporters_UserData.ndf',
	MOVE N'WWI_Log' TO N'C:\Databases\WideWorldImporters\WideWorldImporters.ldf',
	MOVE N'WWI_InMemory_Data_1' TO N'C:\Databases\WideWorldImporters\WideWorldImporters_InMemory_Data_1',
	NOUNLOAD,
	REPLACE,
	STATS = 5;
GO

RESTORE DATABASE [AdventureWorks2016]
	FROM  DISK = N'C:\Backups\AW2014_Base.bak'
	WITH  FILE = 1,
	MOVE N'AdventureWorks2014_Data' TO N'C:\Databases\AdventureWorks2016\AdventureWorks2016_Data.mdf',
	MOVE N'AdventureWorks2014_Log' TO N'C:\Databases\AdventureWorks2016\AdventureWorks2016_Log.ldf',
	NOUNLOAD,
	REPLACE,
	STATS = 5;
GO

/*
	Set the compatibility levels
*/
USE [master];
GO
ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 130;
GO
ALTER DATABASE [AdventureWorks2016] SET COMPATIBILITY_LEVEL = 110;
GO

First, run the query in the context of WideWorldImporters, with the execution plan displayed:

USE [WideWorldImporters];
GO

SET STATISTICS XML ON;
GO

SELECT
	[o].[CustomerID],
	[c].[StoreID],
	[o].[OrderDate],
	[ol].[StockItemID],
	[ol].[Quantity],
	[ol].[UnitPrice]
FROM [WideWorldImporters].[Sales].[Orders] [o]
JOIN [WideWorldImporters].[Sales].[OrderLines] [ol] on [o].[OrderID] = [ol].[OrderID]
JOIN [AdventureWorks2016].[Sales].[Customer] [c] ON [c].[CustomerID] = [o].[CustomerID]
WHERE [o].[OrderDate] BETWEEN '2016-05-01' AND '2016-05-31'
ORDER BY [o].[OrderDate] DESC;
GO

SET STATISTICS XML OFF;
GO

If you open up the graphical plan, display the Properties window (F4), and then click on the SELECT operator in the plan, you can see that the CardinalityEstimationModelVersion is 130 (you can also view the XML and do a search to find it there):

Execution plan using the new CE

Now run the query in the context of AdventureWorks, again with the execution plan displayed:

USE [AdventureWorks2016];
GO

SET STATISTICS XML ON;
GO

SELECT
	[o].[CustomerID],
	[c].[StoreID],
	[o].[OrderDate],
	[ol].[StockItemID],
	[ol].[Quantity],
	[ol].[UnitPrice]
FROM [WideWorldImporters].[Sales].[Orders] [o]
JOIN [WideWorldImporters].[Sales].[OrderLines] [ol] on [o].[OrderID] = [ol].[OrderID]
JOIN [AdventureWorks2016].[Sales].[Customer] [c] ON [c].[CustomerID] = [o].[CustomerID]
WHERE [o].[OrderDate] BETWEEN '2016-05-01' AND '2016-05-31'
ORDER BY [o].[OrderDate] DESC;
GO

SET STATISTICS XML OFF;
GO

This time when you look at the graphical plan, the CardinalityEstimationModelVersion is 70, indicating it’s using the old CE:

Execution plan using the old CE

In this example, I have users connecting to both databases, and they can be querying one database or both.  Note that if you happen to keep users in the context of one database (e.g. a user database that has no data in it, but that you use as a “gateway” to other databases), then the compatibility level for that database will determine what CE the queries use.  The exception to all this, of course, is the CE-related trace flags.  You can override the compatibility level and force the optimizer to use a specific CE by using one of two trace flags (a quick example follows the list):

  • Use Trace Flag 9481 to revert to the legacy CE behavior from the context of a database with a compatibility level of 120 or higher
  • Use Trace Flag 2312 to enable the new CE from the context of a database with a compatibility level below 120
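
As a quick illustration (the statement itself is just an example; any query works):

USE [WideWorldImporters];
GO

-- The database is at compatibility level 130, but this statement
-- compiles with the legacy CE
SELECT COUNT(*)
FROM [Sales].[Orders]
OPTION (QUERYTRACEON 9481);
GO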

Therefore, when you’re testing your upgrade to SQL Server 2014 or 2016, take the time to look beyond the basics.  Upgrading is not just about making sure nothing breaks, it’s also about making sure that performance doesn’t degrade, even if you don’t change hardware (especially if you don’t change hardware!).  We don’t typically expect that upgrading to a newer software version would negatively affect performance, but because the change to SQL Server’s Cardinality Estimator is significant, this is one you definitely want to test.

Mapping Events from Trace to Extended Events

Every time I talk about migrating from Profiler and Trace to Extended Events (XE), I seem to add something to my presentation.  It’s not always intentional, and I don’t know if it’s because I just can’t stop tweaking my demos and content, or something else, but here in London at IEPTO2 this week, I added a query that helps you see which event in Extended Events is comparable to the event you’re used to using in Trace/Profiler.  While most of the events in XE have a name similar to the event in Trace (e.g. sp_statement_completed for SP:StmtCompleted), some mappings aren’t so intuitive.  For example, SP:Starting in Trace is module_start in XE, and SP:Completed in Trace is module_end in XE.  That’s not so bad, but if you’re ever monitoring file growths, note that the database_file_size_change event in XE is the single event for the following four events in trace: Data File Auto Grow, Data File Auto Shrink, Log File Auto Grow, and Log File Auto Shrink.

(Note: if you happen to notice databases_data_file_size_changed or databases_log_file_size_changed in the list of XE, do not use either event.  Both are deprecated and won’t return any information.  Use database_file_size_change.)
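
If you want to see the file-size-related events available on your instance, you can check the XE metadata DMVs (a sketch):

-- List XE events whose name mentions file size
SELECT [p].[name] AS [package_name], [o].[name] AS [event_name]
FROM [sys].[dm_xe_objects] [o]
JOIN [sys].[dm_xe_packages] [p] ON [o].[package_guid] = [p].[guid]
WHERE [o].[object_type] = 'event'
AND [o].[name] LIKE '%file_size%';
GO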

Anyway, those fun one-offs aside, you’ll see that the query and its output are pretty straightforward:

SELECT 
	[t].[trace_event_id] [TraceEventID], 
	[t].[name] [Trace_Event], 
	CASE	
		WHEN [t].[name] LIKE '%Audit%' AND [xe].[xe_event_name] IS NULL THEN 'This event is not implemented in Extended Events, it may be a Server Audit Event'
		WHEN ([t].[name] LIKE '%Audit Lo%' OR [t].[name] LIKE'%Audit Server%') AND [xe].[xe_event_name] IS NOT NULL THEN 'The XE Event [' + [xe].[xe_event_name] + '] exists, but use Server Audit for this event instead'
		WHEN [xe].[xe_event_name] IS NULL THEN 'No comparable XE Event'
		ELSE [xe].[xe_event_name]
	END	[XE_Event],
	[xe].[package_name] [XE_Package]
FROM [sys].[trace_events] [t] 
LEFT OUTER JOIN [sys].[trace_xe_event_map] [xe] ON [t].[trace_event_id] = [xe].[trace_event_id]
ORDER BY [t].[name] ;
GO 

You can run this on SQL Server 2012 or higher.  You could run it on SQL Server 2008/2008R2, but remember that there was not a comparable event in Extended Events for every event in Trace until SQL Server 2012.  Thus you will have some unmapped events in the 2008/2008R2 releases.

Something that you’ll notice in the query: I’m accounting for Audit events.  The Audit feature was introduced in SQL Server 2008, but it was Enterprise Edition only.  In SQL Server 2012, Server Audit is supported in all editions, so all Audit events from Trace should instead be captured with Server Audit (which leverages Extended Events behind the scenes).

Finally, don’t forget that if you’re looking to take an existing Trace and convert it to Extended Events, you can use Jonathan’s stored procedure to get the event session DDL.

Happy Extended Eventing!

SQL Trace versus Extended Events: How long does it take to start up a trace vs. an event session?

Yesterday I asked how everyone uses Trace and Profiler…  Jonathan and I have talked about creating a library of Extended Events scripts to help people migrate off of Profiler and Trace, and that post was designed to generate some ideas about what people might find useful.

I haven’t gotten a ton of comments, but I did get a few (thank you to those who have responded!), and I decided to take one of them, create a Trace, create an Extended Events session, and see how long it took for each.  Jonathan has mentioned before that he can create an XE session as fast as a Trace, and I’ve been thinking that I can as well, so I thought I’d test it.  It’s a straightforward Trace versus Extended Events test.  Want to see which is faster?  Watch the video here.