Thoughts on public speaking / presenting / teaching

A colleague of mine asked me this on Twitter the other day:

When you started speaking did you know straight away that it was something you loved doing?

My answer: No.

It’s a really good question, and I said I’d go more in depth.  We have to go way back in time.  In asking the question, I believe my colleague was thinking about speaking in the SQL Server community, but for me it started before I found the SQL Server community.

I don’t think there are many people who love public speaking from the get-go.  At the University of Michigan I had to take Communications 101 (a public speaking course) in order to graduate.  I dreaded it.  Most people did.  But I took it in the fall of my sophomore year and got an A.  (Yes, I went and checked my college transcript.)

But the first time I really spoke to a group of peers and professors to explain or teach something was my first year of graduate school.  We had a day to celebrate the accomplishments within the Kinesiology department, and I had been working on a grant that tested the effects of Botox on children with cerebral palsy.  My advisor, Dr. Brown, wanted me to present our initial findings.  I had 10 minutes.  I created 10 slides and had a one-minute video to show.  I remember Dr. Brown telling me that she used to talk about one slide for 10 minutes; she had no idea how I’d get through all 10.  I was terrified I’d finish in 5 minutes.

I have a hazy memory of my talk – I remember what I wore, I remember thinking my voice was shaking, I remember feeling nervous, I remember nodding at Dr. Watkins to start the video…and that’s it.

I can’t remember any feedback, but I do remember thinking I didn’t want to do that again.

Flash forward a couple months to Dr. Brown’s idea that I could teach the motor control section of the Movement Science 110 course.  Teaching freshmen and sophomores.  People who were PAYING a lot of money to go to school at Michigan.  Again, I was terrified, despite Dr. Brown’s logic: I’d get paid, I would experience teaching, and it gave me a chance to learn the material even better.  I didn’t even have to create the content – I could just use what she had already been using.  I don’t know if I even tried to argue; I probably knew I wouldn’t win (Dr. Brown was pretty persistent).  So in the fall of 1997, I started teaching.  On the first day I had a student argue with me about theories.  THEORIES!  I was teaching science.  I wanted to quit, but I didn’t.  I taught that class for two years, and I probably learned more than my students did.

Fast-forward a couple years to my first job in technology, at a software company, providing technical support.  I was soon asked if I was interested in training customers as well, as there was only one other person who handled training at that time.  I said yes – voluntarily this time.  I learned the software, I learned how to teach other people how to use it, and I got better.

By the time I worked in the Database Services department at Hyland I sought out opportunities to teach.  Every year there was a user conference, and during my first year on the team I asked a senior member of management if I could help with his presentation.  Now, I don’t remember the impetus, but we started co-presenting, until the year that he looked at me and said: “You can do this without me, I’m about to retire.”  I taught that class at multiple conferences over the next few years.  I asked to add database classes to the conferences and I developed and delivered those.  I provided internal training and recorded material to be viewed by partners and users online.  By then, I loved it.

When I discovered the SQL Server community and found out there was a conference every year (the PASS Summit) my initial thought was, “I want to present at that!”  And so I worked my way up.  I presented to my user group in the winter of 2010, and then at the Cleveland SQLSaturday in February 2011.  My first Summit was that same year, with a lot of other SQLSaturday events in between.

I’ve now been “presenting” off and on for about 20 years.  And I put presenting in quotes because I don’t think of it that way; I think I’m always teaching.  I’ve gotten a lot of experience in those years, and as a result I’ve gotten comfortable in front of a crowd and have developed my own style.  And while I’m proud of what I’ve accomplished, I still work to improve.  I tweak every session trying to figure out how to make an explanation even clearer.  I change demos all the time, trying to get them *just right* so they easily demonstrate a concept.  I continually read an audience and make adjustments on the fly when I can.  It doesn’t end, and I’m ok with that.  I do enjoy presenting/teaching now, but I didn’t when I started…because it was uncomfortable, because it was hard, because I didn’t know what I was doing.  Because like everything else, it takes practice to become good, even if you have a knack for it from the start.

The greats weren’t great because at birth they could paint
The greats were great cause they paint a lot
~Macklemore and Ryan Lewis

Endpoints for Mirroring and AGs in SQL Server 2016

I migrated a customer to SQL Server 2016 last weekend (YAY!) and ran into an interesting issue. The original environment was SQL Server 2012 on server A. The new environment, running SQL Server 2016, is a three-node Availability Group with servers B, C, and D. I had already set up the AG with a test database in the new environment, with B as the primary and C and D as replicas. To upgrade with little downtime, I mirrored from server A to server B, and that’s where I ran into this error:

Alter failed for Database ‘AdminSQLskills’. (Microsoft.SqlServer.Smo)
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
The server network address “TCP://avengers.com:5022” can not be reached or does not exist.
Check the network address name and that the ports for the local and remote endpoints are operational. (Microsoft SQL Server, Error: 1418)

[You can see the image of the error in this StackOverflow post, which is also where I found the solution.]

I verified the following:

1. The databases on server B had been restored with NORECOVERY.
2. The accounts I used had the same permissions on both instances.
3. The endpoints existed.
4. Encryption was enabled for both endpoints.

Then I found my issue. The endpoints had different encryption methods.

For SQL Server 2014 and earlier, the endpoints use RC4 encryption by default. Starting in SQL Server 2016, the endpoints use AES encryption by default (see CREATE ENDPOINT). According to the endpoint documentation, RC4 encryption is deprecated.
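
If you’re not sure what each endpoint is currently using, you can check before changing anything; sys.database_mirroring_endpoints shows the configured algorithm on each instance:

SELECT
[name],
[role_desc],
[state_desc],
[is_encryption_enabled],
[encryption_algorithm_desc]
FROM [sys].[database_mirroring_endpoints];
GO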

The fix was easy: on the 2012 server, I changed the encryption to AES:

ALTER ENDPOINT [Mirroring]
    FOR DATA_MIRRORING ( ENCRYPTION  = REQUIRED ALGORITHM AES);
GO

Note that if I had changed the encryption on the 2016 instance to use RC4 encryption, the Availability Group would no longer work.

Once I made this change, mirroring was up and running. All my prep work paid off, as the upgrade last weekend took minutes once we confirmed all services were shut down and users were out of the system. We had minimal post-upgrade issues to work through, and my next step is to enable Query Store :) Hooray for 2016!

Remove Files From tempdb

I made a mistake with a script today. I created three new tempdb files sized at 10GB each that filled up a hard drive.

Whoops.

Luckily it was in one of my own testing VMs, so it wasn’t awful. Fixing it, however, was a fun one.

**NOTE: All work was done in a test environment. Proceed with caution if you’re running these commands in Production and make sure you understand the ramifications.

In order to remove a file from a database in SQL Server, it has to be empty. For each file I wanted to remove I needed to run:

USE [tempdb];
GO
DBCC SHRINKFILE (logicalname, EMPTYFILE);
GO

However, every time I tried to run this command for any file, I would get a message like this:

DBCC SHRINKFILE: Page 4:130 could not be moved because it is a work table page.
Msg 2555, Level 16, State 1, Line 1
Cannot move all contents of file “logicalname” to other places to complete the emptyfile operation.

This error came up for each file, even if I restarted the instance and did nothing, and even if I restarted it in single-user mode.

Then I found some posts about clearing the procedure cache and the session cache, so I cleared everything…go big or go home, right? Remember, I’m working in a local test environment so this isn’t a big deal.

DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO
DBCC FREESESSIONCACHE
GO
DBCC FREESYSTEMCACHE ( 'ALL')
GO

If I tried to empty the file after that, it still failed.

**Note: In talking with Jonathan after the fact, he said he’s seen this before, where every file in tempdb has a workfile in it that you cannot remove. He thinks the behavior started with SQL Server 2012. I haven’t found any documentation from Microsoft about this…yet…

Now I was getting annoyed (mostly with myself for making this mistake in the first place). Finally, I tried starting SQL Server with minimal configuration, using -f, and connected with sqlcmd. The documentation notes that “tempdb is configured at the smallest possible size.” So small that not all the files were there! I couldn’t run the DBCC SHRINKFILE command because the additional files weren’t available. Perfect, as then I could just remove them:

ALTER DATABASE [tempdb]  REMOVE FILE [logicalname]
GO

I ran the ALTER DATABASE [tempdb] REMOVE FILE for each of the three files I added, shut down the instance, removed -f, and restarted. The files were removed! They were still sitting out on the drive, but because they were no longer in use I could delete them. Space reclaimed, time for some chocolate.
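
For reference, a quick way to confirm which files tempdb currently references (before you delete anything at the OS level) is to check sys.database_files:

USE [tempdb];
GO
SELECT
[file_id],
[type_desc],
[name],
[physical_name],
[size]*8/1024 AS [SizeMB]
FROM [sys].[database_files];
GO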

SQLskills SQL101: Updating SQL Server Statistics Part II – Scheduled Updates

In last week’s post I discussed the basics of how automatic updates to statistics occur in SQL Server.  This week I want to talk about scheduled (aka manual) updates, because as you might remember, we really want to control when statistics are updated.

In terms of updating statistics you have multiple options, including:

  • Update Statistics Task (Maintenance Plan)
  • sp_updatestats
  • UPDATE STATISTICS

For systems that do not have a full-time DBA, one of the easiest methods for managing statistics is the Update Statistics Task.  This task can be configured for all databases or certain databases, and you can determine what statistics it updates:

Update Statistics Task – deciding which statistics to update

You might think you want to update All existing statistics.  If you had a plan with just this task, that might be true.  But what I see most often is that someone configures the Rebuild Index task, and then has the Update Statistics task as the next step.  In that case, if you are running SQL Server 2014 and below, you want to update Column statistics only.  When you run the Rebuild Index task in SQL Server 2014 and below, you rebuild all indexes, and when you rebuild an index, its statistic is updated with a fullscan.  Therefore, there is no need to update Index statistics after you rebuild all your indexes, but you do need to update column statistics.

This is a bit more complicated in SQL Server 2016.  The Rebuild Index task has more options in SQL Server 2016, which is nice for that specific task, but it makes providing guidance about statistics updates a bit trickier.  In SQL Server 2016 you can configure the Rebuild Index task so that it only rebuilds an index if a certain level of fragmentation exists.  Therefore, some of your indexes will rebuild (and thus have statistics updated) and some will not (and not have updated statistics).  How do you manage that with the Update Statistics task?  Well, in that case you probably select All existing statistics and update some statistics for a second time, which is really a waste.  Therefore, if you’re on SQL Server 2016, you probably want to look at more intelligent updates.

One method, which I would not call intelligent but which is an option, is to use sp_updatestats in a scheduled job that runs on a regular basis.  This command is one you run for a database, not for a specific statistic, index, or table.  The sp_updatestats command will only update statistics if data has changed.  That sounds good, but the caveat is that only one (1) row has to have changed.  If I have a table with 2,000,000 rows, and only 5 rows have changed, I really don’t need to update statistics.
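
If you do decide to go this route, the call itself is trivial – it runs against the current database ([YourDatabase] below is just a placeholder):

USE [YourDatabase];
GO
EXEC sp_updatestats;
GO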

The other method is to use UPDATE STATISTICS in a scheduled job.  The UPDATE STATISTICS command can be run for individual statistics or for a table (updating all statistics for a table).  You can develop an intelligent method to use this command, which is what I recommend.  Rather than a blanket update to all statistics, or to statistics where only one row has changed, I prefer to update statistics that are outdated based on the amount of data that has changed.  Consider the aforementioned table with 2,000,000 rows.  If I let SQL Server update statistics automatically, I would need 400,500 rows to change.  It’s quite possible that with a table of that size I would want statistics to update sooner – say after 200,000 rows had changed, or 10% of the table.
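
For reference, the command itself is straightforward – the first statement below updates all statistics on a table with the default sample, and the second updates the statistic for a single index with a fullscan (the index name is from the same AdventureWorks table used in the next example):

USE [AdventureWorks2012];
GO
-- update all statistics on the table, default sample
UPDATE STATISTICS [Sales].[SalesOrderDetail];
GO
-- update a single statistic with a full scan
UPDATE STATISTICS [Sales].[SalesOrderDetail] [IX_SalesOrderDetail_ProductID] WITH FULLSCAN;
GO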

We can programmatically determine whether we need to update statistics using the sys.dm_db_stats_properties DMF.  This DMF tracks modifications, and also tells us how many rows were in the table when statistics were last updated, and the date statistics were updated. For example, if I update some rows in Sales.SalesOrderDetail, and then look at the output from the DMF, you can see that the modification counter matches the number of rows I changed* for the ProductID index:

USE [AdventureWorks2012];
GO

UPDATE [Sales].[SalesOrderDetail]
SET [ProductID] = [ProductID]
WHERE [ProductID] IN (921,873,712);
GO

SELECT
[so].[name] [TableName],
[ss].[name] [StatisticName],
[ss].[stats_id] [StatisticID],
[sp].[last_updated] [LastUpdated],
[sp].[rows] [RowsInTableWhenUpdated],
[sp].[rows_sampled] [RowsSampled],
[sp].[modification_counter] [NumberOfModifications]
FROM [sys].[stats] [ss]
JOIN [sys].[objects] [so] ON [ss].[object_id] = [so].[object_id]
CROSS APPLY [sys].[dm_db_stats_properties] ([so].[object_id], [ss].stats_id) [sp]
WHERE [so].[name] =  N'SalesOrderDetail';
GO

Output from sys.dm_db_stats_properties

*You’re correct, I technically didn’t change ProductID to a new value, but SQL Server doesn’t know that.  Also, there’s a foreign key on that column, which is why I can’t easily change it to a random number.

Armed with this type of data, we can intelligently decide whether we should update statistics because a percentage of rows (rather than just a fixed number of rows) have changed.  In the example above, only 8% of data changed – probably not enough to require me to update statistics.  It’s quite possible that some statistics need to be updated daily because there is a high rate of change, and other statistics only need to be updated weekly or monthly because data doesn’t change much at all.
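
Here is a minimal sketch of that logic for the current database – it uses a 10% change threshold (an arbitrary number for illustration) and generates the UPDATE STATISTICS statements rather than running them, so you can review what it would do and adapt it to your environment:

SELECT
[so].[name] [TableName],
[ss].[name] [StatisticName],
[sp].[rows] [RowsInTableWhenUpdated],
[sp].[modification_counter] [NumberOfModifications],
N'UPDATE STATISTICS ' + QUOTENAME(SCHEMA_NAME([so].[schema_id])) + N'.' + QUOTENAME([so].[name]) + N' ' + QUOTENAME([ss].[name]) + N';' [UpdateStatement]
FROM [sys].[stats] [ss]
JOIN [sys].[objects] [so] ON [ss].[object_id] = [so].[object_id]
CROSS APPLY [sys].[dm_db_stats_properties] ([so].[object_id], [ss].[stats_id]) [sp]
WHERE [so].[is_ms_shipped] = 0
AND [sp].[rows] > 0
AND [sp].[modification_counter] > (0.10 * [sp].[rows]);
GO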

Ultimately, when it comes to scheduled updates of statistics, you can go the sledgehammer route (Update Statistics task or sp_updatestats) or the selective update route (UPDATE STATISTICS and sys.dm_db_stats_properties).  Using the Update Statistics task or sp_updatestats is easier if you’re not familiar with SQL Server and if you have the maintenance window and resources for it to run.  To be perfectly clear: if you’re a system administrator and want to update statistics, I’d rather you use this approach than nothing at all.  Presumably, if you don’t have a full-time DBA, you also don’t need the system to be available 24×7, so you can take the performance hit at night or on the weekend while all statistics update.  In that situation I’m ok with the approach.

But, if you are a DBA and you know how to write T-SQL, then you can absolutely write some code that programmatically looks at your statistics and decides what to update and what to skip.  Whatever method you use, just make sure your updates are scheduled to run regularly through an Agent Job, and make sure you have Auto Update Statistics enabled just in case the job doesn’t run and you don’t get notified for some reason (this would be Plan B, because it’s always good for DBAs to have a Plan B!).

SQLskills SQL101: Updating SQL Server Statistics Part I – Automatic Updates

One of my favorite topics in SQL Server is statistics, and in my next two posts I want to cover how they are updated: either by SQL Server or by you.

We’ll start with updates by SQL Server, and these happen automatically. In order for automatic updates of statistics to occur, the AUTO UPDATE STATISTICS database option must be enabled for the database:

Auto Update Statistics option via SSMS

This option is enabled by default for every new database you create in SQL Server 2005 and higher, and it is recommended to leave this option enabled. If you’re not sure if this option is enabled, you can check in the UI or you can use the following T-SQL:

SELECT
	[name] [DatabaseName],
	CASE
		WHEN [is_auto_update_stats_on] = 1 THEN 'Enabled'
		ELSE 'Disabled'
	END [AutoUpdateStats]
FROM [sys].[databases]
ORDER BY [name];
GO

If you want to enable the option, you can run:

USE [master];
GO
ALTER DATABASE [<database_name_here>] SET AUTO_UPDATE_STATISTICS ON WITH NO_WAIT;
GO

With the option enabled, SQL Server marks statistics as out of date based on internal thresholds.

For SQL Server 2014 and earlier, the threshold was 500 rows plus 20% of the total rows in a table. For example, if I have a table with 10,000 rows in it, when 2500 rows have changed, then SQL Server marks the statistic as out of date. There are exceptions to this (e.g. when a table has less than 500 rows, or if the table is temporary), but in general this threshold is what you need to remember.

A new trace flag, 2371, was introduced in SQL Server 2008 R2 SP1 to lower this threshold. This change was designed to target large tables. Imagine a table with 10 million rows; over 2 million rows would need to change before statistics would be marked as out of date. With trace flag 2371, the threshold becomes dynamic – it decreases as the table grows (roughly the square root of 1,000 times the number of rows in the table).
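
If you want to try the trace flag on an older version, you can enable it globally as shown below (though it’s more common to add it as a startup parameter so it survives restarts). For the 10-million-row table above, the dynamic threshold works out to roughly 100,000 rows instead of more than 2 million.

DBCC TRACEON (2371, -1);
GO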

In SQL Server 2016, the threshold introduced by trace flag 2371 is used if you have the compatibility mode for a database set to 130. This means that in SQL Server 2016, you only need to use trace flag 2371 to get that lower threshold if you have the database compatibility mode set to 120 or lower.

If statistics have been marked as out of date, then they will be updated by SQL Server automatically the next time they are used in a query. Understand that they are not updated the moment they are out of date…they are not updated until they are needed. Imagine the following scenarios using the original threshold:

Example 1 – PhysicianData

Date/Time – Action
Sunday, March 19, 2017, 2:00 AM – Statistics updated for table PhysicianData, which has 500,000 rows in it
Monday, March 20, 2017, 6:00 AM – Processing job runs, and 50,000 new rows are added to the PhysicianData table
Tuesday, March 21, 2017, 6:00 AM – Processing job runs, and 50,500 new rows are added to the PhysicianData table; statistics for PhysicianData are marked as out of date
Tuesday, March 21, 2017, 7:35 AM – A user queries PhysicianData for the first time since processing ran at 6:00 AM; statistics for PhysicianData are updated

Example 2 – PatientData

Date/Time – Action
Sunday, March 19, 2017, 2:00 AM – Statistics updated for table PatientData, which has 2,000,000 rows in it
Monday, March 20, all day – Different processes and user activities access PatientData, adding new rows, changing existing rows.  By the end of day 100,000 rows have changed or been added.
Tuesday, March 21, all day – Different processes and user activities access PatientData, adding new rows, changing existing rows.  By the end of day 250,000 rows have changed or been added.
Wednesday, March 22, all day – Different processes and user activities access PatientData, adding new rows, changing existing rows.  At 8:15 PM, 400,500 rows have changed or been added.
Wednesday, March 22, 8:16 PM – A user queries PatientData; statistics for PatientData are updated

I’ve given two very contrived examples to help you understand that statistics are not always updated the exact moment they are marked as out of date.  They might be – if the table has a lot of activity – but they might not be.

As I stated originally, it is recommended to leave this option enabled for a database.  However, we do not want to rely on SQL Server for our statistics updates.  In fact, think of this option as a safety net for statistics.  We want to control when statistics are updated, not SQL Server.  Consider the first scenario I described, where statistics updated at 7:35 AM.  If that’s a busy time of day and this is a large table, it could affect performance in the system.  It’s preferable to have statistics updated when the system has less activity, so that resource use doesn’t contend with user activity, but we always want to leave Auto Update Statistics enabled for a database…just in case.

SQLskills SQL101: The SQL Server ERRORLOG

One of the most useful logs you can review when there’s a problem in SQL Server is the ERRORLOG.  It may not always be the answer to your problem, but it’s a good place to start.

When you initially install SQL Server it only keeps the most recent six (6) ERRORLOG files, in addition to the current, active one.  A new ERRORLOG file is generated when the instance restarts, or when you run sp_cycle_errorlog.  There are drawbacks to this default configuration.  If you do not regularly restart your instance (which is perfectly fine), then one ERRORLOG file could contain months, maybe even a year or more, of information.  That’s a lot of entries to read through if you’re looking for patterns or unusual errors.  In addition, if you happen to run into a scenario where you restart the instance multiple times in succession – three or four times for example – you could potentially lose months of history.

The solution is to recycle the ERRORLOG on a regular basis (I like to do this weekly), and increase the number of files retained.  To recycle the ERRORLOG every week, set up an Agent job that calls sp_cycle_errorlog.  I’ve included code at the end of this post to create the Agent job and weekly schedule.

Next, increase the number of ERRORLOG files you keep.  You can do this through Management Studio.  Expand the instance, then Management, right-click on SQL Server Logs and select Configure.  Enable the option Limit the number of error log files before they are recycled and then enter a number for Maximum number of error log files; I like to keep 30 around.  That usually equates to about six months of time, including a few unplanned restarts.

Configure SQL Server to keep 30 ERRORLOG files

You can also make this change with T-SQL:

USE [master];
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 30;
GO

Checking the ERRORLOG configuration is always something we do as part of a health audit, and I’m always happy when I find systems with at least a few months’ worth of files that are less than a few MB in size (I think the largest ERRORLOG I’ve seen is 4GB…that one took a long time to open).  If this isn’t something you’ve configured on your SQL Server instances yet, take a few minutes and knock it out.  You won’t regret having this information when a problem comes up, or when you’re looking to see if a problem occurred a few months ago but maybe no one realized it.

If you’re interested in other posts in our SQLskills SQL101 series, check out SQLskills.com/help/SQL101.

Code to create a SQL Agent job to run sp_cycle_errorlog weekly (Sundays at 12:01 AM):

USE [msdb];
GO
/****** Object:  Job [SQLskills Cycle ERRORLOG Weekly] ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [Database Maintenance] ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'Database Maintenance' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'Database Maintenance'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'SQLskills Cycle ERRORLOG Weekly',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Cycle the ERRORLOG once a week.',
@category_name=N'Database Maintenance',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Cycle ERRORLOG] ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Cycle ERRORLOG',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'EXEC sp_cycle_errorlog;
GO',
@database_name=N'msdb',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Weekly cycle of ERRORLOG',
@enabled=1,
@freq_type=8,
@freq_interval=1,
@freq_subday_type=1,
@freq_subday_interval=0,
@freq_relative_interval=0,
@freq_recurrence_factor=1,
@active_start_date=20170301,
@active_end_date=99991231,
@active_start_time=100,
@active_end_time=235959,
@schedule_uid=N'23a32e3e-c803-451f-b85a-b77d5b97ab3a'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

SQLskills SQL101: Trace Flags

“You should always use trace flag X for a SQL Server install.”

“Have you tried trace flag Y?”

“We fixed the problem using an undocumented trace flag.”

If you’re new to SQL Server, you might have heard or read some of the above statements.  If you’ve never used a trace flag, you might wonder why you might need one, and how you would know if you did need it.  SQL Server trace flags are used to change the behavior of the engine in some way.  A trace flag is ideally used for improvement, but there can be situations where a trace flag doesn’t provide the intended benefit.  In some cases, it can adversely affect the problem you’re trying to fix, or create a different issue.  As such, trace flags in SQL Server are something to use with caution.  The number one recommendation I always make when someone asks about using a trace flag is to test it, ideally in an identical or comparable situation.  This isn’t always possible, which is why there’s always a slight risk with trace flags.  There are only three (3) trace flags that we at SQLskills recommend, by default, for a SQL Server installation:

  • 1118 (for versions prior to SQL Server 2016)
  • 3023 (for versions prior to SQL Server 2014)
  • 3226

Trace flag 1118 addresses contention that can exist on a particular type of page in a database, the SGAM page.  This trace flag typically provides benefit for customers that make heavy use of the tempdb system database.  In SQL Server 2016, you change this behavior using the MIXED_PAGE_ALLOCATION database option, and there is no need for TF 1118.
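
For a user database on SQL Server 2016, that option looks like the statement below ([YourDatabase] is just a placeholder); tempdb in SQL Server 2016 already uses this allocation behavior by default:

USE [master];
GO
ALTER DATABASE [YourDatabase] SET MIXED_PAGE_ALLOCATION OFF;
GO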

Trace flag 3023 is used to enable the CHECKSUM option, by default, for all backups taken on an instance.  With this option enabled, page checksums are validated during a backup, and a checksum for the entire backup is generated.  Starting in SQL Server 2014, this option can be set instance-wide through sp_configure (‘backup checksum default’).
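
In SQL Server 2014 and higher, enabling that setting instance-wide looks like this:

EXEC sp_configure 'backup checksum default', 1;
GO
RECONFIGURE;
GO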

The last trace flag, 3226, prevents the writing of successful backup messages to the SQL Server ERRORLOG.  Information about successful backups is still written to msdb and can be queried using T-SQL.  For servers with multiple databases and regular transaction log backups, enabling this option means the ERRORLOG is no longer bloated with BACKUP DATABASE and Database backed up messages.  As a DBA, this is a good thing because when I look in my ERRORLOG, I really only want to see errors, I don’t want to scroll through hundreds or thousands of entries about successful backups.
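
If you enable 3226 and still want to review backup history, a simple query against msdb provides it; this example looks at the last seven days:

SELECT
[database_name],
[type],
[backup_start_date],
[backup_finish_date]
FROM [msdb].[dbo].[backupset]
WHERE [backup_finish_date] > DATEADD(DAY, -7, GETDATE())
ORDER BY [backup_finish_date] DESC;
GO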

You can find a list of supported trace flags on MSDN, and as I alluded to initially, there are undocumented trace flags.  An undocumented trace flag is one that is not supported by Microsoft.  If you ever use an undocumented trace flag and you have a problem, Microsoft will not provide support for that problem; if you decide to use an undocumented trace flag, tread carefully, particularly in production.

How will you know if you should use a trace flag?  Online you’ll typically come across a forum post, blog post, or article that describes a scenario that you might be having, with the recommendation that you fix it with a trace flag.  You could also attend a user group meeting, a SQLSaturday or conference session, and hear the same thing.  You may have it recommended to you by a consultant, or another DBA or developer.  In all cases, it’s important to first confirm that what you’re seeing in your environment matches the behavior described by the trace flag.  If you believe you should enable a trace flag, enable it in a test or development environment first where you can recreate the problem, and then test it thoroughly.  Finally, after it’s gone through rigorous testing, you can try it in production.  Notice I say “try” because even with all your testing, it may not be the right solution for your environment.

If you find you do want to give a trace flag a try, there are two ways to enable and disable it: with DBCC TRACEON and DBCC TRACEOFF, or as a startup parameter for the SQL Server service.

Enabling a trace flag with DBCC TRACEON is done using T-SQL, and you have the option to set the trace flag at the session or global level.  Typically you want the trace flag to be used by the entire instance, so you enable it globally.  For testing purposes, you may just enable it at the session level.  To enable trace flag 3226 globally you would run:

DBCC TRACEON (3226, -1);
GO

The use of -1 turns on the flag for the entire instance.  To disable the trace flag you run:

DBCC TRACEOFF (3226, -1);
GO

The advantage of using DBCC TRACEON and DBCC TRACEOFF is that you don’t have to restart the instance to use the trace flag.  The drawback is that it can be disabled by anyone who has sysadmin membership and runs DBCC TRACEOFF, and that it will not persist through a restart.  I recommend using this option when testing a trace flag.

For cases where you’ve tested the trace flag and you know that you want it enabled, then you want to add it to the SQL Server service as a startup parameter.  This requires using SQL Server Configuration Manager.  Once you have Configuration Manager open, select Services on the left side and then you’ll see all the services listed on the right.  Highlight the SQL Server service, right-click and select Properties, then select the Startup Parameters tab.  To add a startup parameter use the syntax -T followed by the trace flag, as shown below:

Adding TF 3226 as a startup parameter for the SQL Server service

Note: There should be no space between the -T and the trace flag (but if you try and put one there, SQL Server removes it for you).

Then select Add so it appears in the Existing parameters: window, and then OK, and you will be notified that the change will not take effect until you restart the instance.  If you are not able to restart the instance immediately, you can apply it using DBCC TRACEON, just be aware that someone could remove it.

Lastly, to check what trace flags, if any, are enabled for your instance, you can use DBCC TRACESTATUS.  In our case, the output shows that we have 3226 enabled globally:

DBCC TRACESTATUS;
GO

DBCC TRACESTATUS output showing TF 3226 enabled

As you can see, using trace flags is pretty straightforward.  However, deciding whether a trace flag is needed and then testing to ensure it provides benefit and not detriment is what requires real work.  Use trace flags wisely, and always test first!  And remember, if you want to find all of our SQLskills SQL101 blog posts, visit SQLskills.com/help/SQL101.

Upcoming Query Store Sessions

This past weekend at SQLSaturday Cleveland I presented a new session related to Query Store, Ensuring Plan Stability with Query Store.  I went into detail on forcing plans with Query Store and how that compares to Plan Guides, and I had some great questions – it was a really fun presentation.  However, I know a lot of companies have not yet upgraded to SQL Server 2016, so many DBAs and developers are still figuring out what Query Store is, how it works, and whether they want to use it (quick answer: you do).  No worries, I’m here to help!  I’ve listed upcoming Query Store sessions below (they are all an introduction to QS) – hopefully I’ll see you at one?  Definitely let me know if you’re coming to the New England UG in April or SQLIntersection in May, it’s always nice to put a face to an email address or Twitter handle!

And lastly, HUGE thanks to the entire SQLSaturday Cleveland team – the organizers, volunteers, and speakers were amazing as usual.  This year I helped with registration in the morning and it was great to greet everyone as they walked in – even those that grumbled about being up so early on a Saturday!  And another shout out to all the speakers that traveled to attend our event.  We *love* having such a diverse group of individuals present on so many different SQL Server topics.  Thank you for making the time to submit, prepare, travel, and give us your best.  We truly appreciate it.

Ok, one more thing…the Patriots won the Super Bowl last night.  Tom Brady is the quarterback of the Patriots and now has won five (5!) Super Bowls.  Tom Brady went to the University of Michigan.  GO BLUE!!

Have a great Monday :)

Upcoming Query Store sessions (intro-level)

Tuesday, February 6, 2017 [Remote]: PASS DBA Fundamentals VC

Wednesday, April 12, 2017 [Burlington, MA]: New England SQL Server UG

Wednesday, May 24, 2017 [Orlando, FL]: SQLIntersection

Forced Plans and Compatibility Mode

A question came up recently about plan guides and compatibility mode, and it got me thinking about forced plans in Query Store and compatibility mode.  Imagine you upgraded to SQL Server 2016 and kept the compatibility mode for your database at 110 to use the legacy Cardinality Estimator.  At some point, you have a plan that you force for a specific query, and that works great.  As time goes on, you do testing with the new CE and eventually are ready to make the switch to compatibility mode 130.  When you do that, does the forced plan continue to use compatibility mode 110?  I had a guess at the answer but thought it was worth testing.

Setup

I restored a copy of WideWorldImporters to my SQL 2016 SP1 instance and set the compatibility mode to 110:

USE [master];
GO

RESTORE DATABASE [WideWorldImporters]
FROM  DISK = N'C:\Backups\WideWorldImporters-Full.bak'
WITH  FILE = 1,
MOVE N'WWI_Primary' TO N'C:\Databases\WideWorldImporters\WideWorldImporters.mdf',
MOVE N'WWI_UserData' TO N'C:\Databases\WideWorldImporters\WideWorldImporters_UserData.ndf',
MOVE N'WWI_Log' TO N'C:\Databases\WideWorldImporters\WideWorldImporters.ldf',
MOVE N'WWI_InMemory_Data_1' TO N'C:\Databases\WideWorldImporters\WideWorldImporters_InMemory_Data_1',
NOUNLOAD,
REPLACE,
STATS = 5;
GO

ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 110
GO

Then I enabled Query Store and cleared out any old data that might exist (remember that WideWorldImporters is a sample database so who knows what might exist in the Query Store views):

USE [master];
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE = ON
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE (
OPERATION_MODE = READ_WRITE,
CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
DATA_FLUSH_INTERVAL_SECONDS = 900,
INTERVAL_LENGTH_MINUTES = 60,
MAX_STORAGE_SIZE_MB = 512,
QUERY_CAPTURE_MODE = ALL,
SIZE_BASED_CLEANUP_MODE = AUTO,
MAX_PLANS_PER_QUERY = 200);
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE CLEAR;
GO

Next I’ll create a stored procedure to use for testing, and then I’ll run it twice with the RECOMPILE option, as this will generate two different plans.

USE [WideWorldImporters];
GO

DROP PROCEDURE IF EXISTS [Sales].[usp_GetFullProductInfo];
GO

CREATE PROCEDURE [Sales].[usp_GetFullProductInfo]
@StockItemID INT
AS
SELECT
[o].[CustomerID],
[o].[OrderDate],
[ol].[StockItemID],
[ol].[Quantity],
[ol].[UnitPrice]
FROM [Sales].[Orders] [o]
JOIN [Sales].[OrderLines] [ol] on [o].[OrderID] = [ol].[OrderID]
WHERE [StockItemID] = @StockItemID
ORDER BY [o].[OrderDate] DESC;
GO

EXEC [Sales].[usp_GetFullProductInfo] 220  WITH RECOMPILE;
GO

EXEC [Sales].[usp_GetFullProductInfo] 105  WITH RECOMPILE;
GO

Forcing a plan

We’ll start by looking at the two different plans in Query Store.  You can do this through the UI, or by using TSQL.  I’ll use both, just for fun, and we’ll start with TSQL.

SELECT
[q].[query_id],
[q].[object_id],
[o].[name],
[p].[compatibility_level],
[qt].[query_sql_text],
[p].[plan_id],
TRY_CONVERT(XML,[p].[query_plan]) AS [QueryPlan]
FROM [sys].[query_store_query] [q]
JOIN [sys].[query_store_query_text] [qt]
ON [q].[query_text_id] = [qt].[query_text_id]
JOIN [sys].[query_store_plan] [p]
ON [q].[query_id] = [p].[query_id]
JOIN [sys].[objects] [o]
ON [q].[object_id] = [o].[object_id]
WHERE [q].[object_id] = OBJECT_ID(N'Sales.usp_GetFullProductInfo');
GO

Query Store output – two different plans

You can see in the output that there are two different plans (plan_id 3 and plan_id 4) for this stored procedure query.  I can click on the XML link to see each plan and then compare them, or I can do this from within Query Store.  It’s easier within Query Store; I just need to know the query_id (3).  Within Management Studio, expand the WideWorldImporters database, expand Query Store, then double-click on Tracked Queries and enter the query_id in the Tracking Query box.

Two different plans for query_id 3

You’ll see that there are two plans, and to compare them you click on both plans in the plan id window (hold down the CTRL key to get them both) and then select Compare Plans.

Comparing both plans

In looking at the plans, you see that the shapes are similar, but Plan 3 has a Nested Loop, while Plan 4 has a Merge Join that’s fed by a Sort.  For this example, we’ll decide that the Nested Loop plan is “better” for this query, so that’s the one we will force.

However, before we make that change, let’s see if we get a different plan with compatibility mode 130.

USE [master];
GO

ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 130;
GO

USE [WideWorldImporters];
GO

EXEC [Sales].[usp_GetFullProductInfo] 105  WITH RECOMPILE;
GO

Check Query Store again…

SELECT
[q].[query_id],
[q].[object_id],
[o].[name],
[p].[compatibility_level],
[qt].[query_sql_text],
[p].[plan_id],
TRY_CONVERT(XML,[p].[query_plan]) AS [QueryPlan]
FROM [sys].[query_store_query] [q]
JOIN [sys].[query_store_query_text] [qt]
ON [q].[query_text_id] = [qt].[query_text_id]
JOIN [sys].[query_store_plan] [p]
ON [q].[query_id] = [p].[query_id]
JOIN [sys].[objects] [o]
ON [q].[object_id] = [o].[object_id]
WHERE [q].[object_id] = OBJECT_ID(N'Sales.usp_GetFullProductInfo');
GO

Query Store output – now three different plans

We DO have a different plan!  If we look at the plan, we see that the shape is still similar, but now we have a Hash Match with a Filter operator and a Clustered Index Scan.

Plan from compatibility mode 130

Now we want to force that Nested Loop plan.  First, change the compatibility mode back to 110:

USE [master];
GO

ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 110;
GO

Next, force the plan that has the Nested Loop, and we can do this in the UI, or with TSQL.  In the UI just go back to the Tracked Queries window, select the plan, and then Force Plan.  To force the plan with TSQL, you need to know the query_id and plan_id:

USE [WideWorldImporters];
GO

EXEC sp_query_store_force_plan @query_id = 3, @plan_id = 3;
GO

Now the plan is forced.  If we enable the actual execution plan and re-run our stored procedure without the RECOMPILE on it (because why would you use RECOMPILE on a query with a forced plan?) we see that the Nested Loop plan is used:

EXEC [Sales].[usp_GetFullProductInfo] 105;
GO

Stored procedure’s execution plan, after being forced

And here’s the big test…  Change compatibility mode to 130 again, free procedure cache just for fun (this does not matter – when a plan is forced, it doesn’t matter if the plan exists in cache or not), and then run the stored procedure and check the plan:

USE [master];
GO
ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 130;
GO

USE [WideWorldImporters];
GO

EXEC [Sales].[usp_GetFullProductInfo] 105  WITH RECOMPILE;
GO

Stored procedure’s execution plan, after compatibility mode changed to 130

Surprised?  The Nested Loop plan is still used.  This is expected!  It does not matter if the compatibility mode for the database is different than the compatibility mode for the plan.  The forced plan is what’s used.

Summary

In this example, even when the compatibility mode for the database changed, the forced plan was still used.  Thus, forced plans are not tied to compatibility mode.  This is a good thing.  If you’ve upgraded to SQL Server 2016 and you are working to fix query performance issues related to the new cardinality estimator, forcing plans can be incredibly helpful in stabilizing performance without changing code to include trace flags or hints.  However, do not assume that a forced plan will always be used.  If you look at the Best Practice with the Query Store guidelines, there’s a section titled “Check the Status of Forced Plans Regularly.”  Within that section is this note:

However, as with plan hints and plan guides, forcing a plan is not a guarantee that it will be used in future executions.

Therefore, while you force a plan because you want it to be used – to make query performance more stable – SQL Server does not guarantee it will always be used.  There are cases when it cannot, and should not, be used, hence the recommendation to check the status of forced plans in sys.query_store_plan.
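
One way to do that check is to query sys.query_store_plan for forced plans and look at the force failure columns:

USE [WideWorldImporters];
GO
SELECT
[query_id],
[plan_id],
[is_forced_plan],
[force_failure_count],
[last_force_failure_reason_desc]
FROM [sys].[query_store_plan]
WHERE [is_forced_plan] = 1;
GO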

SQLSaturday Cleveland 2017

Cleveland peeps – we are a week away from SQLSaturday Cleveland, are you registered?!  There’s still time if you’re not, AND there is still time to register for one of the fantastic pre-cons we’re hosting this year.  Your options:

Pre-con cost is $175, which is a deal compared to what it would cost if you attended the same pre-con at the PASS Summit (add in travel costs, and it’s a steal).  Then consider that the group will be much smaller than what it would be at Summit, so you’ll have plenty of opportunities to talk directly to Adam or Ginger to ask questions.  It’s a no-brainer…so much so that I’m attending Adam’s session.  I spend a fair bit of time tuning but there is always more to learn and different perspectives are great for improving troubleshooting skills.

So talk to your manager, re-work your schedule for next week, and sign up!  If you’ll be there, stop by and say hi, or say hi on Saturday where I’ll be at the registration desk (warning: I don’t do mornings well so forgive me if I’m in a daze!) and then I’ll be presenting my new Query Store session at 2:45 PM.  I hope to see you there, and happy Fri-yay!

p.s. Don’t forget to print your SpeedPass!  :)