SQLskills SQL101: Switching recovery models

As Kimberly blogged about recently, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

One of the things that can catch people out is the effect of switching out of the full recovery model temporarily. In this post I’ll briefly describe the three recovery models and then the problems you can have switching from full to simple, and from full to bulk-logged.

Recovery models

There are three recovery models:

  • Full recovery model (the default and the most commonly used)
    • All modifications in the database are fully logged. This doesn’t mean that every modification has a separate log record, as some operations are logged with fewer log records while still logging the entire effect of the operation (for instance, TRUNCATE TABLE operations – see here for a deep explanation).
    • The transaction log will not clear (i.e. portions of it will not become available for reuse) until a transaction log backup is performed (see here for a deep explanation).
    • All recovery options are available when a database is in the full recovery model (and has been since the last backup).
  • Bulk-logged recovery model
    • Some modifications (like an index rebuild or a bulk load, but NOT regular insert/update/deletes) can be minimally logged, which reduces the amount of transaction log generated so the transaction log does not have to grow really large during the operation. Note that this doesn’t change the size of subsequent log backups. For full instructions on how to get minimal logging for your operation, see the Data Loading Performance Guide whitepaper, which lists all the various conditions that have to be met.
    • The transaction log will not clear until a transaction log backup is performed (exactly the same as the full recovery model).
    • Using bulk-logged, you trade off some recovery options (point-in-time restore and tail-of-the-log backups) for the performance gains associated with minimally logged operations.
  • Simple recovery model
    • Some modifications can be minimally logged (exactly the same as the bulk-logged recovery model).
    • The log will not clear until a checkpoint occurs (usually automatically).
    • Transaction log backups are not possible, so this is the most limited in terms of recovery options.

Most people use the full recovery model, to allow log backups and permit all possible restore operations. The main thing to remember is that if your database uses the full or bulk-logged recovery model, you must perform periodic transaction log backups or the transaction log will grow forever.
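
To make that concrete, here’s a minimal T-SQL sketch of checking which recovery model a database is using and taking one of those periodic log backups – the database name and backup path are just placeholders:

```sql
-- Check the current recovery model (FULL, BULK_LOGGED, or SIMPLE)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'SalesDB';   -- placeholder database name

-- In full or bulk-logged, the log only clears when it's backed up,
-- so something like this needs to run on a regular schedule
BACKUP LOG [SalesDB]
TO DISK = N'D:\Backups\SalesDB_log.trn';   -- placeholder path
```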

Some circumstances call for the simple recovery model: when you don’t need the ability to do point-in-time restores or zero-to-minimal data loss restores using log backups. An example would be a scratch database that’s repopulated once per day, where any changes can be lost or easily regenerated.

Switching to Simple

Often I hear of people who switch to the simple recovery model to try to avoid transaction log growth during a bulk load or index rebuild, when what they really mean to do is to use the bulk-logged recovery model. There are also persistent myths out there that some regular operations *require* being in the simple recovery model – this is simply (ha ha) not true.

Switching to the simple recovery model breaks the log backup chain, requiring a full or differential backup before any further log backups can be performed.
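
For example, here’s a sketch of the sequence you’re forced into after a switch to simple and back (placeholder names and paths):

```sql
-- Switch to simple: this breaks the log backup chain
ALTER DATABASE [SalesDB] SET RECOVERY SIMPLE;

-- ... whatever the switch was for happens here ...

-- Switch back to full
ALTER DATABASE [SalesDB] SET RECOVERY FULL;

-- Log backups will fail until a full or differential backup
-- re-establishes the chain
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_diff.bak'
WITH DIFFERENTIAL;   -- or a full backup

-- Now log backups can resume
BACKUP LOG [SalesDB] TO DISK = N'D:\Backups\SalesDB_log.trn';
```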

Furthermore, it limits your ability to recover during a disaster because you’ve now only got one full backup from which you can restore: the one you performed most recently. Think about it: your restore options become:

  • The full backup taken after the switch to simple, plus the most recent differential backup taken after that full (if you’re using differential backups), plus any log backups taken after the switch back to full; or
  • The most recent full backup taken before the switch to simple, plus the most recent differential backup taken after the switch back to full, plus any log backups taken after that

If that most-recent full backup (before or after the switch to simple) is damaged, you cannot restore – period. You can’t fall back on using the next older full backup, as that only allows the restore sequence up to, but not past, the switch to simple. Well, I guess you could do that, but then you lose all work since the switch to simple.

Switching to the simple recovery model is not something you automate or do repeatedly. About the only time when you would temporarily switch to simple is if your transaction log had run out of space and there is no way to allow it to clear (i.e. you cannot perform a log backup or add another log file) except by switching to simple and forcing a checkpoint operation. In that case you’re taking a drastic step to allow operations to continue, and being fully cognizant of the limited restore options available to you right then.
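
If you really are in that emergency situation, the whole sequence looks something like this sketch (placeholder names and paths), with the backup chain re-established immediately afterwards:

```sql
-- Emergency only: the log is full and cannot be backed up or expanded
ALTER DATABASE [SalesDB] SET RECOVERY SIMPLE;

-- Force the log to clear (CHECKPOINT acts on the current database)
USE [SalesDB];
CHECKPOINT;

-- Switch back and immediately re-establish the backup chain
ALTER DATABASE [SalesDB] SET RECOVERY FULL;
BACKUP DATABASE [SalesDB] TO DISK = N'D:\Backups\SalesDB_full.bak';
BACKUP LOG [SalesDB] TO DISK = N'D:\Backups\SalesDB_log.trn';
```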

Unless you have this emergency situation, or you decide to use the simple recovery model permanently, you should not switch to simple ever.

Switching to Bulk-logged

Switching to bulk-logged during a load or index maintenance process is acceptable to avoid transaction log growth. In fact, switching back-and-forth between full and bulk-logged repeatedly doesn’t affect the log backup chain in any way. And doing so also doesn’t have any effect on log shipping or replication, but you can’t switch out of full when using database mirroring or an availability group as they mandate the full recovery model.
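
Here’s a sketch of that pattern (placeholder database, table, and paths), with a log backup on either side of the switch to keep the window of reduced restore options as small as possible:

```sql
-- Take a log backup, then switch to bulk-logged for the large operation
BACKUP LOG [SalesDB] TO DISK = N'D:\Backups\SalesDB_log_before.trn';
ALTER DATABASE [SalesDB] SET RECOVERY BULK_LOGGED;

-- A minimally logged operation, e.g. an offline index rebuild
ALTER INDEX ALL ON dbo.BigTable REBUILD;   -- placeholder table

-- Switch back to full and back up the log straight away
ALTER DATABASE [SalesDB] SET RECOVERY FULL;
BACKUP LOG [SalesDB] TO DISK = N'D:\Backups\SalesDB_log_after.trn';
```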

However, using bulk-logged can cause problems for disaster recovery, so even though its behavior may be desirable, you may need to avoid using it so you don’t risk compromising your disaster recovery options.

Problem 1: a log backup that contains a minimally-logged operation cannot be used during a point-in-time restore. This means the time you specify in the WITH STOPAT clause of the restore statement cannot be a time covered by such a log backup. You can use that log backup as part of a restore sequence, and stop at any point in time after it (as long as that point in time is not covered by another log backup containing a minimally-logged operation, of course), but just not during it.
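
For example, in this hypothetical restore sequence (placeholder backup names and times), suppose SalesDB_Log2.trn contains a minimally-logged operation and SalesDB_Log3.trn does not:

```sql
RESTORE DATABASE [SalesDB] FROM DISK = N'D:\Backups\SalesDB_full.bak' WITH NORECOVERY;
RESTORE LOG [SalesDB] FROM DISK = N'D:\Backups\SalesDB_Log1.trn' WITH NORECOVERY;
RESTORE LOG [SalesDB] FROM DISK = N'D:\Backups\SalesDB_Log2.trn' WITH NORECOVERY;

-- STOPAT works here because the time falls inside Log3's coverage, which is
-- fully logged; a time inside Log2's coverage would cause the restore to fail
RESTORE LOG [SalesDB] FROM DISK = N'D:\Backups\SalesDB_Log3.trn'
    WITH RECOVERY, STOPAT = '2017-03-20 15:00:00';
```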

Problem 2: if you need to perform a tail-of-the-log backup to capture all the log generated since the most recent scheduled log backup, the data files are inaccessible or damaged, and the log to be backed up contains a minimally-logged operation, that backup will fail prior to SQL Server 2008 R2; from SQL Server 2008 R2 onward it will succeed, but the database will be corrupt when it is restored.
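
A tail-of-the-log backup in that situation looks something like this sketch (placeholder names), and it’s exactly this backup that’s affected when the log contains a minimally-logged operation:

```sql
-- The data files are damaged or offline, so NO_TRUNCATE lets the log
-- be backed up without access to them
BACKUP LOG [SalesDB]
TO DISK = N'D:\Backups\SalesDB_tail.trn'
WITH NO_TRUNCATE;
```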

So if you’re going to use bulk-logged to save on log space during large operations, you need to make sure that a) there’s no possibility you’re going to want to restore between the last log backup and the next one, and b) there are no changes made to the database that you cannot recreate in case a disaster occurs and you can’t take a valid tail-of-the-log backup.

Switching recovery models between full and bulk-logged may not be as safe as you might think.

Summary

For every database that you’re responsible for, make sure that you understand the ramifications of changing the recovery model, as doing so could cause you problems with disaster recovery.

Do you get our bi-weekly newsletter?

You may not know this, but for the last six years I’ve been producing a bi-weekly newsletter for everyone signed up for our mailing list (which we only use for the newsletter). It’s a bit of a labor of love, and a way for me to post opinions and other information that doesn’t really fit on my blog.

I’ve just expanded the sections in the newsletter so I want to make sure you all know about it. It’s 100% free of any 3rd-party adverts and marketing and your email address will only ever be used for sending the newsletters – nothing else. And it’s 100% free to join, no special requirements.

Every newsletter has the following sections:

  • SQLskills News: A quick round-up of new Pluralsight courses, classes, conferences, user groups, and SQLSaturdays where we’ll be presenting.
  • Book Review: I read a huge amount and like to post a 1-2 paragraph review of something interesting I’ve read since the last newsletter. I’ve been told by many people that this is one of the sections they look forward to most!
  • The Curious Case of…: Describing and explaining a problem we’ve recently found on a client system (or come across in an email question or while teaching a class) that is something others are likely to come across in their environments.
  • Paul’s Ponderings: Where I write about a topic of interest to the SQL Server community.
  • Video Demo: A never-before-seen demo video of something to do with SQL Server from one of the team. Occasionally I’ll pick a demo video from a Pluralsight course too.
  • #TBT (Turn Back Time): Highlighting some older resources around a particular topic, plus a selection of recent SQLskills blog posts.

Here’s an example newsletter from March 2017 so you can see what it’s like.

If you’re interested, just enter your email address below and hit Subscribe and you’ll be added to our newsletter list. No in-depth registration, no fuss, no time! Once you sign up you’ll be able to read all the previous newsletters too.

Enjoy!

SQLskills SQL101: Temporary table misuse

As Kimberly blogged about recently, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

After I left Microsoft in 2007, one of the first clients I worked with (who we’re still working with today) threw an interesting problem at me: “We can’t run stored proc X any more because it causes tempdb to fill the drive and then it fails.” Game on. I built some monitoring infrastructure into the proc using the DMV sys.dm_db_task_space_usage to figure out how much tempdb space was being used at various points and find the problem area.

It turned out to be problem *areas*, and in fact the proc was loaded with temporary table (I’ll just use the common contraction ‘temp table’ from now on) misuse, illustrating all three of the common temp table problems I’m going to describe below. Once I fixed that proc (reducing the tempdb usage from more than 60GB down to under 1GB, and the run time from many minutes to a few tens of seconds), I implemented some automated monitoring built around the sys.dm_db_task_space_usage DMV to identify procedures and ad hoc queries that were misusing temp tables. We’ve since used this monitoring at many other clients to identify temp table misuse.
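
That monitoring is more involved than I can show here, but a minimal sketch of the kind of query it can start from looks like this – it simply aggregates the DMV by session to show who is allocating tempdb space right now (page counts are 8KB pages):

```sql
SELECT   session_id,
         SUM(user_objects_alloc_page_count)     AS user_pages_alloc,
         SUM(internal_objects_alloc_page_count) AS internal_pages_alloc
FROM     sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY SUM(user_objects_alloc_page_count)
       + SUM(internal_objects_alloc_page_count) DESC;
```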

In this post I’d like to describe the three main ways that temp tables are misused:

  • Over-population of temp tables
  • Incorrect indexing on temp tables
  • Using a temp table where none is required

Don’t get me wrong though – temp tables are great – when they’re used efficiently.

Over-Population of a Temp Table

This problem involves creating a temp table using something like a SELECT … INTO #temptable construct and pulling far more data into the temp table than is necessary.

The most common thing we see is pulling lots of user table columns into the temp table, where some of the columns are not used ever again in subsequent code. This is a HUGE waste of I/O and CPU resources (extracting the columns from the user table in the first place – and imagine the extra CPU involved if the source data is compressed!) and a big waste of tempdb space (storing the columns in the temp table). I’ve seen code pulling large varchar columns into a temp table that aren’t used, and with multi-million row datasets…

The other facet of over-population of temp tables is pulling in too many rows. For instance, if your code is interested in what happened over the last 12 months, you don’t need to pull in all the data from the last ten years. Not only will it be bloating the temp table, it will also drastically slow down the query operations. This was one of the biggest problems in the client scenario I described above.

The key to better performance is making sure your selection/projection is as focused as possible. To limit your selection, use an effective WHERE clause. To limit your projection, list only the necessary columns in your select list.
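
Here’s a simple before-and-after sketch (the table and column names are hypothetical):

```sql
-- Over-populated: every column, all history
SELECT *
INTO   #AllOrders
FROM   dbo.Orders;   -- hypothetical table

-- Focused: only the columns and rows the subsequent code actually needs
SELECT OrderID, CustomerID, OrderDate, TotalDue
INTO   #RecentOrders
FROM   dbo.Orders
WHERE  OrderDate >= DATEADD(MONTH, -12, GETDATE());
```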

Incorrect Indexing on a Temp Table

This problem involves either creating indexes before populating the table (so that no statistics are generated) or creating a bunch of inappropriate indexes that are not used.

The most common example we see is creating a single-column nonclustered index for each of the temp table columns. Those are usually just taking up space for no use whatsoever. Temp tables *DO* need indexes (preferably after load) but as with any form of query tuning – only the RIGHT indexes. Consider creating permanent tables that mimic what’s going on in your temporary objects and then using the Database Tuning Advisor (DTA) to see if it has recommendations. While DTA’s not perfect, it’s often WAY better than guessing. Kimberly has a great post in our Accidental DBA series that discusses indexing strategies – start there.

Also, don’t create any nonclustered indexes until the temp table has been populated, otherwise they won’t have any statistics, which will slow down query performance, possibly drastically.

Oh yes, and, don’t create a clustered index for the temp table before populating it unless you know that the data being entered is already sorted to exactly match the cluster key you’ve chosen. If not, inserts into the temp table are going to cause index fragmentation which will really slow down the time it takes to populate the temp table. If you know the data is sorted and you create the clustered index first, there’s still no guarantee that the Storage Engine will feed the data into the temp table in the right order, so be careful. And if you go that route, remember that you’ll need to update the statistics of the clustered index after the temp table creation.
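
A sketch of that populate-then-index pattern, continuing with the hypothetical table from above:

```sql
-- Populate first, so the index is built over the real data and has statistics
SELECT OrderID, CustomerID, OrderDate, TotalDue
INTO   #RecentOrders
FROM   dbo.Orders
WHERE  OrderDate >= DATEADD(MONTH, -12, GETDATE());

-- One targeted index that the subsequent queries will actually use,
-- not one index per column
CREATE NONCLUSTERED INDEX IX_RecentOrders_CustomerID
ON #RecentOrders (CustomerID)
INCLUDE (TotalDue);
```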

You need to be careful here because in some versions of SQL Server, changing the schema of a temp table in a stored proc can cause recompilation issues. Do some testing and pick the sequence of events that makes the most sense for performance in your situation.

Using a Temp Table Where None is Required

The SQL Server Query Optimizer is a fabulous beast and is very good at figuring out the most efficient way to execute most queries. If you choose to take some of the query operation and pre-calculate it into a temp table, sometimes you’re causing more harm than good. Any time you populate a temp table you’re forcing SQL Server to materialize the complete set of results of whatever query you ran to populate the temp table. This can really limit SQL Server’s ability to produce a pipeline of data flowing efficiently through a query plan and making use of parallelism and collapsing data flows when possible.

While it’s true that you might be able to do better than the optimizer sometimes, don’t expect that it’s the case all the time. Don’t just go straight to using temp tables, give the optimizer a chance – and, make sure to retest your code/expectations around Service Packs and hot fixes as these may have eliminated the need for temp tables as well.

A good way to test whether a temp table is actually a hindrance to performance is to take the temp table population code, embed it as a derived table in the main query, and see if query performance improves.
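
Here’s a sketch of that test with hypothetical tables: the same aggregation written once via a temp table and once as a derived table the optimizer can fold into the main plan:

```sql
-- Temp table version
SELECT CustomerID, SUM(TotalDue) AS CustomerTotal
INTO   #CustomerTotals
FROM   dbo.Orders
GROUP BY CustomerID;

SELECT c.CustomerName, t.CustomerTotal
FROM   dbo.Customers AS c
JOIN   #CustomerTotals AS t ON t.CustomerID = c.CustomerID;

-- Derived table version: the optimizer sees the whole query at once
SELECT c.CustomerName, t.CustomerTotal
FROM   dbo.Customers AS c
JOIN   (SELECT CustomerID, SUM(TotalDue) AS CustomerTotal
        FROM   dbo.Orders
        GROUP BY CustomerID) AS t
       ON t.CustomerID = c.CustomerID;
```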

It’s quite often the case that temp tables became an architectural standard in an environment because they proved useful long ago, and now everyone uses them without ever checking whether they’re *really* beneficial in all cases.

One other thing you can consider is replacing temp tables with In-Memory OLTP memory-optimized tables, in all Editions of SQL Server 2016 SP1 and later, and in Enterprise Edition of SQL Server 2014. That’s beyond the scope of this post, but you can read about it in this Books Online page on MSDN.

Summary

Always try to follow these guidelines when using a temp table:

  • Determine if a temp table is the most efficient way to achieve the goal of the code you’re writing
  • Limit the number of columns being pulled into the temp table
  • Limit the number of rows being pulled into the temp table
  • Create appropriate indexes for the temp table

Take a look at your current temp table usage. You may be surprised to find a lot of tempdb space and CPU resources being consumed by inappropriate temp table usage, population, and indexing.

Hope you found this helpful!