SQLskills SQL101: Log shipping performance problems

As Kimberly blogged about earlier this year, SQLskills has an ongoing initiative to blog about basic topics, which we’re calling SQL101. We’re all blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

One question I’m asked regularly is this: When our log shipping secondary is applying log backups, sometimes it takes a lot longer than usual. Any idea why this might be the case?

Log shipping has been around forever, and it’s still a hugely applicable and useful feature for very simply maintaining one or more secondary copies of a database. You can also use a secondary copy for reporting, where the restore of the log backup uses the WITH STANDBY option, leaving the secondary database in an accessible but read-only state (when the logs aren’t being applied).

This works as follows:

  1. Make sure all users are disconnected from the secondary database
  2. Write all the log records from the log backup into the secondary database’s log file
  3. Perform the REDO part of recovery (ensuring that all operations from committed transactions are present in the secondary database)
  4. Perform the UNDO part of recovery (ensuring that all operations from uncommitted transactions are not present in the secondary database)
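As a sketch (the database, backup share, and undo file names here are hypothetical), steps 2–4 are driven by a single RESTORE LOG statement on the secondary, with the STANDBY option naming the undo file:

```sql
-- Restore the next log backup on the secondary in STANDBY mode.
-- The undo file is where step 4's UNDO log records are written,
-- leaving the database accessible and read-only between restores.
RESTORE LOG [SalesDB]
    FROM DISK = N'\\BackupShare\SalesDB\SalesDB_log_0042.trn'
    WITH STANDBY = N'D:\LogShipping\SalesDB_undo.dat';
```

Restoring WITH NORECOVERY instead would skip step 4 entirely, but then the secondary database isn’t accessible between restores.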

Step 4 writes all the log records generated by the UNDO operations into a special file called the undo file. This means that the secondary database is in read-only mode and is transactionally consistent, so users can access it. The reason the log records are written into the undo file is so that the transaction log of the secondary database is not altered in any way, allowing subsequent log backups to be restored. If this weren’t the case, the UNDO log records would advance the secondary database’s LSN (Log Sequence Number), meaning that subsequent log backup restore operations would fail.

When the restore process begins on the secondary database, if an undo file exists, there is another step that is performed before steps 2-4 above. This additional step takes all the log records in the undo file and undoes their effects – essentially putting the secondary database back into the state it was in at the end of step 3 of the previous restore. This database state is the same as if the previous log backup had been restored using WITH NORECOVERY instead of WITH STANDBY.

The occasional long-running restore problem happens when a log backup is restored that contains a long-running transaction that does not commit before the end of the log backup. This means that it must be completely undone as part of restoring the log backup (step 4), resulting in a very large undo file. This in itself can make restoring a log backup take a lot longer than usual. When the next log backup is restored, the additional step that undoes all the log records in the undo file has a very large undo file to process and takes much, much longer than usual. And if the log backup being restored also has an uncommitted, long-running transaction, then it’s the perfect storm, as step 4 will also take a long time. These steps are all made even longer still if the log file has too many VLFs (called VLF fragmentation).
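If you suspect VLF fragmentation is compounding the problem, you can count the VLFs in the secondary database’s log. A quick sketch, assuming a database named SalesDB (DBCC LOGINFO returns one row per VLF; the sys.dm_db_log_info DMF is available from SQL Server 2016 SP2 onward):

```sql
-- One row per VLF; a row count in the thousands indicates
-- VLF fragmentation, which slows every phase of the restore.
DBCC LOGINFO (N'SalesDB');

-- On SQL Server 2016 SP2 and later, a supported alternative:
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'SalesDB'));
```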

The situation where I’ve seen this most often is when the primary database is undergoing index maintenance and a log backup finishes near the end of a very long-running index rebuild operation of a large clustered index. The initial restore of that log backup on the secondary database takes much longer than usual to complete because of step 4 in the restore process. The next log backup on the primary also completes just before an index rebuild completes. When it is restored on the secondary, the whole of the large undo file has to be undone again, then the log restore occurs, and then another large undo file is generated to undo the second uncommitted index rebuild.

This is a possibility you have to be aware of if the secondary database must be available 24×7 for reporting, with only minimal downtime when each log backup is restored. In that case I would carefully interleave log backups with the index maintenance operations on the primary, to ensure that only complete, committed index rebuilds are present in the log backups being restored on the secondary database. Similar precautions should be taken if you have other, occasional, long-running operations.
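One way to do this, sketched here with hypothetical object and share names, is to take a log backup immediately after each large rebuild commits, so that no log backup ends in the middle of a rebuild:

```sql
-- Rebuild the large clustered index...
ALTER INDEX [CIX_Orders] ON [dbo].[Orders] REBUILD;

-- ...then immediately back up the log, so the rebuild's log records
-- are contained in a single log backup as a committed transaction.
BACKUP LOG [SalesDB]
    TO DISK = N'\\BackupShare\SalesDB\SalesDB_log_post_rebuild.trn';
```

With this pattern, no restore on the secondary ever has to undo (and later redo the undo of) a partially-logged rebuild.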

An alternative would be to move from log shipping to database mirroring or availability groups, where the log records are continually being sent from the principal to the mirror database (or primary to secondary replica databases, in availability group terms) and there are no extra steps involving undoing log operations multiple times. With database mirroring, the drawback of this is that reporting would have to use database snapshots, so there’s a complexity trade-off involved. With availability groups, the reporting would have to use a readable secondary, which can lead to index fragmentation on the primary replica, but that can be compensated for with index fill factors (see here for more details).

So there you have it. Another example where understanding how SQL Server performs common operations can make it much easier to diagnose performance problems.

Whitepaper on high-availability architectures published

The 20-page whitepaper I wrote this Spring for the SQL team has just been published. This whitepaper is titled Proven SQL Server Architectures for High Availability and Disaster Recovery and ties in with the longer whitepaper I wrote last year on High Availability with SQL Server 2008 (see here).

This new whitepaper describes the five high-availability and disaster-recovery architectures most commonly deployed by customers, along with a case study of each. It covers:

  • Failover Clustering for High Availability with Database Mirroring for Disaster Recovery
  • Database Mirroring for High Availability and Disaster Recovery
  • Geo-Clustering for High Availability and Disaster Recovery
  • Failover Clustering for High Availability Combined with SAN-Based Replication for Disaster Recovery
  • Peer-to-Peer Replication for High Availability and Disaster Recovery

You can get it from this link.


PS Check out our 5-day class in Bellevue, WA in August – see here.

A SQL Server DBA myth a day: (7/30) multiple mirrors and log shipping load delays

(Look in the Misconceptions blog category for the rest of the month’s posts and check out the 60-page PDF with all the myths and misconceptions blog posts collected together when you join our Insider list, plus my online Myths and Misconceptions training course on Pluralsight.)

A short one today as I'm teaching a class on HA strategies and technologies for Microsoft DBAs on the Redmond campus, so let's make it an appropriate topic:

Myth #7: A database can have multiple mirrors.


This one's pretty cut and dried – database mirroring only allows a single mirror of a principal database. If you want to have extra copies of the principal database, consider using log shipping. You can have as many log shipping secondaries as you want.

One other cool thing about log shipping is that you can have one of the secondaries set to have a load delay of, say, 8 hours. This means the log backups taken on the principal (don't you love it that the various technologies have different nomenclature:

  • database mirroring: principal – mirror
  • log shipping: primary – secondary
  • replication: publisher – subscriber

Ok – this parenthetical clause kind of got a life of its own…) won't be restored on the log shipping secondary until 8 hours have passed. If someone drops a table in production, it will pretty much immediately get dropped in the mirror (with whatever delay the SEND and WAIT queues have at the time – but you can't *stop it*) but the log shipping secondary with the load delay will still have it intact.
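The load delay is just a parameter of the log shipping secondary configuration. A sketch using sp_add_log_shipping_secondary_database (server and database names are hypothetical; @restore_delay is specified in minutes, so 480 gives the 8-hour delay):

```sql
-- Run on the secondary server when configuring log shipping.
EXEC master.dbo.sp_add_log_shipping_secondary_database
    @secondary_database = N'SalesDB',
    @primary_server     = N'PRIMARYSRV',
    @primary_database   = N'SalesDB',
    @restore_delay      = 480,   -- minutes: delay restores by 8 hours
    @restore_mode       = 1,     -- 1 = STANDBY (read-only), 0 = NORECOVERY
    @disconnect_users   = 1;     -- kick users off before each restore
```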

Incidentally, the SQLCAT team wrote a really good article debunking the myth (which stems from Books Online) that you can only mirror 10 databases per instance – see Mirroring a Large Number of Databases in a Single SQL Server Instance. Also take a look at the KB article I wrote for CSS last year which discusses the same thing: KB 2001270 Things to consider when setting up database mirroring in SQL Server.