As Kimberly blogged about recently, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.
One question I get asked every so often is why it can sometimes take longer to restore a database from a full backup than it took to perform the backup in the first place. The answer is that in cases like that, there’s more work to do during the restore process.
A full backup has the following main phases:
1. Perform a checkpoint.
2. Read all in-use data from the data files (technically, reading all allocated extents, regardless of whether all 8 pages in each extent are in use).
3. Read all transaction log from the start of the oldest uncommitted transaction as of the initial checkpoint up to the time that phase 2 finished. This is necessary so the database can be recovered to a consistent point in time during the restore process (see this post for more details).
4. (Optionally test all page checksums, optionally perform backup compression, and optionally perform backup encryption.)
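The optional work in the last phase maps directly to options on the BACKUP command. As a sketch (the database name and backup path here are hypothetical):

```sql
-- Full backup exercising the optional parts of the last phase above.
-- [DemoDB] and the file path are placeholders for illustration only.
BACKUP DATABASE [DemoDB]
TO DISK = N'D:\Backups\DemoDB_Full.bak'
WITH CHECKSUM,     -- test existing page checksums as pages are read
     COMPRESSION,  -- optional backup compression
     STATS = 10;   -- progress messages every 10%
```

Note that backup encryption has its own WITH ENCRYPTION clause and requires a certificate or asymmetric key to already exist, so it's omitted from this sketch.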
A full restore has the following main phases:
1. Create the data files (and zero-initialize them if instant file initialization is not enabled).
2. Copy data from the backup to the data files.
3. Create the log file and zero-initialize it. The log file must always be zero-initialized when created (see this post for more details).
4. Copy transaction log from the backup to the log file.
5. Run crash recovery on the database.
6. (Optionally test all page checksums during phase 2, perform decompression if the backup is compressed, and perform decryption if the backup is encrypted.)
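The corresponding restore, again as a hedged sketch with hypothetical names and paths:

```sql
-- Restore the full backup; [DemoDB] and the path are placeholders.
RESTORE DATABASE [DemoDB]
FROM DISK = N'D:\Backups\DemoDB_Full.bak'
WITH CHECKSUM,   -- re-test page checksums during the data-copy phase
     REPLACE,    -- overwrite an existing database of the same name
     STATS = 10; -- progress messages every 10%
```

Be aware that the STATS progress messages only track the data-copy phase, which is one reason a restore can appear to "hang" at 100% — it's still zero-initializing the log and running recovery.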
Phase 3 above can often be the longest phase in the restore process, as its duration is proportional to the size of the transaction log. It is done as a separate phase rather than in parallel with phases 1 and 2, and for a deep investigation of why, see Bob Ward’s recent blog post.
Phase 5 above might be the longest phase in the restore process if there were any long-running, uncommitted transactions when the backup was performed. This is exacerbated if the transaction log contains a very large number of virtual log files (thousands), as that hugely slows down the mechanism that rolls back uncommitted transactions.
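You can check the virtual log file count ahead of time. One way, assuming SQL Server 2016 SP2 or later (the database name is a placeholder; on older versions, DBCC LOGINFO gives the same information):

```sql
-- Count virtual log files (VLFs) in the log of a hypothetical database.
-- sys.dm_db_log_info requires SQL Server 2016 SP2 or later.
SELECT COUNT(*) AS VLFCount
FROM sys.dm_db_log_info(DB_ID(N'DemoDB'));
```

If the count is in the thousands, shrinking and regrowing the log in sensible increments will reduce it.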
Here’s a list of things you can do to make restoring a full backup go faster:
- Ensure that instant file initialization is enabled on the SQL Server instance performing the restore operation, to avoid spending time zero-initializing any data files that must be created. This can save hours of downtime for very large data files.
- Consider backup compression, which can speed up both backup and restore operations, and save disk space and storage costs.
- Consider using multiple backup files, each on a separate volume. SQL Server will recognize this situation and use parallel write threads (one per volume) to write to the files during the backup, and to read from them during the restore – speeding things up. If you have multiple database data files, a similar I/O parallelism will occur – providing even more of a speed boost.
- Try to avoid having long-running transactions that will take time to roll back.
- Manage your transaction log to avoid having an excessive number of virtual log files, so that if there are transactions to roll back, the rollback will go as fast as possible. See this blog post for more details.
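Two of the tips above can be sketched in T-SQL. The first query checks whether instant file initialization is enabled (this column requires SQL Server 2016 SP1 or later); the backup shows striping across multiple files, where the paths and database name are hypothetical and should each sit on a separate volume to get one writer thread per volume:

```sql
-- Is instant file initialization enabled for the service account?
-- (Column available from SQL Server 2016 SP1 onward.)
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;

-- Striped backup across files on separate volumes (placeholder paths).
BACKUP DATABASE [DemoDB]
TO DISK = N'E:\Backups\DemoDB_1.bak',
   DISK = N'F:\Backups\DemoDB_2.bak',
   DISK = N'G:\Backups\DemoDB_3.bak'
WITH COMPRESSION, CHECKSUM;
```

On restore, all the stripe files must be supplied together in the FROM clause.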
Hope this helps!
7 thoughts on “SQLskills SQL101: Why is restore slower than backup”
I did a mild double-take on this because (at least for the databases I’ve paid attention to on this) I typically see the opposite pattern in place…slightly faster restores. Interesting to note that may be the exception rather than the rule.
When compression figures into it I would think that the relative speed of I/O would factor as well…for instance a 100GB database with a 10GB compressed backup…to oversimplify:
Backup: Read 100GB from source database, write 10GB to file location
Restore: Read 10GB from file location, write 100GB to target database
Restore is (in that case) much more write-intensive than the backup due to compression. Which would make you think that the restore would, again, per your post, be slower. Plenty of other factors though, of course (we’re using LiteSpeed, for instance).
Yup – it’s a generalization, not an absolute.
Would creating an empty database equal in size to the database being restored speed up the restore process?