8 Steps to better Transaction Log throughput

[Edit 2019 by Paul: Although this is an old post, it’s still relevant. I’ve updated the post to reflect current numbers and guidance.

Also check out these more recent articles of mine:

And my 7.5 hour Pluralsight course: SQL Server: Logging, Recovery, and the Transaction Log.]


NOTE/UPDATE: After reading this one, be sure to read this post as well: Transaction Log VLFs – too many or too few? Not only can you have too many small VLFs but, if incorrectly sized, you can have too few! Enjoy!!

NOTE/UPDATE 2: Be sure to read this post as well after reading this one: Bug: log file growth broken for multiples of 4GB. In the end, it’s best to allocate in chunks of 8000MB instead of 8GB. [Edit 2019 by Paul: this only applies to pre-SQL Server 2012.]

On a few of my last trips, I’ve had the pleasure of doing a lot of customer visits – visiting some of the larger implementations of SQL Server. Many of these implementations are at banks where both performance and recovery are critical. After my trip to Turkey (where I did customer visits prior to presenting at the Microsoft Professional Developers Summit), I received this email:

As you might remember we talked at xxxbank, Turkey about performance problems in transaction log backups and splitting tempdb data file into equal size chunks equal to number of CPUs. I have implemented both optimizations in my troubled server, and the results are great! Transaction log backups do not impact the server at all and with lowered congestion in tempdb, overall system performance has gone up as well.

So – this is my first of two blog entries targeting these two VERY typical problems:
* Transaction log fragmentation (both internal and external – this customer was having problems with internal fragmentation)
* Optimizing TempDB

We’ll start with 8 Steps to Optimizing your Transaction Log – not necessarily in any order. It’s best to review all of these to make sure you have a comprehensive view of how to improve both performance and recovery of your transaction logs!

1) [Edit 2019 by Paul: this may not be relevant on fast flash storage.] Try to dedicate the transaction log portion of your database to its own physical disk. In a high-volume OLTP system, isolating the transaction log can allow the disk head to be ready for the next write by not having other files contend for the physical disk resource. If your database already exists, the best way to “move” your transaction log is to detach your database (sp_detach_db) and then reattach it (sp_attach_db). Here’s a great KB article titled: Moving SQL Server databases to a new location with Detach/Attach.
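
As a minimal sketch of that detach/attach approach – assuming a hypothetical database named MyDB whose log is being moved to an L: drive; all names and paths here are illustrative only:

EXEC sp_detach_db @dbname = N'MyDB';
-- ...move MyDB_log.ldf to the new, dedicated drive at the OS level, then:
EXEC sp_attach_db
    @dbname    = N'MyDB',
    @filename1 = N'D:\Data\MyDB.mdf',      -- primary data file
    @filename2 = N'L:\Log\MyDB_log.ldf';   -- transaction log, now isolated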

2) [Edit 2019 by Paul: this is no longer relevant.] Defrag the disk(s) on which your transaction logs reside. This will get rid of external fragmentation of the transaction log – better known as disk file fragmentation. This will require that your server be taken offline but if your files have had a lot of autogrowth and/or they reside on a disk with a lot of other files that have been modified, then all of your files (incl. the transaction log file) are likely to be interleaved and fragmented. The good news is that this is a one-time operation (unless the files continue to grow and shrink excessively – but we’ll fix that as well). The bad news is that you’ll need to shut down SQL Server and then use some form of system tool – Microsoft or third party – to defrag. On Windows XP and Windows Server 2003 machines, the disk defragmenter tool can be launched by running: dfrg.msc.

3) [Edit 2019 by Paul: this is still current.] Create only ONE transaction log file. Even though you can create multiple transaction log files, you only need one… SQL Server does NOT “stripe” across multiple transaction log files. Instead, SQL Server uses the transaction log files sequentially. While this might sound bad – it’s not. If you want to get better performance out of the transaction log, place it on faster disks and/or a more performant disk (RAID) configuration. You could place the transaction log on a RAID 0 array but you’ll likely want fault tolerance as well as performance. Isolation, per step 1, is a good start, but if you also need increased capacity then using hardware-based RAID to combine disks for both better performance and redundancy is your next choice. See step 4.
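
If you’re not sure how many log files a database already has, here’s a quick check against the current database (a sketch; sys.database_files is available from SQL Server 2005 on):

SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = N'LOG';   -- more than one row here means multiple log files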

4) [Edit 2019 by Paul: redundancy is still relevant, but performance implications may not be on fast flash storage.] Not only should you try to isolate the transaction log to its own physical disk but you should make sure that the logical/physical disk configuration is as efficient as possible. Try to use an isolated RAID 1 mirroring set if you don’t need significant capacity. If you need a greater capacity OR you want better performance, consider a combination of RAID 0 and RAID 1 (either RAID 0 + 1 or RAID 1 + 0). While RAID 0 + 1 can often offer better performance, RAID 1 + 0 offers better reliability. If you’re new to RAID and are interested in learning more – check out the RAID Tutorial here: http://www.raidarray.com/04_01_00.html.

5) [Edit 2019 by Paul: this is still relevant.] Don’t get caught up in nothing but transaction log speed; you’ll also want to make sure that your transaction log is always available, as this can help you in times of disaster. Even if the data is damaged, if the transaction log is available and you have a series of backups up to and including the last transaction log then you can make a final backup of the transaction log that will represent all of the changes since your last transaction log backup. If this backup is possible (it’s called backing up the “tail” of the log), then you can achieve up-to-the-minute recovery. This is only possible when the log is available. By placing the log on mirrored disks you can increase your chances of recovering data and minimize data loss!
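
As a sketch of what that final backup looks like (MyDB and the backup path are hypothetical), the NO_TRUNCATE option is what allows the log backup to succeed even when the data files are damaged:

BACKUP LOG MyDB
TO DISK = N'L:\Backups\MyDB_log_tail.trn'
WITH NO_TRUNCATE, INIT;   -- back up the "tail" of the log after a data-file failure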

6) [Edit 2019 by Paul: this is still relevant.] Create transaction log files with a reasonable initial size. When you create a database it’s ideal to PRE-ALLOCATE both your data files and your transaction log file. A little bit of capacity planning goes a long way… Now, if you think you’ve got absolutely no idea what size of transaction log you’re going to need, well – here are the things that have the greatest impact:

  • Type of activity – transaction processing or decision support
  • Frequency of that activity – the more frequent the changes, the faster the transaction log will grow
  • Recovery model – in the simple recovery model the log clears on checkpoint, while in full and bulk-logged the log records are retained until a log backup
  • Frequency of transaction log backups
  • Whether or not replication is used (since the log reader relies on the transaction log)

I wish I could give you a rough idea on sizing but if you look only at database size and none of these other factors, you could end up with a transaction log that’s seriously oversized or seriously undersized. I’ve seen recommendations of 10-25% of the size of the data but there’s really no formula. A larger database with very frequent transaction log backups may not need a transaction log that’s even 1% of the size of the data… The best way is to set up your development environment similar to that of your production environment (including backup jobs) and then see how the transaction log grows. If you have a lot of autogrowth (because your guess was wrong), you can later clean up the fragmentation that has occurred and get back to a reasonable, intact, and optimal transaction log. If you want to know more about Recovery Models and their impact on the transaction log, review a sample book chapter from an MSPress book (Microsoft® SQL Server™ 2000 High Availability) that I helped to co-author. I only helped out on a couple of chapters but this specific chapter explains a lot about the general database environment settings that impact recovery. You can download Chapter 9: Database Environment Basics for Recovery here: http://www.sqlskills.com/resources/SQLServerHAChapter9.pdf
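
To watch how the log actually grows in that development environment, here are a couple of simple options (a sketch; the DMV form requires SQL Server 2012 or later):

DBCC SQLPERF (LOGSPACE);   -- log size and percent used, for every database

SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;   -- current database only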

7) [Edit 2019 by Paul: I’ve edited this to be current.] Don’t let autogrowth get out of control. As important as capacity planning is, you’re not likely to be spot-on in your estimates. I don’t recommend completely turning off autogrowth but in general I also don’t like the default growth rate (or max size). In general, I would recommend setting ALL of the transaction log file’s attributes: initial size, growth rate AND maximum size. For the growth rate, I recommend something that can be allocated somewhat quickly and something of a fixed size. In general, I recommend a value less than or equal to 1GB (based on total size), but not so small that you’re going to autogrow again soon. So, for databases whose transaction logs are under 1GB, you might set autogrow to somewhere between 20 and 100MB. For those of you whose transaction logs are measured in GB, I’d set the autogrowth to 500MB or 1GB. In general, I don’t like the percentage option because the amount needs to be calculated, and the larger the file, the larger the autogrowth, and the longer it takes. Autogrowth can create blocking… so it’s best to minimize it in general.
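
Putting that together, here’s a sketch of setting a fixed growth increment and a maximum size – MyDB, MyDB_log, and the values shown are illustrative; pick sizes appropriate to your own log:

ALTER DATABASE MyDB
MODIFY FILE
( NAME = N'MyDB_log'
, FILEGROWTH = 512MB     -- fixed increment, not a percentage
, MAXSIZE = 8000MB );    -- cap autogrowth so it can't run away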

8) [Edit 2019 by Paul: I’ve edited this to be current.] Check/fix your internal fragmentation. OK, so this is the one that really helped the customer in Turkey. Often, when transaction logs are not pre-allocated and/or when there have been a lot of autogrowths, the transaction log can become internally fragmented. Internally your transaction log is broken down into smaller, more granular chunks called VLFs (Virtual Log Files). The size and number of VLFs you’ll have depends largely on the size of each chunk as it’s added to your transaction log. If you add a chunk of less than 64MB (through autogrowth or through manual growth) – say, 20MB – then 4 VLFs are added. If you add a chunk of 64MB up to and including 1GB, you’ll add 8 VLFs. If you add more than 1GB then you’ll add 16 VLFs. However, in SQL Server 2014, a change was made to the algorithm so that if your growth size is less than 1/8 of the total size, you only get 1 VLF. See Paul’s post here for more details.

In general, most transaction logs will have fewer than a few hundred VLFs – even a thousand could be reasonable depending on the total size of the transaction log (e.g., a TB-sized log). However, in many cases what happens is that excessive autogrowths cause an excessive number of VLFs to be added – sometimes resulting in many thousands of VLFs. Having an excessive number of VLFs can negatively impact all transaction log related activities, especially restores and crash recovery. To see how many VLFs you have, simply look at the number of rows returned by DBCC LOGINFO. The number of rows returned equals the number of VLFs your transaction log file has. If you have more than a few hundred, I would recommend fixing it and adjusting your autogrowth so that it doesn’t occur as frequently. Check out Paul’s post from 2010 here for some guidance on size vs. number of VLFs.
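
For example (DBCC LOGINFO is undocumented but long-standing; on SQL Server 2016 SP2 and later, sys.dm_db_log_info is the documented alternative):

DBCC LOGINFO;   -- one row per VLF; the row count is your VLF count

-- On SQL Server 2016 SP2 and later:
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info (DB_ID());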

To get rid of all of the excessive VLFs, follow these easy steps to shrink off the fragmented chunk and add a new, clean chunk to your transaction log:

1. Wait for an inactive time of day (ideally, it would be best to put the database into single user mode first) and then clear all transaction log activity through a regular transaction log backup. If you’re using the simple recovery model then you don’t need to do a log backup… Instead, just clear the transaction log by running a checkpoint.

BACKUP LOG databasename TO devicename

2. Shrink the log to as small a size as possible (truncateonly)

DBCC SHRINKFILE(transactionloglogicalfilename, TRUNCATEONLY)

NOTE: if you don’t know the logical filename of your transaction log use sp_helpfile to list all of your database files.

3. You may have to repeat steps 1 and 2 several times.

4. Alter the database to modify the transaction log file to the appropriate size – in one step

ALTER DATABASE databasename
MODIFY FILE
( NAME = transactionloglogicalfilename
, SIZE = newtotalsize )

NOTE: Depending on the total size desired, you might want to break this into multiple chunks, as shown in the sketch below. After reading this one, be sure to read this post as well: Transaction Log VLFs – too many or too few? Not only can you have too many small VLFs but, if incorrectly sized, you can have too few! And… there’s a bug referenced/mentioned there. You’ll want to read that post as well!
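
Putting the four steps together, here’s one concrete end-to-end sketch for a hypothetical database MyDB (full recovery model, logical log file name MyDB_log) whose log should end up at 8000MB, grown in two chunks to keep the VLF sizes reasonable – all names, paths, and sizes are illustrative:

BACKUP LOG MyDB TO DISK = N'L:\Backups\MyDB_log.trn';   -- step 1: clear log activity
DBCC SHRINKFILE (MyDB_log, TRUNCATEONLY);               -- step 2: shrink off fragmented VLFs
-- step 3: repeat the two statements above if the file doesn't shrink fully, then:
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 4000MB);  -- step 4, chunk 1
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 8000MB);  -- step 4, chunk 2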

And, that should do it!
