8 Steps to better Transaction Log throughput


[Edit 2019 by Paul: Although this is an old post, it’s still relevant. I’ve updated the post to reflect current numbers and guidance.

Also check out these more recent articles of mine:

And my 7.5 hour Pluralsight course: SQL Server: Logging, Recovery, and the Transaction Log.

/Edit]

NOTE/UPDATE: Be sure to read this post as well: Transaction Log VLFs too many or too few? after reading this one. Not only can you have too many small VLFs but if incorrectly sized, you can have too few! Enjoy!!

NOTE/UPDATE 2: Be sure to read this post as well: Bug: log file growth broken for multiples of 4GB after reading this one. In the end, the best approach is to allocate in chunks of 8000MB instead of 8GB. [Edit 2019 by Paul: this only applies to pre-SQL Server 2012.]

On a few of my last trips, I’ve had the pleasure of doing a lot of customer visits – visiting some of the larger implementations of SQL Server. Many of these implementations are at banks where both performance and recovery are critical. After my trip to Turkey (where I did customer visits prior to presenting at the Microsoft Professional Developers Summit), I received this email:

As you might remember we talked at xxxbank, Turkey about performance problems in transaction log backups and splitting tempdb data file into equal size chunks equal to number of CPUs. I have implemented both optimizations in my troubled server, and the results are great! Transaction log backups do not impact the server at all and with lowered congestion in tempdb, overall system performance has gone up as well.

So – this is my first of two blog entries targeting these two VERY typical problems:
* Transaction log fragmentation (both internal and external – this customer was having problems with internal fragmentation)
* Optimizing TempDB

We’ll start with 8 Steps to Optimizing your Transaction Log – not necessarily in any order. It’s best to review all of these to make sure you have a comprehensive view of how to improve both performance and recovery of your transaction logs!

1) [Edit 2019 by Paul: this may not be relevant on fast flash storage.] Try to dedicate the transaction log portion of your database to its own physical disk. In a high-volume OLTP system, isolating the transaction log can allow the disk head to be ready for the next write by not having other files contend for the physical disk resource. If your database already exists, the best way to “move” your transaction log is to detach your database (sp_detach_db) and then reattach it (sp_attach_db). Here’s a great KB article titled: Moving SQL Server databases to a new location with Detach/Attach.
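
For example, here's a minimal sketch of that detach/attach sequence (the database name and file paths below are hypothetical placeholders; substitute your own):

USE master;
GO
EXEC sp_detach_db @dbname = N'SalesDB';
GO
-- Move SalesDB_log.ldf to the dedicated log drive at the OS level, then:
EXEC sp_attach_db
    @dbname    = N'SalesDB',
    @filename1 = N'D:\Data\SalesDB.mdf',
    @filename2 = N'L:\Logs\SalesDB_log.ldf';
GO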

2) [Edit 2019 by Paul: this is no longer relevant.] Defrag the disk(s) on which your transaction logs reside. This will get rid of external fragmentation of the transaction log – better known as disk file fragmentation. This will require that your server be taken offline, but if your files have had a lot of autogrowth and/or they reside on a disk with a lot of other files that have been modified, then all of your files (incl. the transaction log file) are likely to be interleaved and fragmented. The good news is that this is a one-time operation (unless the files continue to grow and shrink excessively – but we’ll fix that as well). The bad news is that you’ll need to shut down SQL Server and then use some form of system tool – Microsoft or third party – to defrag. On Windows XP and Windows Server 2003 machines, the disk defragmenter tool can be launched by running: dfrg.msc.

3) [Edit 2019 by Paul: this is still current.] Create only ONE transaction log file. Even though you can create multiple transaction log files, you only need one… SQL Server does NOT “stripe” across multiple transaction log files. Instead, SQL Server uses the transaction log files sequentially. While this might sound bad – it’s not. If you want to get better performance out of the transaction log, place it on faster disks and/or a more performant disk (RAID) configuration. You could place the transaction log on a RAID 0 array, but you’ll likely want fault tolerance as well as performance. Isolation, as per number 1, is a good start, but if you also need increased capacity then using hardware-based RAID to combine disks for both better performance and redundancy is your next choice. See number 4.
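
If you're not sure whether any of your databases already have more than one log file, here's a quick check (a sketch assuming SQL Server 2005 or later, which has the sys.master_files catalog view):

SELECT DB_NAME(database_id) AS database_name, COUNT(*) AS log_file_count
FROM sys.master_files
WHERE type_desc = 'LOG'
GROUP BY database_id
HAVING COUNT(*) > 1;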

4) [Edit 2019 by Paul: redundancy is still relevant, but performance implications may not be on fast flash storage.] Not only should you try to isolate the transaction log to its own physical disk but you should make sure that the logical/physical disk configuration is as efficient as possible. Try to use an isolated RAID 1 mirroring set if you don’t need significant capacity. If you need greater capacity OR you want better performance, consider a combination of RAID 0 and RAID 1 (either RAID 0 + 1 or RAID 1 + 0). While RAID 0 + 1 can often offer better performance, RAID 1 + 0 offers better reliability. If you’re new to RAID and are interested in learning more – check out the RAID Tutorial here: http://www.raidarray.com/04_01_00.html.

5) [Edit 2019 by Paul: this is still relevant.] Don’t get caught up in transaction log speed alone; you’ll also want to make sure that your transaction log is always available, as this can help you in times of disaster. Even if the data is damaged, if the transaction log is available and you have a series of backups up to and including the last transaction log backup, then you can make a final backup of the transaction log that will represent all of the changes since your last transaction log backup. If this backup is possible (it’s called backing up the “tail” of the log), then you can achieve up-to-the-minute recovery. This is only possible when the log is available. By placing the log on mirrored disks you can increase your chances of recovering data and minimize data loss!
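
If you ever find yourself in that disaster scenario, a tail-of-the-log backup is just a log backup with the NO_TRUNCATE option, which lets it run even when the data files are damaged or unavailable. A minimal sketch (the database name and path are hypothetical):

BACKUP LOG SalesDB
TO DISK = N'L:\Backups\SalesDB_tail.trn'
WITH NO_TRUNCATE;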

6) [Edit 2019 by Paul: this is still relevant.] Create transaction log files with a reasonable initial size. When you create a database it’s ideal to PRE-ALLOCATE both your data files and your transaction log file. A little bit of capacity planning goes a long way… Now, if you think you’ve got absolutely no idea how large a transaction log you’re going to need, well – here are the things that have the greatest impact:

  • Type of activity – transaction processing or decision support
  • Frequency of that activity – the more frequent the changes, the faster the transaction log will grow
  • Recovery Model – the recovery model of the database
  • Frequency of transaction log backups
  • Whether or not replication is used (since the log reader relies on the transaction log)

I wish I could give you a rough idea on sizing, but if you look only at database size and none of these other factors, you could end up with a transaction log that’s seriously oversized or seriously undersized. I’ve seen recommendations of 10-25% of the size of the data but there’s really no formula. A larger database with very frequent transaction log backups may not need a transaction log that’s even 1% of the size of the data… The best way is to set up your development environment similar to that of your production environment (including backup jobs) and then see how the transaction log grows. If you have a lot of autogrowth (because your guess was wrong), you can later clean up the fragmentation that has occurred and get back to a reasonable, intact, and optimal transaction log. If you want to know more about Recovery Models and their impact on the transaction log, review a sample book chapter from an MSPress book (Microsoft® SQL Server™ 2000 High Availability) that I helped to co-author. I only helped out on a couple of chapters but this specific chapter explains a lot about the general database environment settings that impact recovery. You can download Chapter 9: Database Environment Basics for Recovery here: https://www.sqlskills.com/resources/SQLServerHAChapter9.pdf
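
Once you've settled on sizes, pre-allocating is straightforward. Here's a minimal sketch (all names, paths, and sizes are hypothetical placeholders; they should come from your own capacity planning):

CREATE DATABASE SalesDB
ON PRIMARY
(
NAME = N'SalesDB_data'
, FILENAME = N'D:\Data\SalesDB.mdf'
, SIZE = 20GB
)
LOG ON
(
NAME = N'SalesDB_log'
, FILENAME = N'L:\Logs\SalesDB_log.ldf'
, SIZE = 4GB
, FILEGROWTH = 512MB
);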

7) [Edit 2019 by Paul: I’ve edited this to be current.] Don’t let autogrowth get out of control. As important as capacity planning is, you’re not likely to be spot-on in your estimates. I don’t recommend completely turning off autogrowth, but in general I also don’t like the default growth rate (or max size). In general, I would recommend setting ALL of the transaction log file’s attributes: initial size, growth rate AND maximum size. For the growth rate, I recommend something that can be allocated somewhat quickly and something of a fixed size. In general, I recommend a value of 1GB or less (based on total size), but large enough that you won’t autogrow again soon. So, for databases whose transaction logs are under 1GB, you might set autogrow to somewhere between 20 and 100 MB. For those of you who have transaction logs measured in GB, I’d set the autogrowth to 500MB or 1GB. In general, I don’t like the percentage option because it needs to be calculated, and the larger the file, the larger the autogrowth and the longer it takes. Autogrowth can create blocking… so, it’s best to minimize this in general.
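
As a sketch, setting a fixed growth increment and a maximum size looks like this (the database name, logical file name, and sizes are hypothetical; pick values that fit your environment):

ALTER DATABASE SalesDB
MODIFY FILE
(
NAME = N'SalesDB_log'
, FILEGROWTH = 512MB
, MAXSIZE = 50GB
);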

8) [Edit 2019 by Paul: I’ve edited this to be current.] Check/fix your internal fragmentation. OK, so this is the one that really helped the customer in Turkey. Often, when transaction logs are not pre-allocated and/or when there have been a lot of autogrowths, the transaction log can become internally fragmented. Internally your transaction log is broken down into smaller, more granular chunks called VLFs (Virtual Log Files). The size and number of VLFs you’ll have depends largely on the size of the chunk when it’s added to your transaction log. If you add a chunk of 64MB or less (for example, a 20MB growth – through autogrowth or through manual growth), then 4 VLFs are added. If you add a chunk which is greater than 64MB but less than or equal to 1GB, you’ll add 8 VLFs. If you add more than 1GB, you’ll add 16 VLFs. However, in SQL Server 2014, a change was made to the algorithm so that if your growth size is less than 1/8 of the total size, you only get 1 VLF. See Paul’s post here for more details.

In general, most transaction logs will have fewer than a few hundred VLFs – even a thousand could be reasonable depending on the total size of the transaction log (e.g. a TB-sized log). However, in many cases what happens is that excessive autogrowths cause an excessive number of VLFs to be added – sometimes resulting in many thousands of VLFs. Having an excessive number of VLFs can negatively impact all transaction log related activities, especially restores and crash recovery. To see how many VLFs you have, simply look at the number of rows returned by DBCC LOGINFO. The number of rows returned equals the number of VLFs your transaction log file has. If you have more than a few hundred, I would recommend fixing it and adjusting your autogrowth so that it doesn’t occur as frequently. Check out Paul’s post from 2010 here for some guidance on size vs. number of VLFs.
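
Here's a sketch of two ways to count VLFs (DBCC LOGINFO works on all versions discussed here; sys.dm_db_log_info requires SQL Server 2016 SP2 or later; the database name is hypothetical):

DBCC LOGINFO (N'SalesDB');  -- number of rows returned = number of VLFs

SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info (DB_ID(N'SalesDB'));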

To get rid of all of the excessive VLFs, follow these easy steps to shrink off the fragmented chunk and add a new, clean chunk to your transaction log:

1. Wait for an inactive time of day (ideally, it would be best to put the database into single user mode first) and then clear all transaction log activity through a regular transaction log backup. If you’re using the simple recovery model then you don’t need to do a log backup… Instead, just clear the transaction log by running a checkpoint.

 BACKUP LOG databasename TO devicename

2. Shrink the log to as small a size as possible (truncateonly)

DBCC SHRINKFILE(transactionloglogicalfilename, TRUNCATEONLY)

NOTE: if you don’t know the logical filename of your transaction log use sp_helpfile to list all of your database files.

3. You may have to repeat steps 1 and 2 several times.

4. Alter the database to modify the transaction log file to the appropriate size – in one step

ALTER DATABASE databasename
MODIFY FILE
(
NAME = transactionloglogicalfilename
, SIZE = newtotalsize
)

NOTE: Depending on the total size desired, you might want to break this into multiple chunks. Be sure to read this post as well: Transaction Log VLFs too many or too few? after reading this one. Not only can you have too many small VLFs but if incorrectly sized, you can have too few! And… there’s a bug referenced/mentioned there. You’ll want to read that post as well!
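
For example, to grow a large log in chunks rather than in one step (sizes are hypothetical; the 8000MB increments follow the workaround for the pre-2012 bug mentioned above):

ALTER DATABASE databasename MODIFY FILE (NAME = transactionloglogicalfilename, SIZE = 8000MB);
ALTER DATABASE databasename MODIFY FILE (NAME = transactionloglogicalfilename, SIZE = 16000MB);
ALTER DATABASE databasename MODIFY FILE (NAME = transactionloglogicalfilename, SIZE = 24000MB);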

And, that should do it!

41 thoughts on “8 Steps to better Transaction Log throughput”

  1. I have a question about SQL Server filegroups and performance and I’m hoping you can shed some light on the topic!

    I’m assuming the following: (Just a little context)

    1) I’m using the same total number of disks for my data files
    2) I don’t need filegroups for anything fancy like the new data partitioning schemes in Yukon\backups\read-only areas
    3) Question is for both DSS and OLTP workloads

    I’ve heard conflicting information about filegroups and/or total # of files and I/O performance with SQL Server 200X. Some articles recommend extending filegroups by adding additional data files (even when using the same physical volume). These articles tend to suggest that systems using more than 1 CPU can achieve higher levels of I/O throughput and lower levels of contention than databases running with just a single data file.

    I’ve also seen in BOL and in recent MSFT webcasts by Steven Wort that the RAID subsystem should be left to handle contention and that using multiple files does not buy you anything in terms of performance when located on the same volume.

    You wrote about this recently but the article was exclusively focused on tempdb I/O behavior. As far as I understand the physical structure of tempdb, it’s really no different than a regular user database so would increasing data file count proportionally to each CPU for all user databases be advantageous for most OLTP and DSS workloads?

    Looking forward to hearing your thoughts on this topic.

    BTW- your site is awesome!

  2. Hey there Jonas – I just updated my blog entry on TempDB Optimizations. You’re right though… I didn’t really say why it was *just* for tempdb. However, I did update the blog entry just now with this paragraph:

    —–
    Before I say what… let me tell you why? TempDB has a large number of objects being created all the time. For an object to be created, space must be allocated to it. Space allocation is determined by looking at some of the internal system pages (the GAM, and SGAM). In the end, it is these pages that start to have significant contention (with just one file) in a VERY active TempDB. To minimize that contention you can create multiple files.
    —–

    Now, can you apply this concept to user databases…no, not really. User databases don’t get a lot of new objects (not usually) and in the end, user databases don’t get the same intense load for page requests (like tempdb does). So…. in short, do you want lots of files for user databases – NO. Instead you might want 2 or even possibly 3 for faster insert performance (to allocate extents faster) BUT in the end it might hurt you (because of disk contention) for reads. I too would suggest that you let RAID best handle the IO; I definitely agree with Steven.
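
    (For anyone curious, adding a tempdb data file looks like this; a minimal sketch with a hypothetical file name, path, and size, since how many files and how large depends on your system:)

    ALTER DATABASE tempdb
    ADD FILE
    (
    NAME = N'tempdev2'
    , FILENAME = N'T:\TempDB\tempdev2.ndf'
    , SIZE = 4GB
    , FILEGROWTH = 512MB
    );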

    Thanks!
    kt

  3. Hi Kemberly,
    2 questions,
    1) Do you need to defrag the disk even if the disks are on a SAN? What the SAN vendor told me was that SAN disks are different from normal disks as they rely a lot on the SAN controller cache. A normal windows defrag utility would not be able to read SAN disks properly and might cause corruption if forced.
    2) Since VLFs are pre-determined when they’re created, does it make sense to create, say, a single 20GB transaction log file in 4 steps of 5GB each?

    Thanks,
    Simon

  4. My company has a standard build process with the database loaded on E: (used for Binaries and Backups), F: for Transaction logs and G: for SQL Database files.

    My question is the TEMPDB is not rehomed to G: for the MDF file and the F: for the LDF file. Is it worthwhile to move the associated files for TEMPDB to F: and G: as appropriate?

  5. Hi,

    For a volatile read/write intensive environment, the number of files can also assist in increasing the performance as regards to processing.

    Since sql will use 1 thread per file, the overall processing would be improved.

    Would you agree with that?

    Aasim

  6. Kimberly:

    Thanks for all of the insight on how to properly maintain the log file for a given database. I have one small question for the VLF fragmentation fix. Would it make sense to specify a FILEGROWTH parameter in the last step? If you specify this parameter you will hopefully guarantee that SQL Server will grow the log file by an appropriate size. This would help minimize further fragmentation of the log file. It’s not always a guaranteed thing that the correct FILEGROWTH parameter will be specified.
    Thanks again,

    Drew

  7. I would like a comment on the t-log issue described under 3) "SQL Server uses the transaction log files sequentially". I know this, but do you not agree that this becomes a minor benefit when using multiple databases with multiple log files on an isolated physical drive or an isolated logical drive in a SAN environment?
    In these situations I would normally recommend a RAID 10 configuration. What is your comment/suggestion?

    Thanks
    /peter.

  8. Simon,

    I was just reading this looking for some DB tips. From the SAN side of things: if it is truly a SAN presenting block-level storage to your OS, your OS formats the space. You absolutely need to defrag from the OS level. The SAN may do a defrag to re-organize at the block level, but the SAN has no visibility into what is contained in those blocks. It does not reorganize files. Typically a SAN will consolidate LUNs to the outer edge of the disk, which offers the best performance. So if you delete a LUN it will reorganize the LUNs towards the outer edge w/o any concern for where files are. I guess a basic SAN concept is that it cannot see files, only blocks.

    Mike G.

  9. Hi,

    I have the following setup, would like to know what’s best for log files, data files, indexes, tempdb files.
    Also, since the server is used solely for SQL 2005 Reporting Services, how many tempdb data files and tempdb log files should I create, and how many userdb data files and userdb log files? Want to make reports fly.

    Disk configuration:
    C: (135GB), RAID 1 – Operating System.
    D: (1.35TB), RAID 5 – SQL Data files.
    E: (556GB) RAID 5 – SQL Log files.
    G: (999GB) RAID 5 (on SAN) – SQL Index files.

    MEMORY = 8GB DDR2, Processor = Intel Xeon Dual 3.60GHz.
    Windows Server 2003 Enterprise, SQL Server 2005 Enterprise Edition.

    Thanks
    sonyt65@yahoo.com

  10. We have many large multi-terabyte databases in busy healthcare environments hosted on shared client SANs; we create secondary log files to prevent disaster scenarios whenever client backups fail (shockingly common) and primary log drives located on fast spindles or ramsans fill up. We place these secondary log files on slower volumes in their SANs because failure of transaction log backups during certain high processing events could cause downtime to labs and hospital networks that are exchanging clinical patient information, test results, images… so we do whatever we can to eliminate potential disaster (when it’s not reasonable or easy to shut off databases or truncate logs without doing log backups). Sadly, many mid-sized SAN environments do not possess the storage or performance capacities to handle frequent snapshotting of changes.

    We’ve found that once SQL Server moves over to the secondary transaction log file, it never chooses to move back to the primary log file unless the secondary completely fills up (requiring another transaction log backup failure event).

    Is there a way to cause SQL Server to move back to the primary transaction log file once pages in that file have been marked free (the transaction log backup process has restored and completed its sweep of the transaction log files)?

    Thanks!

    1. Only by resizing it (shrinking it) and then removing it once it’s not active and once you’ve moved back to the first log file.

      Having said that though, if you want to keep it at the defined size and just have SQL move back to the first log before it fills the second file, then no.
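
      (A minimal sketch of that shrink-then-remove sequence, with hypothetical database and logical file names, once the secondary file no longer holds active log:)

      USE YourDatabase;
      GO
      DBCC SHRINKFILE (N'YourDB_log2', TRUNCATEONLY);
      GO
      ALTER DATABASE YourDatabase REMOVE FILE [YourDB_log2];
      GO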

      Sorry!
      kt

  11. Hello,
    Is there a reliable way to correct bad sizing of the first two vlfs in the primary database log file (even if it requires reliable clean shutdown, etc)? I have two cases in mind:
    1) more frequent but probably less critical when first 2 vlfs are 1 mb or less and remaining vlfs are desired size of 250 or 500 mb
    2) when log was created for heavy growth and first two vlfs are 4 gb each.
    In each case the log can be truncated to the first two vlfs, but is there a good way to correct the size of the initial vlfs for the log file?
    Even “create database… for attach_rebuild_log” seems to always create an initial log of 1 mb with vlfs too small for my liking.

    1. Unfortunately not. You either have the case where the initial VLF size is tied to an overly large initial transaction log file OR where you let SQL go with the default (which is tied to model). But, I would just let the first ones stay small and worry more about larger files and/or too many. A couple of extra small VLFs won’t really create a huge problem.

      But, for new DBs, definitely better to allocate the first 8GB and then grow for subsequent 8GB chunks (to your desired size).

      hth,
      kt

  12. It is not true that transaction logs are not spanned across multiple files. At least for SQL Server 2008.

  13. Hey Kimberly good to see you’re still replying to this great old blog post! This stuff still helps especially us “accidental” DBA’s!

    I’m a “DBA” not by choice for a medium sized company ~40 SQL instances. We typically run our DB’s in Simple recovery if anything for reduced maintenance due to log growth disk space mgt. Anyway I have a SQL 2012 database that supports a help desk app (Trackit) that is in Simple mode. I typically set log autogrowth from default % to a set Megabyte value (256Meg in this case (blind guess)). After a few months of use the logfile has fragmented into 605 VLF’s and counting and 13Gig.

    Other than the large number of VLF’s, no big deal right? The weird thing is the status on ALL but one of the 605 VLF’s is = 2. So this log has one unused VLF and is 99% full. The data file is still at the 256Meg I set it to at the beginning. I’m guessing all relevant data is stuck in the log file and checkpointing is not happening. Any ideas for how to fix this situation?

    Thanks!
    Ron

    1. Hey there Ron – Hard to say. You need to check the log_reuse_wait_desc and see what’s causing this. It’s unlikely that checkpoints aren’t happening (unless someone messed with the recovery interval?). If you’re unfamiliar with log_reuse_wait_desc – google / bing this. Add Randal to the search if you want to see some posts that Paul’s specifically done on this!

      Hope that helps!
      k

      PS – Sorry for my delay. I’ve just found a bunch of these messages in SPAM. Thanks!
