Search Engine Q&A #23: My transaction log is full – now what?

There's been a spate of problems on the forums and in my inbox with people's transaction logs filling up. With the increase in "involuntary DBAs", more and more people are being thrust into the role of database administrator without being given the chance to learn the necessary skills, tips, and tricks. One of the most common problems I see is the transaction log filling up and the database grinding to a halt, and the most common cause of this is a lack of log backups when in the FULL recovery model.

The very first post in my Search Engine Q&A series last September dealt with this problem – it showed two examples of filling up a transaction log, plus the command to see why a transaction log is full (examining the log_reuse_wait_desc column in master.sys.databases). See here for more details. One thing I didn't go into much detail about in that post is the way that transactional replication or database mirroring can cause the log to grow. Both of these technologies rely on reading the transaction log to get an unbroken stream of changes to send to the secondary systems (with slightly different mechanisms), and both will cause the log to grow if there is transaction log that hasn't yet been read (usually because of a performance issue). The same applies to the new Change Data Capture technology in SQL Server 2008 – it uses the transactional replication log reader to harvest changes from the log. Adding any of these technologies could cause your log to grow excessively.
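
For reference, here's the check itself – a minimal sketch, where YourDatabase is a placeholder name:

    -- Why can't the log be cleared/reused for this database?
    SELECT [name], [log_reuse_wait_desc]
    FROM master.sys.databases
    WHERE [name] = N'YourDatabase';
    -- Typical values: NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION,
    -- REPLICATION, DATABASE_MIRRORING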

So it's grown too big – now what? Let's deal with the case where you've taken a full database backup in the FULL recovery model but haven't taken any log backups. There are a number of options (T-SQL sketches of the common ones follow the list):

  1. Take a log backup! This might be easier said than done if you're like one customer of mine, who had a 10GB database with an (I kid you not) 987GB log file.
  2. Switch to the SIMPLE recovery model (either permanently or temporarily). If you're not interested in point-in-time recovery, there's no need for you to be in the FULL recovery model (unless you're using a feature that requires it, like database mirroring, in which case you have no choice but to start taking log backups).
  3. In SQL Server 2005 and before, you can manually throw away the log contents – see my blog post here. This is not advisable.
  4. Grow the log file. This really just addresses the symptom but not the cause – the log will fill up again eventually.
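
By way of illustration, here's what options #1 and #2 look like in T-SQL – a minimal sketch, with YourDatabase and the backup path as placeholders (option #4 uses the same ALTER DATABASE ... MODIFY FILE command shown after the next paragraph):

    -- Option 1: take a log backup (and then keep taking them on a schedule!)
    BACKUP LOG [YourDatabase] TO DISK = N'D:\Backups\YourDatabase_log.trn';

    -- Option 2: switch to the SIMPLE recovery model
    -- (this breaks the log backup chain)
    ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;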

Note that options #1-3 will just remove the restriction that the log contents are still needed (called "clearing" or "truncating" the log). None of them will cause the log file to shrink. The ONLY way to reduce the size of the log file is with DBCC SHRINKFILE, so after doing one of #1-3, you must manually shrink the log file using DBCC SHRINKFILE. If you do #2 and then switch back into the FULL recovery model, you should immediately take a full database backup. After shrinking the log file, you should manually grow it again using ALTER DATABASE so that it doesn't suffer repeated auto-grows, which can lead to performance issues with the log (see Kimberly's post here).
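
For instance, the shrink-then-regrow sequence might look like this – again a sketch, where the database name, logical file name, and target sizes are placeholders (check sys.database_files for your log file's real logical name):

    USE [YourDatabase];
    GO
    -- Shrink the log file down to ~1GB (the target size is in MB)
    DBCC SHRINKFILE (N'YourDatabase_log', 1024);
    GO
    -- Then grow it back to its working size in one step,
    -- rather than relying on repeated auto-grows
    ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8GB);
    GO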

This topic is really a two-hour lecture in the database maintenance class I teach, but at least here (and with the blog post links) I've given you a bit of insight into why it can happen and how to solve the problem.

8 thoughts on “Search Engine Q&A #23: My transaction log is full – now what?”

  1. Paul, thanks a lot.

    My question is: say I have a series of log backups from 10 am to 4 pm, taken at 30-minute intervals, and when I try to restore them a few of the log backups fail to restore – say the backups taken at 12 noon, 12:30, and 1 pm. What are the possible reasons, and how do I fix it?

    1. All kinds of things can prevent a backup from restoring, like corruption. There's no way to fix a non-restoring backup – your restore sequence ends with the last one that restored successfully. Keep multiple copies of your backups, and a redundant copy of the database, if you want to be able to recover when your scenario occurs.
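
      To make that concrete, here's a sketch of a restore sequence (all names and paths are placeholders) – if the 12:00 log backup won't restore, the sequence has to stop with the 11:30 one:

        -- Restore the full backup, then each log backup in order, WITH NORECOVERY
        RESTORE DATABASE [YourDatabase] FROM DISK = N'D:\Backups\Full.bak' WITH NORECOVERY;
        RESTORE LOG [YourDatabase] FROM DISK = N'D:\Backups\Log_1100.trn' WITH NORECOVERY;
        RESTORE LOG [YourDatabase] FROM DISK = N'D:\Backups\Log_1130.trn' WITH NORECOVERY;
        -- The 12:00 log backup fails to restore, so the sequence must stop here;
        -- bring the database online as of the 11:30 backup:
        RESTORE DATABASE [YourDatabase] WITH RECOVERY;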

  2. Hi Paul,
    I have a question for you. My database's log file was about 1GB in size. After running an index maintenance job, which rebuilt and/or reorganized indexes based on their fragmentation level, the log file grew to 22GB. The database uses the FULL recovery model, with full backups every night and log backups every 2 hours.
    This growth filled up all the remaining disk space. To deal with it, I switched the recovery model from Full to Simple, then took a full backup, shrank the log file, and changed the recovery model back to Full.
    Is this correct approach? What would you advise?
    Thanks,

    1. Not really, as it breaks your log backup chain and makes it impossible to restore from an earlier full backup if the one you took is corrupt (until you take the next one).

      A better approach would have been to temporarily add another log file. With shrinking the log, it's just going to grow again the next time a large rebuild/reorganize is required, causing a perf hit while it zero-initializes. You should tweak your maintenance so it doesn't do so much rebuilding, take more frequent log backups to prevent the log growing so large, and maybe consider switching to BULK_LOGGED when you're going to rebuild, to reduce the log growth required.
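
      For example, temporarily adding a second log file is just a couple of statements – a sketch, with the database name, logical file name, path, and size all placeholders:

        -- Temporarily add a second log file on a drive with free space
        ALTER DATABASE [YourDatabase]
        ADD LOG FILE (NAME = N'YourDatabase_log2',
            FILENAME = N'E:\SQLLogs\YourDatabase_log2.ldf', SIZE = 10GB);
        -- Once the maintenance is done and the new file is empty, remove it
        ALTER DATABASE [YourDatabase] REMOVE FILE [YourDatabase_log2];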

  3. Thanks!
    I restored that DB on dev and tweaked the index maintenance plan a little. This time the log file grew from 1GB to only 6GB, with the FULL recovery model. I'm going to try the BULK_LOGGED recovery model too. This is fun!
    Thank you,

  4. Hi Paul,
    We have an application which uses MS SQL 2008 R2 (64-bit) to store its data through a linked OLEDB provider.
    Every night we back up the MDF and LDF files of this database using MS Shadow Copy. We've been doing this for around 2 years, and in all that time the log file has been continually growing. The database setup was done by the application's installation program, which didn't ask about many details of the SQL setup, so I assume it took all the default settings from the model database. Of course, that means the log file growth increment is 10%.
    So we currently have an 80GB log on a database with 30GB of data, and it's becoming a problem, because we transfer the backup files (more than 100GB) every night over a municipal WAN and it takes around 2 hours.

    I looked at the ‘log_reuse_wait_desc’ column and found it has the value “LOG_BACKUP”, which is weird as we do backups daily. I can see in the log files how many pages each backup dumped, with the first and last LSNs, etc. Using DBCC LOGINFO, I found that there are 765 VLFs in this log, all with status 2.

    My question is: what is the proper direction to fix the log growth issue?
    I'm about to do a manual backup to a local SQL device and see if it changes the ‘log_reuse_wait_desc’ value. If it helps, I will shrink the log file. But now I'm not sure if I can use Shadow Copy for SQL backups.

    Also, what is the proper database backup procedure? Should you first configure a SQL backup device and back up to it, and only then use backup software like VSS, ViceVersa, etc.?

    1. Something the MS Shadow Copy backup is doing is making the log backup a COPY_ONLY log backup, which means SQL Server doesn't consider a real log backup to have happened. The only backup procedure I'm familiar with is using SQL Server itself. If you want to use any other method, you need to make sure it's doing SQL Server backups, or that you are.
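
      One way to verify what a backup tool is actually doing is to check the backup history in msdb – a minimal sketch, with the database name as a placeholder:

        -- Recent backups: type D = full, I = differential, L = log
        SELECT backup_start_date, [type], is_copy_only
        FROM msdb.dbo.backupset
        WHERE database_name = N'YourDatabase'
        ORDER BY backup_start_date DESC;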

      1. Thanks for the reply Paul.
        I figured out how to deal with the growing log files. I do manual backups from time to time using the SQL BACKUP command for the logs only, then shrink the LDF with the TRUNCATEONLY option.
        Now I'm trying to find out why two identical servers have completely different behavior, and to find something that could explain it. The settings (database properties) are identical. The difference is the load, but that, I assume, should not have an effect like this. Server A dumps 3.7M pages a day, Server B only 1.5M pages/day.
        Server B has a 5MB log file with 8 VLFs. Its ‘log_reuse_wait_desc’ value is always “NOTHING”. And even if some VLFs have the active status (2), it flushes them and auto-truncates the log file after the next checkpoint. I queried fn_dblog for rows with certain dates in the Checkpoint Begin and Checkpoint End fields (like ‘%2015%’). Server B always returns only two rows: one for the beginning of the checkpoint (LOP_BEGIN_CKPT) and one for the end (LOP_END_CKPT).
        Server A has a growing LDF file (incrementing by 1GB). If I query fn_dblog for the checkpoints, it returns all the checkpoints since the last log backup and truncation. It can take hours at the current size of 4GB.
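
        For reference, the checkpoint query described above looks something like this (fn_dblog takes two NULL parameters to dump the whole active log):

          SELECT [Current LSN], Operation, [Checkpoint Begin], [Checkpoint End]
          FROM fn_dblog (NULL, NULL)
          WHERE Operation IN (N'LOP_BEGIN_CKPT', N'LOP_END_CKPT');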
