One of the great things about the blog engine we use is that it shows all the search engine queries that led to someone clicking through to the site. I’ve been looking through the logs to see what kind of problems people are having that end up here. In this occasional series, I’m going to pick a search engine query and blog about it – hopefully helping out people who have the problem in future.

First up is running out of transaction log space. This happens when the transaction log fills up to the point where it has to grow but either autogrow is not turned on or the volume on which the transaction log is placed has no more space for the file to grow. So what causes the transaction log to fill up in the first place? It could be a number of different things. The two most common ones I’ve seen are:

  • The database is in full recovery mode with normal processing, a full database backup has been taken but no log backups have been taken. This commonly happens when an application goes into production and someone decides to take a backup, without realizing the consequences. Log truncation is not automatic in such a case – a log backup is required to allow the log to truncate.
  • The database is in any recovery mode but there’s a very long-running, uncommitted transaction that prevents log truncation. (Even in full recovery mode with regular log backups, all the log records for the long-running transaction are required in case it rolls back – and SQL Server can’t selectively truncate log records from the log for some transactions but not others.)

Let’s have a look at both of these in SQL Server 2005, along with example solutions.

The log backup case…

First thing to do is create a database with a very small transaction log that we can easily fill up.

USE [master];

IF DATABASEPROPERTY (N'tinylogtest', 'Version') > 0 DROP DATABASE [tinylogtest];

CREATE DATABASE [tinylogtest] ON (
    NAME = N'fgt_mdf',
    FILENAME = N'c:\tinylogtest\tinylogtest.mdf',
    SIZE = 2MB)
LOG ON (
    NAME = N'fgt_log',
    FILENAME = N'c:\tinylogtest\tinylogtest.ldf',
    SIZE = 512KB,
    FILEGROWTH = 0);

Notice that I’ve specifically turned auto-growth of the log file off by setting the file growth to zero. Now let’s set the database into full recovery mode and take a full database backup.

ALTER DATABASE [tinylogtest] SET RECOVERY FULL;

BACKUP DATABASE [tinylogtest] TO DISK = N'C:\tinylogtest\dummybackup.bck';
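
If you want to double-check the recovery mode before going further, DATABASEPROPERTYEX will report it:

SELECT DATABASEPROPERTYEX (N'tinylogtest', N'Recovery');

This should return FULL at this point (the other possible values are SIMPLE and BULK_LOGGED).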

And then create a simple table and fill up the transaction log.


USE [tinylogtest];

CREATE TABLE [testtable] ([c1] INT, [c2] CHAR (3000));

WHILE (1=1)
    INSERT INTO [testtable] VALUES (1, 'a');

And we get the following error:

Msg 9002, Level 17, State 2, Line 4
The transaction log for database 'tinylogtest' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.

This is much better than what you get on SQL Server 2000:

Msg 9002, Level 17, State 2, Line 4
The log file for database 'tinylogtest' is full.

This has no helpful information at all. So, looking at the 2005 message, it’s telling us to look in the sys.databases catalog view for more info. Let’s do that:

SELECT [log_reuse_wait_desc] FROM sys.databases WHERE [name] = N'tinylogtest';

And the result is:

LOG_BACKUP

That’s pretty clear – take a log backup! Here are the various values this can take (at the time of writing), from the Books Online entry for sys.databases:

  • 0 = NOTHING
  • 1 = CHECKPOINT
  • 2 = LOG_BACKUP
  • 3 = ACTIVE_BACKUP_OR_RESTORE
  • 4 = ACTIVE_TRANSACTION
  • 5 = DATABASE_MIRRORING
  • 6 = REPLICATION
  • 7 = DATABASE_SNAPSHOT_CREATION
  • 8 = LOG_SCAN
  • 9 = OTHER_TRANSIENT

Their meanings are explained in the Books Online entry Factors That Can Delay Log Truncation. Note that the value returned is what was stopping log truncation when it was last attempted.

Now if we take a log backup, the log can be truncated.

BACKUP LOG [tinylogtest] TO DISK = N'C:\tinylogtest\dummybackup.bck';

And checking the state in sys.databases again returns:

NOTHING

Simple – but it’s amazing how many times I see this problem on newsgroups and forums.
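
Incidentally, a quick way to see how full any transaction log is at any point is DBCC SQLPERF:

DBCC SQLPERF (LOGSPACE);

This returns the log size and the percentage of log space used for every database on the instance – handy for confirming that a log backup really did free up space for reuse.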

The long-running transaction case…

For this test, assume I’ve run the T-SQL above to create the database, put it into full recovery mode, take the first full backup, and then create the table. Now in a second connection, I’ll create a long-running transaction:


BEGIN TRAN PAULSTRAN;
INSERT INTO [tinylogtest]..[testtable] VALUES (1, 'a');

Note that the transaction hasn’t been committed or rolled back. In the original connection, I’ll execute the loop to fill up the log again. When we check sys.databases we get back:

LOG_BACKUP

So we take a log backup and check the state again. This time we get:

ACTIVE_TRANSACTION

We need to use the DBCC OPENTRAN command to find out what the long-running transaction is:

DBCC OPENTRAN (N'tinylogtest');

And we get back:

Transaction information for database 'tinylogtest'.

Oldest active transaction:
SPID (server process ID) : 54
UID (user ID)            : -1
Name                     : PAULSTRAN
LSN                      : (18:98:1)
Start time               : Sep 23 2007  5:49:53:077PM
SID                      : 0x010500000000000515000000ae4da6eadcd59cf661d6bb58ed030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
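
The SPID tells us which connection owns the oldest active transaction. One way to see who that is (a sketch – the SPID of 54 is just what was reported above and will differ on your system) is to query the sysprocesses compatibility view:

SELECT [loginame], [program_name], [open_tran], [last_batch]
FROM [master].[dbo].[sysprocesses]
WHERE [spid] = 54;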

From this info we can track down the transaction and commit it or roll it back. In our example, once we roll back the PAULSTRAN transaction, there’s enough log space to insert one more record into the table, but then we run out of log space again. What’s going on? Because the PAULSTRAN transaction was very close to the start of the transaction log, all of the log it was effectively holding hostage was backed up but couldn’t be cleared. In this case we need to perform another log backup to allow the log to be cleared. This makes sense and is documented behavior.
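
In this example, then, the fix is simply the following (with the rollback run in the connection that owns the transaction):

-- In the connection that owns the PAULSTRAN transaction:
ROLLBACK TRAN;

-- Back in the original connection, take another log backup to let the log clear:
BACKUP LOG [tinylogtest] TO DISK = N'C:\tinylogtest\dummybackup.bck';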


Of course, there are other causes of the transaction log filling up, and other things you can do apart from taking log backups or identifying long-running transactions – such as growing the log file or moving the log file to a volume with more free space.
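
For completeness, here’s what those two alternatives look like in T-SQL – a sketch, and note that the d: path in the second statement is just an assumed example of a volume with more free space:

-- Grow the log file manually (possible even though autogrow is off):
ALTER DATABASE [tinylogtest] MODIFY FILE (NAME = N'fgt_log', SIZE = 10MB);

-- Or add a second log file on another volume (assumed path):
ALTER DATABASE [tinylogtest] ADD LOG FILE (
    NAME = N'fgt_log2',
    FILENAME = N'd:\tinylogtest\tinylogtest2.ldf',
    SIZE = 10MB);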

Hope this helps!