Search Engine Q&A #25: Why isn’t my log backup the same size as my log?

I woke up this morning and someone had replaced my wife with someone who likes to blog :-). Kimberly’s turned over a new leaf and is going to blog much more often – in fact she’s blogged 4 times today already. Check out her blog here.

Continuing on the transaction log theme of the last few Search Engine Q&A posts, this one addresses a question I’ve heard a few times, most recently on an MVP discussion group. Let me paraphrase:

If I have a transaction that inserts a huge amount of data, the transaction log grows to 50-GB. I then roll back the transaction. When I take the next log backup, it’s way smaller than 50-GB. What’s going on?

Let’s see if we can repro the scenario. I’ve created a database with a 500-MB data file and a 1-MB log file, with 100-MB and 1-MB auto-growth increments respectively. I want the log to be as small as possible and to grow in small chunks so I can see just how much it *has* to grow by, rather than having a large growth size. Then I set the recovery model to FULL and took a full database backup so that the log won’t truncate until it’s backed up.

CREATE DATABASE LogSizeTest ON PRIMARY (
    NAME = N'LogSizeTest',
    FILENAME = N'C:\SQLskills\LogSizeTest.mdf',
    SIZE = 512MB,
    FILEGROWTH = 100MB)
LOG ON (
    NAME = N'LogSizeTest_log',
    FILENAME = N'C:\SQLskills\LogSizeTest_log.ldf',
    SIZE = 1MB,
    FILEGROWTH = 1MB);
GO
ALTER DATABASE LogSizeTest SET RECOVERY FULL;
GO
BACKUP DATABASE LogSizeTest TO DISK = 'C:\SQLskills\LogSizeTest.bak';
GO

Let’s check the size of the log:

DBCC SQLPERF (LOGSPACE);
Database Name  Log Size (MB) Log Space Used (%) Status
————– ————- —————— ———–
LogSizeTest    0.9921875     36.66339           0

DBCC SQLPERF (LOGSPACE) returns info for all databases; I’ve trimmed the output down to just the LogSizeTest database.

Now I’m going to create a table, start an explicit transaction, and add about 500-MB of data to the table (64000 rows of 8-KB each).

USE LogSizeTest;
GO
CREATE TABLE Test (c1 INT IDENTITY, c2 CHAR (8000) DEFAULT (REPLICATE ('a', 8000)));
GO

BEGIN TRAN;
GO
DECLARE @count INT;
SELECT @count = 0;
WHILE (@count < 64000)
BEGIN
    INSERT INTO Test DEFAULT VALUES;
    SELECT @count = @count + 1;
END;
GO

Running DBCC SQLPERF (LOGSPACE) again gives:


Database Name  Log Size (MB) Log Space Used (%) Status
————– ————- —————— ———–
LogSizeTest    703.9922      99.98737           0

The log size has grown to about 700-MB, way more than the size of the data I was inserting, and it’s completely full. Now let’s roll back the transaction and check the log size again.

ROLLBACK TRAN;
GO
DBCC SQLPERF (LOGSPACE);
Database Name  Log Size (MB) Log Space Used (%) Status
————– ————- —————— ———–
LogSizeTest    703.9922      85.21268           0

The size of the log file is the same, but the percentage used has actually gone down! How can that happen? Let’s take a log backup and check out its size:

BACKUP LOG LogSizeTest TO DISK = ‘C:\SQLskills\LogSizeTest_log.bck’;
RESTORE HEADERONLY FROM DISK = ‘C:\SQLskills\LogSizeTest_log.bck’;

The BackupSize in the output from the RESTORE HEADERONLY is 631454208, which is 602.2-MB. Taking the numbers from the DBCC SQLPERF output above, 85.21268% of 703.9922-MB is 599.89-MB – so the backup is roughly the same size as the used transaction log. That’s what I’d expect, but why is it smaller than the total size of the transaction log?

So what’s going on? Why did the transaction log grow so much larger than the size of the data being inserted, and why did the percentage used actually *drop* after the transaction rolled back?

The answer is in the way the transaction log works. Whenever a logged operation occurs in a transaction, there is some transaction log space reserved in case the transaction rolls back. The idea is that there’s always enough space available in the transaction log for a transaction to roll back, without having to grow the transaction log and potentially have that fail. If a transaction could not roll back successfully because the log didn’t have enough space, the database would become transactionally inconsistent, would be taken offline and the state changed to SUSPECT.

The behavior we saw was the Storage Engine reserving transaction log space for a potential rollback. When the rollback occurred, the transaction log records necessary to undo the effects of the transaction (called compensation log records) were created and written to the log. The issue is that they usually don’t take up as much space as the Storage Engine reserved, as it tends to be very conservative in its estimates of how much log space to reserve, to avoid the potential for SUSPECT databases. This explains the difference between the various sizes and percentages we saw above.
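As an aside, you can actually watch this reservation happening. The sketch below (illustrative, not from the original post) queries the sys.dm_tran_database_transactions DMV while the big transaction is still open, before the rollback:

```sql
-- Run in a separate connection while the insert transaction is still open.
-- LogBytesReserved is the space set aside in case the transaction rolls back;
-- it is typically larger than what the eventual compensation records consume.
SELECT
    DB_NAME (database_id) AS [Database],
    database_transaction_log_bytes_used AS [LogBytesUsed],
    database_transaction_log_bytes_reserved AS [LogBytesReserved]
FROM sys.dm_tran_database_transactions
WHERE database_id = DB_ID (N'LogSizeTest');
```

For the workload above you’d expect to see the reserved byte count substantially exceed the used byte count, which is exactly the conservatism described here.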

The Storage Engine code to do the reservations is quite interesting – I remember fixing a couple of bugs in it during SQL Server 2000 development in 1999 while I was getting to know the internals of the logging and recovery system before tackling some of the (since removed) log-reading code in DBCC CHECKDB in SQL Server 2000.

Anyway, there you have it. Log space reservation is the answer, and is also one of the reasons why it can be tricky to estimate how large a transaction log should be when a database is created.

Search Engine Q&A #24: Why can’t the transaction log use instant initialization?

This is a quickie in response to a blog comment from my previous post on instant initialization: How to tell if you have instant initialization enabled? The comment was:

I must say, I love instant initialization. It’s awesome. But I always wondered why it’s not available for the log file. I assume there’s a technical reason… but what is it? Does it depend on having the rest of the file be zeroed out? Doesn’t it already know where its start and stop points are anyway, since the log is circular?

There is a lot of metadata kicking around in the Storage Engine about the transaction log (mostly in the boot page – see my post Search Engine Q&A #20: Boot pages, and boot page corruption), including where to start reading the log during crash recovery. However, there’s nothing persisted about where the end of the log is, so when a crash occurs SQL Server has to determine where the active transaction log ends (i.e. where should crash recovery stop processing log records).

The way this is done is to have each log sector have parity bits stamped on it. When the log is first created, it is zero-initialized (with zero being an illegal log sector parity value). As the log is written, each 512-byte sector in a VLF has parity bits stamped on it (initially 64). When the end of the log is reached, and it wraps around to the start of the log file, the initial VLF is reused and the parity bits are flipped to 128, so that overwritten log sectors have the new parity bits (and then for each successive reuse of a VLF the parity bits flip back-and-forth between 64 and 128). When a crash occurs, log sectors are read and processed until a log sector in a VLF is found that has the ‘old’ parity bit setting for that VLF.
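If you want to see the parity values for yourself, the undocumented (but widely used) DBCC LOGINFO command lists the VLFs in a database’s log. This example is illustrative, not from the original post:

```sql
-- Each row is one VLF. The Parity column shows the current parity setting:
-- 64 or 128 for a VLF that has been used, 0 for one that never has.
-- FSeqNo and Status show the log's circular reuse in action.
DBCC LOGINFO (N'LogSizeTest');
```

Running it before and after the log wraps around shows the parity value for a reused VLF flipping between 64 and 128, as described above.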

This entire process will not work if there’s already random data in the space used by the log file – some of the random data could just look like a valid set of parity bits and cause the recovery system to try to process a log sector full of garbage, leading to a suspect database, at best.

All new portions of the transaction log must be zero-initialized, but only once, when the space is first added to the log file.

Search Engine Q&A #23: My transaction log is full – now what?

There's been a spate of problems on the forums and in my inbox with people's transaction logs filling up. With the increase in "involuntary DBAs", there are more people being thrust into the role of database administration without being given the chance to learn the skills, tips, and tricks necessary. One of the most common problems I see is the transaction log filling up and the database grinding to a halt, and the most common cause of this is the lack of log backups when in the FULL recovery mode.

The very first post in my Search Engine Q&A series last September dealt with this problem – and showed two examples of filling up a transaction log plus the command to see why a transaction log is full (examining the log_reuse_wait_desc column in master.sys.databases). See here for more details. One thing I don't go into too much detail about in that post is the way that transactional replication or database mirroring can cause the log to grow. Both of these technologies rely on reading the transaction log for an unbroken stream of changes to be sent to the secondary systems (but with slightly different mechanisms) – but both will cause the log to grow if there is transaction log that hasn't yet been read (usually because of a performance issue). The same thing applies to the new Change Data Capture technology in SQL Server 2008 – it uses the transactional replication log reader to harvest changes from the log. The addition of any of these technologies could cause your log to grow excessively.
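As a quick refresher, here's the check described in that earlier post (the database name is just a placeholder):

```sql
-- log_reuse_wait_desc explains why the log can't currently be cleared.
-- Common values: NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION,
-- DATABASE_MIRRORING, ACTIVE_BACKUP_OR_RESTORE.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'LogSizeTest';
```

A value of LOG_BACKUP is the tell-tale sign of the scenario below: FULL recovery with no log backups being taken.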

So it's grown too big – now what? Let's deal with the case where you've taken a full database backup in the FULL recovery mode but haven't taken any log backups. There are a number of options:

  1. Take a log backup! This might be easier said than done if you're like the customer I once had who had a 10GB database with an (I kid you not) 987GB log file.
  2. Switch to the SIMPLE recovery model (either permanently or temporarily). If you're not interested in point-in-time recovery, there's no need for you to be in the FULL recovery model (unless you're using a feature that requires it, like database mirroring, in which case you have no choice but to start taking log backups).
  3. In SQL Server 2005 and before, you can manually throw away the log contents – see my blog post here. This is not advisable.
  4. Grow the log file. This really just addresses the symptom but not the cause – the log will fill up again eventually.

Note that options #1-3 will just remove the restriction that the log contents are still needed (called "clearing" or "truncating" the log). None of these will cause the log file to shrink. The ONLY way to change the size of the log file is to use DBCC SHRINKFILE. After doing one of #1-3, you must manually shrink the log file using DBCC SHRINKFILE. If you do #2, and then go back into the FULL recovery mode, you should immediately take a full database backup. After shrinking the log file, you should manually grow it again using ALTER DATABASE so that it doesn't suffer repeated auto-grows and lead to performance issues with the log (see Kimberly's post here).
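Putting that together, a sketch of the option #1 sequence might look like this (file names and the 512MB target are assumptions based on the examples earlier in this post, not prescriptions):

```sql
-- 1. Clear the log with a log backup so the inactive portion can be reused.
BACKUP LOG LogSizeTest TO DISK = 'C:\SQLskills\LogSizeTest_log2.bck';
GO
-- 2. Shrink the physical log file (target size in MB).
DBCC SHRINKFILE (N'LogSizeTest_log', 1);
GO
-- 3. Grow it back to a sensible size in one step, rather than
--    letting it suffer repeated small auto-grows.
ALTER DATABASE LogSizeTest
MODIFY FILE (NAME = N'LogSizeTest_log', SIZE = 512MB);
GO
```

The right final size depends on your workload and log backup frequency, which is exactly why log sizing is hard to get right up front.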

This topic is really a two-hour lecture in the database maintenance class I teach, but at least here (and with the blog post links) I've given you a bit of insight into why it can happen and how to solve the problem.