Search Engine Q&A #6: Using fn_dblog to tell if a transaction is contained in a backup

Here’s a really interesting question that was in my search engine logs yesterday – if I have a transaction that runs and completes while a backup is running, will the complete transaction be in the backup? The answer is… it depends!

In terms of what gets backed up, the way a full backup works is:

  1. Note the transaction log’s LSN (Log Sequence Number)
  2. Read all allocated extents in the various data files
  3. Note the LSN again
  4. Read all the transaction log between the starting LSN and the ending LSN

Any transaction that commits before or on the LSN read in step 3 will be fully reflected when the database is restored. If not, the transaction will be undone. So you can’t just go by the completion time of the backup and the completion time of the transaction. The transaction may well commit before the backup operation completes, but it may complete during step 4, and so it will get rolled back during a restore. In this case, it’s necessary to take a log backup as well and restore that too to make the transaction be fully reflected after a restore.
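The decision at restore time boils down to an LSN comparison. Here's a minimal sketch, modeling an LSN as a (VLF sequence number, log block, slot) tuple – that tuple form is an illustration for this sketch, not SQL Server's internal representation:

```python
# An LSN can be modeled as a (VLF sequence number, log block, slot) tuple;
# Python's tuple ordering then matches the order LSNs occur in the log.
def fully_reflected_after_restore(commit_lsn, step3_lsn):
    """True if the transaction's commit record falls at or before the
    LSN noted in step 3, so restore redoes (keeps) the transaction;
    otherwise restore rolls it back."""
    return commit_lsn <= step3_lsn

# A commit at 5E:628:1B2 is after a backup-end LSN of 5B:1870:1, so
# restoring just the full backup rolls that transaction back.
print(fully_reflected_after_restore((0x5E, 0x628, 0x1B2), (0x5B, 0x1870, 0x1)))  # False
```

This is exactly why wall-clock completion times are misleading: the comparison is between LSNs, not timestamps.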

As you know I always like to prove things – so here’s my proof of what I just said. I’m going to use the AdventureWorks database to do this. First thing is to set it to the full recovery model (and take the first full backup to start full-recovery-model logging):


BACKUP DATABASE AdventureWorks TO DISK='C:\SQLskills\AdventureWorks.bck' WITH INIT;


Now I’m going to flush out the backup history tables in MSDB:

USE msdb;


EXEC sp_delete_backuphistory '10/6/07';


I’ve got a really contrived example that I’m going to use to show whether my transaction is wholly contained in the full backup. Using the HumanResources.Employee table, there’s a column VacationHours which I’m going to set to 0, and then I'll force all the changed pages to disk. This is my base state:

USE AdventureWorks;

UPDATE AdventureWorks.HumanResources.Employee SET VacationHours = 0;

-- Force the changed pages to disk
CHECKPOINT;




My contrived transaction is going to be a tight loop updating all the rows in the table, which will generate lots of transaction log, and it's timed to complete just before the backup completes (i.e. during step 4). In one connection I start the backup:

BACKUP DATABASE AdventureWorks TO DISK='C:\SQLskills\AdventureWorks.bck' WITH INIT;



and in another I start my contrived transaction, after starting the backup:




DECLARE @a INT, @b INT;

BEGIN TRAN;

SELECT @a = 1;
WHILE (@a < 6)
BEGIN
    SELECT @b = 1;
    WHILE (@b < 201)
    BEGIN
        UPDATE AdventureWorks.HumanResources.Employee SET VacationHours = @b;
        SELECT @b = @b + 1;
    END
    SELECT @a = @a + 1;
END

COMMIT TRAN;





It’s not pretty but it does the job. The backup finishes at 2007-10-05 17:42:38.983 and the transaction finishes at 2007-10-05 17:42:38.107 – before the backup finishes. Remember I set the VacationHours all to zero before running my transaction – let’s check the transaction did actually change them:

SELECT MAX (VacationHours) FROM AdventureWorks.HumanResources.Employee;


This returns 200 – which is what I’d expect. But is the transaction wholly contained in the backup? Let’s look at the backup history to find out the last LSN that was captured in the full backup:

SELECT Backup_Start_Date, Backup_Finish_Date, First_LSN, Last_LSN

FROM msdb.dbo.backupset WHERE database_name = 'AdventureWorks';


And the output we get is:

Backup_Start_Date       Backup_Finish_Date      First_LSN           Last_LSN
----------------------- ----------------------- ------------------- -------------------
2007-10-05 17:42:22.000 2007-10-05 17:42:38.000 86000000001600029   91000000625600001

The LSNs are in decimal, so we need to convert the three parts of the Last_LSN to hex so we can compare them to what’s in the log – giving 5B:1870:1. This is the LSN of the last log record that was backed up in the full backup. Now let’s take a look at the transaction log for AdventureWorks using the undocumented (but very well known) fn_dblog function – a fully composable alternative to the old DBCC LOG command.
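(As an aside, that decimal-to-hex conversion is easy to script. The packing below – rightmost five decimal digits as the slot, next ten as the log block, remainder as the VLF sequence number – is my assumption about how backupset stores the value, which matches the numbers above:)

```python
def backupset_lsn_to_hex(lsn: int) -> str:
    # Assumed packing: msdb.dbo.backupset stores an LSN as
    # <VLF sequence><log block: 10 digits><slot: 5 digits>
    slot = lsn % 100_000
    block = (lsn // 100_000) % 10_000_000_000
    vlf = lsn // 1_000_000_000_000_000
    # fn_dblog displays each part in hex
    return f"{vlf:X}:{block:X}:{slot:X}"

print(backupset_lsn_to_hex(91000000625600001))  # 5B:1870:1
```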

USE AdventureWorks;


SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName FROM fn_dblog (NULL, NULL);


The log record at that LSN is:

0000005b:00001870:0001  LOP_MODIFY_ROW     0000:00001338  HumanResources.Employee.PK_Employee_EmployeeID

This is obviously in the middle of my contrived transaction – showing that it isn’t all in the full backup. The end of the transaction isn’t until way later in the log:

0000005e:00000628:01b1  LOP_MODIFY_ROW     0000:00001338  HumanResources.Employee.PK_Employee_EmployeeID
0000005e:00000628:01b2  LOP_COMMIT_XACT    0000:00001338  NULL

Before I do anything else, I want to take a log backup to preserve my transaction:

BACKUP LOG AdventureWorks TO DISK='C:\SQLskills\AdventureWorks_Log.bck' WITH INIT;


Now I want to restore the full backup and really show that my transaction isn’t all in there:

USE master;

RESTORE DATABASE AdventureWorks FROM DISK='C:\SQLskills\AdventureWorks.bck' WITH REPLACE;

SELECT MAX (VacationHours) FROM AdventureWorks.HumanResources.Employee;


This returns 0. Clearly my transaction isn’t all in there, and the parts that are were rolled back during the restore. Now let’s do the same thing but using NORECOVERY for the restore of the full backup and also applying the log backup I took:



RESTORE DATABASE AdventureWorks FROM DISK='C:\SQLskills\AdventureWorks.bck' WITH NORECOVERY, REPLACE;

RESTORE LOG AdventureWorks FROM DISK='C:\SQLskills\AdventureWorks_Log.bck' WITH RECOVERY;


SELECT MAX (VacationHours) FROM AdventureWorks.HumanResources.Employee;


This time the SELECT returns 200. And now you can start playing around with fn_dblog if you didn’t know about it before. I’ll be posting more about Storage Engine internals that you can figure out from the transaction log in future.

3 thoughts on “Search Engine Q&A #6: Using fn_dblog to tell if a transaction is contained in a backup”

  1. Hi Paul,

    Thanks for the post. My question is not directly related to transactions in backup, but rather to the fn_dblog function.

    I was testing Katmai november CTP on my laptop playing with recovery models when I realized something:
    Why is the DELETE statement so hungry for transaction log space, compared to INSERT or UPDATE on the same amount of data being modified?

    I have a table which is 117 Mb and about 500000 rows in it:

    exec sp_spaceused PROPALOUER

    name       rows   reserved  data      index_size unused
    PROPALOUER 496382 117256 KB 116784 KB 344 KB     128 KB

    When I delete all the lines in a transaction, the log grows up to 617Mb, which you can break down in:
    – 177 Mb* of transaction log being used.
    – 440 Mb* of transaction log being reserved.

    (* these values are from the sys.dm_tran_database_transactions DMV)
    If I commit, the reserved space (440 Mb) is freed and the remaining log records (177 Mb) are kept until I backup the log (the database is in the full recovery model). If I rollback, nothing is freed until I backup the tlog.

    Now I try an INSERT or an UPDATE on the same amount of data and I have quite different figures: either for an INSERT or an UPDATE statement, the log grows up to 166Mb, and reserves only 58 Mb:

    – 166 Mb of transaction log being used.
    – 58 Mb of transaction log being reserved.

    I have used this ::fn_dblog() you provided to find out what kind of log record takes up so much space in the DELETE:

    select Operation,
        sum(convert(numeric, [Log Record Length])),
        sum(convert(numeric, [Log Reserve]))
    from ::fn_dblog('1070:6614:1', null)
    group by Operation order by 3 desc

    Operation          Log Record Length  Log Reserve
    LOP_BEGIN_XACT     3113308            234078240
    LOP_DELETE_ROWS    160886860          166271980
    LOP_MODIFY_HEADER  3692428            3954870
    LOP_MODIFY_ROW     2432280            2520090
    LOP_INSERT_ROWS    2312012            2037478
    LOP_COMMIT_XACT    1475708            1645982
    LOP_HOBT_DELTA     936640             1024450
    LOP_SET_BITS       1036784            241164
    LOP_END_CKPT       136                0
    LOP_COUNT_DELTA    3744               0
    LOP_LOCK_XACT      1751280            0
    LOP_XACT_CKPT      84                 0

    TOTAL              177642000          411774254

    And surprise, the LOP_BEGIN_XACT records have reserved more than 230 Mb. I know DELETE is supposed to store the before-images of the records it is deleting, but still, 440 Mb is a huge amount of data to me (4 times the size of the table + index). Do you know what this space is used for, especially for DELETE operations? I have noticed the same behaviour in YUKON.

    Thanks for your help !

