An XEvent a Day (23 of 31) – How it Works – Multiple Transaction Log Files

While working on yesterday’s blog post The Future – fn_dblog() No More? Tracking Transaction Log Activity in Denali, I did a quick Google search to find a specific blog post by Paul Randal to use as a reference, and in the results another blog post titled Investigating Multiple Transaction Log Files in SQL Server caught my eye, so I opened it in a new tab in IE and went about finishing my post.  It probably wouldn’t have gotten my attention if it hadn’t been on the SQLServerPedia site.  When I was finished I went back and read through the post, and I found that some of the information presented in it was incorrect, so I attempted to post a comment.  Not surprisingly, the blog had moderation controls turned on (I have them turned on here if you aren’t a SQLBlog member, so I don’t have a problem with that necessarily), and the comment didn’t show up on the site.

Interestingly enough, yesterday SQL Server Central had an editorial by Tim Mitchell titled Calling Out Bad Advice that discussed the problem of bad information on the internet and how to go about calling people out for publishing bad advice.  Let’s face it, people are human, at least I am, and mistakes happen from time to time, either through our own misunderstandings of our personal experiences and what we perceived from the information we had, or by sheer accident in some cases.  This afternoon I got an email back from the blog post author and we traded a few emails about the post, and in the end the author made changes to the original post, which have already been syndicated to SQLServerPedia, so to see the original you have to do something like look at the Google Cached Copy.  The author also posted a follow-up blog post today on the subject.

So why this blog post?  Well, even with the corrections, some of the conclusions are still wrong.

[Image: the numbered conclusions from the original blog post]

I am not trying to knock this guy for what he saw or perceived from the information he collected, but conclusions 2, 3 and 4 are still incorrect.  What’s great is that we can prove this by using Extended Events in SQL Server 2008, and that is the real purpose behind this blog post.  To set things up, we first need to create a database that roughly matches the available information shown in the pictures of the original blog post.  The database will have a single data file, initially sized at 128MB with a fixed autogrowth value of 64MB.  The database will have four log files that are initially sized at 1MB each; the first log file will have a fixed autogrowth value of 32MB, and the last three transaction log files will have fixed growth values of just 1MB.  Don’t comment on this configuration; I understand completely that there is no reason to create multiple log files on the same disk array (half the purpose behind this post is to show that there is no benefit to having multiple log files like this, which is also the intended purpose behind the original blog post), and I wouldn’t do this in production, but it works perfectly for the tests that we are about to run.  Once the database is created, we’ll switch to that database and dump out the transaction log VLF information using DBCC LOGINFO.

-- Create our Test database with
--        1 data file sized at 128MB with 64MB autogrowth
--        1 log file sized at 1MB with 32MB autogrowth
--        3 log files sized at 1MB with 1MB autogrowth
CREATE DATABASE [Test] ON  
PRIMARY 
    (    
        NAME = N'Test', 
        FILENAME = N'D:\SQLData\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test.mdf', 
        SIZE = 131072KB, 
        FILEGROWTH = 65536KB
    )
LOG ON 
    (    
        NAME = N'Test_log', 
        FILENAME = N'L:\SQLData\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_log.ldf', 
        SIZE = 1024KB, 
        MAXSIZE = 131072KB, 
        FILEGROWTH = 32768KB
    ), 
    (    
        NAME = N'Test_log2',     
        FILENAME = N'L:\SQLData\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_log2.ldf', 
        SIZE = 1024KB, 
        MAXSIZE = 131072KB, 
        FILEGROWTH = 1024KB 
    ), 
    ( 
        NAME = N'Test_log3', 
        FILENAME = N'L:\SQLData\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_log3.ldf', 
        SIZE = 1024KB, 
        MAXSIZE = 131072KB, 
        FILEGROWTH = 1024KB 
    ), 
    ( 
        NAME = N'Test_log4', 
        FILENAME = N'L:\SQLData\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_log4.ldf', 
        SIZE = 1024KB, 
        MAXSIZE = 131072KB, 
        FILEGROWTH = 1024KB 
    )
GO
-- Switch to our Test database
USE [Test]
GO
-- Dump the VLF Usage information
DBCC LOGINFO
GO

[Image: DBCC LOGINFO output showing the initial VLFs for the four log files, highlighted by file]

Each log file’s VLFs have been highlighted in a different color above to point out the separation of the four different files.  Note that the active VLF is in the first log file, FileId=2, as shown by Status=2.  With our test database created, we can now set out to create the Extended Events Event Session that will collect the information needed to show that:

    1. The transaction log is written to sequentially, starting with the first VLF in FileId=2.  When the last VLF in FileId=2 is full, the log begins writing log records to the first VLF of FileId=3; when the last VLF in FileId=3 is full, it moves on to the first VLF of FileId=4; when the last VLF in FileId=4 is full, it moves on to the first VLF of FileId=5; and when the last VLF in FileId=5 is full, the log circles back to the first VLF of FileId=2, which will still be active because we are going to work within a single explicit transaction for the duration of the test.  Since that file is full it has to be grown, and because it has a growth factor of 32MB it grows by 32MB and begins writing log records to the first VLF of the newly allocated space.  (A quick way to watch this happen with DBCC LOGINFO is shown in the sketch after this list.)
    2. The writes to the log files do not happen at the same time; they occur sequentially as the engine writes log records into each file, filling the VLF’s, and has to move to the next file or circle back to the beginning of the log when it reaches the end of the last log file.
    3. Whatever results were seen in the original post by opening the log file with the Apex tool were incorrect and misleading to the original poster, since log records were actually written to all of the files during the operation.  I have a couple of theories as to what could have happened that made the Apex tool show no log records, which I will discuss later in this post.
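
As an aside, if you want to watch the active VLF move from file to file while the test runs, a quick check like the following can be run periodically from another session.  This is my own addition and not part of the original test harness; it simply captures DBCC LOGINFO into a temp table (the column list matches the SQL Server 2008 output) and shows which physical log file currently contains active VLF’s.

-- Sketch (not part of the original test): capture DBCC LOGINFO into a temp table
-- so we can see which log file currently holds the active VLF's (Status = 2).
CREATE TABLE #VLFInfo
(FileId int, FileSize bigint, StartOffset bigint,
 FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(25,0))

INSERT INTO #VLFInfo
EXEC ('DBCC LOGINFO')

-- Total VLF count and active VLF count per physical log file
SELECT FileId,
       COUNT(*) AS TotalVLFs,
       SUM(CASE WHEN [Status] = 2 THEN 1 ELSE 0 END) AS ActiveVLFs
FROM #VLFInfo
GROUP BY FileId
ORDER BY FileId

DROP TABLE #VLFInfo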

What Events would we want to capture to look at what is happening in our transaction log files when running the same workload from the original post?  Since we are going to be executing a number of statements, the sqlserver.sql_statement_starting and sqlserver.sql_statement_completed Events seem like a good starting point, and since we want to know what statement was executing, we’ll add the sql_text Action to these Events.  Since we are dealing with the transaction log files, the sqlserver.databases_log_file_size_changed, sqlserver.databases_log_file_used_size_changed, sqlserver.databases_log_flush_wait, sqlserver.databases_log_flush, sqlserver.databases_log_growth, and sqlserver.databases_log_truncation Events should also be included so we can track what’s going on with our log specifically, and to ensure that these Events only fire for our test database, we’ll dynamically build in a Predicate on the sqlserver.database_id Predicate source using the output of DB_ID() inside the testing database.
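
If you want to confirm that these Events exist on your build before creating the Event Session, the Extended Events metadata can be queried directly.  The following check is my addition, not something from the original post; it just lists the log-related Events in the sqlserver package.

-- Sketch: list the log-related Events available in the sqlserver package
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND o.name LIKE 'databases_log%'
ORDER BY o.name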

Since the log is a file, we will also want to collect the file operation related Events such as sqlserver.flush_file_buffers, sqlserver.file_read, sqlserver.file_written, sqlserver.file_read_completed, and sqlserver.file_write_completed, and we’ll dynamically set a database_id Predicate on these Events as well.  If you recall back to Friday of last week, I talked about a number of trace flags that provide further information about Backup, Restore and file operations in my blog post A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 1).  One of those was Trace Flag 3004, which writes file zeroing information to the trace print output whenever a zeroing operation occurs.  I previously used this trace flag in my blog post Does the tempdb Log file get Zero Initialized at Startup?  Since the log files grew in the original tests, we can turn this trace flag on to track the file growths and use the sqlserver.trace_print Event to capture the file operation messages; to keep this Event focused on our tests only, we’ll dynamically set a Predicate for the current session_id using the sqlserver.session_id Predicate Source and the output of @@SPID.  Finally, since this is all happening inside of an explicit transaction, we’ll also capture the sqlserver.database_transaction_begin and sqlserver.database_transaction_end Events for the current database_id.
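
The Predicate sources being used here (sqlserver.database_id and sqlserver.session_id) can be verified with the same metadata views.  Again, this check is my addition and is not required for the demo.

-- Sketch: list the Predicate sources used to filter this Event Session
SELECT p.name AS package_name,
       o.name AS predicate_source,
       o.description
FROM sys.dm_xe_objects AS o
JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'pred_source'
  AND o.name IN ('database_id', 'session_id')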

We have quite a large list of Events associated with this Event Session, and to ensure that we can perform analysis over all of the Event data from our tests, we’ll use the package0.asynchronous_file_target to hold our Event information.  We’ll also increase our buffer memory from the default 4MB to 8MB, set the Event Session up with ALLOW_SINGLE_EVENT_LOSS, which does exactly what it sounds like it does, and to correlate cause and effect we’ll also set TRACK_CAUSALITY to ON for the session.

-- Create our Event Session dynamically
DECLARE @sqlcmd nvarchar(2000) = '
CREATE EVENT SESSION TransactionLogUsage
ON SERVER
--ADD EVENT sqlserver.sql_statement_starting
--( ACTION(sqlserver.sql_text)
--  WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
--ADD EVENT sqlserver.sql_statement_completed
--( ACTION(sqlserver.sql_text)
--  WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.databases_log_file_size_changed
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.databases_log_file_used_size_changed
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.databases_log_flush_wait
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.databases_log_flush
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.databases_log_growth
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.databases_log_truncation
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.flush_file_buffers
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.file_read
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.file_written
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.file_read_completed
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.file_write_completed
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.trace_print
(   WHERE (sqlserver.session_id = '+ cast(@@SPID as varchar(4))+')),
ADD EVENT sqlserver.database_transaction_begin
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.database_transaction_end
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+'))
ADD TARGET package0.asynchronous_file_target(
     SET filename=''C:\SQLBlog\TransactionLogUsage.xel'',
         metadatafile=''C:\SQLBlog\TransactionLogUsage.xem'')
WITH (MAX_MEMORY = 8MB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, TRACK_CAUSALITY = ON )'
EXEC (@sqlcmd)
GO
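
Once the Event Session has been created, the options that were set can be verified from the Event Session catalog views before it is started.  This quick check is my addition and isn’t needed for the tests; it just confirms the memory, retention, and causality settings.

-- Sketch: confirm the configured options for the new Event Session
SELECT name,
       max_memory,
       event_retention_mode_desc,
       track_causality
FROM sys.server_event_sessions
WHERE name = 'TransactionLogUsage'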

If you notice above, I have commented out the sqlserver.sql_statement_starting and sqlserver.sql_statement_completed Events in the Event Session.  It turns out that these two Events are not needed in the Event Session to prove the points being made in this blog post.  Including them pushes the time to process the 240K+ Events into the 10-15 minute range on my 16 core test server, so it’s not likely something that you are going to do on a laptop VM, but they were included in my initial Event Session for this, and I wanted to show the thought process I followed to get from A to B and ultimately C.

With our Event Session created, we can finish setting up our environment to run the actual tests.  To do this we’ll create a table named LogTable with two columns that are, as best as I can tell from the limited information provided about the test table, the same as the table used in the original post.  We’ll then CHECKPOINT the database to cause log truncation to occur (you did create the database in SIMPLE recovery, right?), turn on Trace Flag 3004 for our session, and then start the Event Session so that it collects the data from our Events during our test.
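
Since the CHECKPOINT only truncates the log when the database is actually in SIMPLE recovery, which depends on what it inherited from model, the quick check below (my addition, not part of the original setup) confirms the recovery model and sets it explicitly if needed.

-- Sketch: make sure the Test database really is in SIMPLE recovery
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'Test'
GO
-- Set it explicitly if it inherited FULL from model
ALTER DATABASE [Test] SET RECOVERY SIMPLE
GO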

-- Create our Test Table
CREATE TABLE LogTable (RowID decimal(10,4), Data char(1024))
GO
-- Checkpoint the database to truncate and clear the log.
CHECKPOINT
GO
-- Turn on Trace Flag 3004 so we can see file zeroing ops.
DBCC TRACEON(3004)
GO
-- Start the Event Session
ALTER EVENT SESSION TransactionLogUsage
ON SERVER
STATE=START
GO

With the Event Session started and all our setup work completed we can now run the test script that was used in the original post to generate our test workload.  When the tests complete, we’ll dump out our VLF information again with DBCC LOGINFO, then ROLLBACK the transaction, switch to master and DROP our test database and the Extended Events Session from the server since they are no longer needed.

-- Run our tests
SET NOCOUNT ON

DECLARE @cnt decimal(10,4)=0
DECLARE @rows int=0
BEGIN TRAN
WHILE 1=1
BEGIN
    INSERT INTO LogTable VALUES (ROUND((RAND()* 1000000),0), SPACE(1024))
    
    SELECT @rows+=1
        
    SELECT @cnt = (size * 1.0 * 8.0)/1024.0 
    FROM  Test.sys.database_files
    WHERE data_space_id = 0
    AND [FILE_ID]=5
    
    IF @cnt>1.0
            BREAK
END

SELECT @rows;
GO
-- Pull Log VLF usage again
DBCC LOGINFO
GO
-- Rollback our transaction
ROLLBACK
GO
USE master
GO
-- Kill any connection to Test database
ALTER DATABASE [Test] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- Drop the Test database
DROP DATABASE [Test]
GO
-- Drop the Event Session
DROP EVENT SESSION TransactionLogUsage
ON SERVER
GO

[Image: DBCC LOGINFO output after the test, showing VLFs written to and grown in all four log files]

There is a reason that I dump out the VLF information before performing the ROLLBACK of the transaction.  As long as the transaction remains active, the VLF’s containing the active transaction cannot be truncated and cleared, so in order to see the allocated VLF’s we still need the transaction to be active.  Once again, I have highlighted each of the individual log files separately, and from the DBCC LOGINFO output we can look at the FileId and Status columns and see that our transaction log wrote information into all four of the files, filling them, and then wrapped back to the first file, which had to be grown; each of the subsequent log files was also grown by the database engine.  However, if you look at the CreateLSN information for the growth portion of each log file, you will notice that each file has its own CreateLSN value for the second set of VLF’s, meaning that they were grown separately and at different times.  Still not convinced by DBCC LOGINFO?  Well, we have the data to validate this and prove it unequivocally, but before we can look at the data, we need to retrieve it from the asynchronous_file_target files and shred the XML using XQuery.

-- Create our Analysis Database
CREATE DATABASE TLogUsageTestResults
GO
-- Switch to our Analysis Database
USE [TLogUsageTestResults]
GO
-- Create intermediate table for raw event data
CREATE TABLE RawEventData
(Rowid int identity primary key, event_data xml)

-- Create final results table for parsed event data
CREATE TABLE TestResults
([Rowid] int primary key, [event_name] varchar(50), [package_name] varchar(50),
 [timestamp] datetime2, [count] bigint, [increment] bigint, [database_id] int, 
 [mode] nvarchar(4000), [file_handle] nvarchar(4000), [offset] bigint, 
 [file_id] int, [file_group_id] int, [path] nvarchar(4000), [duration] bigint, 
 [io_data] nvarchar(4000), [succeeded] nvarchar(4000), [sql_text] nvarchar(4000), 
 [trace_message] nvarchar(4000), [source_database_id] int, [object_id] int, 
 [object_type] int, [cpu] int, [reads] bigint, [writes] bigint, 
 [state] nvarchar(4000), [offset_end] int, [nest_level] int, 
 [activity_id] uniqueidentifier, [event_sequence] int )

-- Read the file data into the intermediate table
INSERT INTO RawEventData (event_data)
SELECT
    CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file('C:\SQLBlog\TransactionLogUsage*.xel', 
                                     'C:\SQLBlog\TransactionLogUsage*.xem', 
                                     null, null)

-- Query the Event data from the Target.
INSERT INTO TestResults
([Rowid], [event_name], [timestamp], [database_id], [count], [increment], 
 [mode], [file_handle], [offset], [file_id], [file_group_id], [path], 
 [duration], [io_data], [succeeded], [sql_text], [trace_message], [source_database_id], 
 [object_id], [object_type], [cpu], [reads], [writes], [state], [offset_end], 
 [nest_level], [activity_id], [event_sequence])

-- Fetch the Event Data from the Event Session Target
SELECT 
    RowID,
    event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    DATEADD(hh, 
            DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP), 
            event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
    COALESCE(event_data.value('(event/data[@name="database_id"]/value)[1]', 'int'), 
             event_data.value('(event/action[@name="database_id"]/value)[1]', 'int')) AS database_id,
    event_data.value('(event/data[@name="count"]/value)[1]', 'bigint') AS [count],
    event_data.value('(event/data[@name="increment"]/value)[1]', 'bigint') AS [increment],
    event_data.value('(event/data[@name="mode"]/text)[1]', 'nvarchar(4000)') AS [mode],
    event_data.value('(event/data[@name="file_handle"]/value)[1]', 'nvarchar(4000)') AS [file_handle],
    event_data.value('(event/data[@name="offset"]/value)[1]', 'bigint') AS [offset],
    event_data.value('(event/data[@name="file_id"]/value)[1]', 'int') AS [file_id],
    event_data.value('(event/data[@name="file_group_id"]/value)[1]', 'int') AS [file_group_id],
    event_data.value('(event/data[@name="path"]/value)[1]', 'nvarchar(4000)') AS [path],
    event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS [duration],
    event_data.value('(event/data[@name="io_data"]/value)[1]', 'nvarchar(4000)') AS [io_data],
    event_data.value('(event/data[@name="succeeded"]/value)[1]', 'nvarchar(4000)') AS [succeeded],
    event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(4000)') AS [sql_text],
    event_data.value('(event/data[@name="message"]/value)[1]', 'nvarchar(4000)') AS [trace_message],
    event_data.value('(event/data[@name="source_database_id"]/value)[1]', 'int') AS [source_database_id],
    event_data.value('(event/data[@name="object_id"]/value)[1]', 'int') AS [object_id],
    event_data.value('(event/data[@name="object_type"]/value)[1]', 'int') AS [object_type],
    event_data.value('(event/data[@name="cpu"]/value)[1]', 'int') AS [cpu],
    event_data.value('(event/data[@name="reads"]/value)[1]', 'bigint') AS [reads],
    event_data.value('(event/data[@name="writes"]/value)[1]', 'bigint') AS [writes],
    event_data.value('(event/data[@name="state"]/text)[1]', 'nvarchar(4000)') AS [state],
    event_data.value('(event/data[@name="offset_end"]/value)[1]', 'int') AS [offset_end],
    event_data.value('(event/data[@name="nest_level"]/value)[1]', 'int') AS [nest_level],
    CAST(SUBSTRING(event_data.value('(event/action[@name="attach_activity_id"]/value)[1]', 'varchar(50)'), 1, 36) AS uniqueidentifier) as activity_id,
    CAST(SUBSTRING(event_data.value('(event/action[@name="attach_activity_id"]/value)[1]', 'varchar(50)'), 38, 10) AS int) as event_sequence
FROM RawEventData
ORDER BY Rowid
GO

-- Return our results
SELECT * 
FROM TestResults
WHERE event_name NOT IN ('sql_statement_starting' , 'sql_statement_completed')
ORDER BY RowID

If you scroll through the results you can see the writes occurring sequentially through each of the log files, and while FileId=4 is being written to, the Database Engine begins the 32MB growth of FileId=2.  If we change our query to only focus on the file_write_completed, databases_log_growth, and trace_print Events, we can see this a little more easily.

-- Return our results
SELECT 
    Rowid, 
    event_name, 
    [timestamp], 
    [count], 
    database_id, 
    mode, 
    offset, 
    file_id, 
    duration, 
    trace_message 
FROM TestResults
WHERE event_name  IN ('file_write_completed' , 'databases_log_growth', 'trace_print')
ORDER BY RowID

[Image: Log Rollover from Log1 to Log2]
[Image: Log Rollover from Log2 to Log3]
[Image: Autogrow of Log1]
[Image: Log Rollover from Log3 to Log4]
[Image: Log Rollover from Log4 back to Log1]
[Image: Autogrow of Log2]
[Image: Autogrow of Log3]
[Image: Autogrow of Log4]


We can see that the first log file, FileID=2, grows before the log rollover from Log3, FileID=4, to Log4, FileID=5, occurs, making space available in the first log file for the rollover when FileID=5 becomes full.  Log records are written to all four of the log files before the log wraps back around to the first log file, debunking points two and four of the conclusion.  The timestamps of the Events show that the additional log files are written to serially and not at the same time, debunking point three of the conclusion.  The reason that only a fraction of the log records end up in the three additional log files is the difference in the autogrowth settings: the first log file grows 32MB at a time while the three additional log files grow only 1MB at a time.  If the first log file was set to grow at 1MB, the majority of the log records would not be in the first log file.
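
As one final sanity check (my addition, not part of the original analysis), the file_write_completed Events captured in TestResults can be aggregated per file to show how the writes were distributed across the four log files and when each file was first and last written to during the test.

-- Sketch: summarize the captured log writes per physical log file
SELECT [file_id],
       COUNT(*) AS write_events,
       MIN([timestamp]) AS first_write,
       MAX([timestamp]) AS last_write
FROM TestResults
WHERE event_name = 'file_write_completed'
  AND [file_id] BETWEEN 2 AND 5    -- the four transaction log files
GROUP BY [file_id]
ORDER BY [file_id]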

It’s been well documented that there is no performance benefit to having multiple log files in a database, and Paul Randal’s blog post, Importance of proper transaction log size management, was linked to in the original blog post that ultimately triggered this one.  The behavior demonstrated in this post isn’t a mystery; it’s documented in the Books Online (Transaction Log Physical Architecture), but sometimes empirical evidence like this helps solidify that fact.

