An XEvent a Day (31 of 31) – Event Session DDL Events

To close out this month’s series on Extended Events we’ll look at the DDL Events for the Event Session DDL operations, and how those can be used to track changes to Event Sessions and determine all of the possible outputs that could exist from an Extended Event Session.  One of my least favorite quirks about Extended Events is that there is no way to determine the Events and Actions that may exist inside a Target, except to parse all of the captured data.  Information about the Event Session does exist in the Session Definition Catalog Views and Active Session DMV’s, but as you change an Event Session’s Events and Actions while it is running, the information in these changes as well, so it is possible that a Target has Events and Actions that are not returned by the current information available about the Event Session.  This is where the DDL Events for the Event Session DDL operations can be useful, if the appropriate framework is deployed.

The DDL Events for Extended Events are not currently documented in the Books Online.  I only recently learned about them from Mike Wachal during a discussion about what I thought was missing from Extended Events.  This is simply an oversight in the documentation, one that Mike has stated will be fixed; it doesn’t mean that the DDL Events are subject to change without notice like other undocumented features of SQL Server.  We can find the DDL Events for Extended Events in the sys.event_notification_event_types catalog view.

SELECT type, type_name
FROM sys.event_notification_event_types
WHERE type_name LIKE '%SESSION%'


These can be used just like any other DDL Event to create a DDL Trigger or Event Notification that takes action when one of the DDL operations occurs.  We can use this to log the DDL to track our changes over time, and we can also use it to create a tracking table of the possible outputs from our Event Session, ensuring that we know what information it may have collected when we parse the Event data from the Targets.  We can also use this information to simplify the generation of our XQuery XPATH statements to parse the data from the Targets with a little extra work. 

In all of my servers, I have a standard database named sqladmin that I keep DBA related information and objects.  For the examples, I will create this database and use it in all the code.  If you have a different database, the scripts can easily be changed to create the objects in that database.  The first thing we’ll do is create our database, and two tables, one for tracking the DDL operations and the other for tracking all of the possible outputs for our Event Session.

USE sqladmin
GO
CREATE TABLE dbo.XEvents_DDLOperations
(   DDLEventData XML, 
    ProgramName NVARCHAR(256) DEFAULT(PROGRAM_NAME()))
GO
CREATE TABLE dbo.XEvents_SessionOutputs
(   EventSessionName NVARCHAR(256),
    EventName NVARCHAR(256),
    EventID INT,
    ColumnID INT,
    ColumnName NVARCHAR(256),
    NodeType NVARCHAR(10),
    DataType NVARCHAR(50),
    XMLLocation NVARCHAR(10),
    TypePrecidence INT)
GO

The XEvents_SessionOutputs table will have multiple rows for each Event Session defined on the server that track the EventName, the output ColumnName, the NodeType for the data element in the Event XML, the SQL DataType returned by the output, the XMLLocation for where the data of interest exists, and a TypePrecidence value that can be used when multiple Events return the same Data Element with different DataTypes, ensuring that we can pick the most compatible DataType for the output column.  The table also tracks the Event ID of the Event in the Event Session, and the Column ID of the output column, so that grouping and ordering can be performed during code generation from this table.

To get the SQL DataType that an output returns, we have to look at the output type_name in the Extended Events metadata for the output column or Action.  To make this easier to do and allow for code reuse, I create a view that maps the type_name in Extended Events to the corresponding SQL DataType.  Since Maps can be a type_name in Extended Events, the view queries the sys.dm_xe_map_values DMV and calculates the maximum length of the map_value column for each Map, and then uses the nvarchar datatype and rounds the length up to the next multiple of 10 (ok, it doesn’t actually round but that is the effect of the math operations).  For the actual Event data columns in the base payload, the type_name is transposed to the equivalent SQL DataType that is compatible with XQuery.
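To see what that length math does, here is the same expression as a short Python sketch (the function name is mine, purely for illustration): the longest map_value length is bumped up to the next multiple of 10, so the generated nvarchar() column always has a little headroom.

```python
def map_type_length(max_map_value_len: int) -> int:
    # Mirror of the T-SQL expression:
    #   MAX(LEN(mv.map_value)) - (MAX(LEN(mv.map_value)) % 10) + 10
    # Bumps the longest map_value length up to the next multiple of 10,
    # leaving headroom in the generated nvarchar() declaration.
    return max_map_value_len - (max_map_value_len % 10) + 10

# A longest map_value of 23 characters yields nvarchar(30);
# an exact multiple such as 30 still gets bumped to nvarchar(40).
print(map_type_length(23))  # 30
print(map_type_length(30))  # 40
```

Note that an exact multiple of 10 is still bumped up a step, which is harmless for sizing a string column.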

CREATE VIEW dbo.XETypeToSQLType
AS
    -- Maps: size an nvarchar column from the longest map_value
    SELECT 
        XETypeName = oc.type_name, 
        SQLTypeName = 'nvarchar('+CAST(MAX(LEN(mv.map_value))-(MAX(LEN(mv.map_value))%10) + 10 AS VARCHAR(4))+')',
        XMLLocation = 'text',
        TypePrecidence = 5
    FROM sys.dm_xe_object_columns oc
    LEFT JOIN sys.dm_xe_map_values mv
        ON oc.type_package_guid = mv.object_package_guid
            AND oc.type_name = mv.name
    WHERE oc.column_type = 'data'
      AND mv.name IS NOT NULL
    GROUP BY oc.type_name
    UNION
    -- Base types: transpose to the equivalent XQuery-compatible SQL type
    SELECT 
        XETypeName = type_name,
        SQLTypeName = CASE 
                            WHEN type_name IN ('int8', 'int16', 'int32', 'uint8', 
                                    'uint16', 'uint32', 'float32') 
                                THEN 'int'
                            WHEN type_name IN ('int64', 'uint64', 'float64')
                                THEN 'bigint'
                            WHEN type_name = 'boolean'
                                THEN 'nvarchar(10)' --true/false returned
                            WHEN type_name = 'guid'
                                THEN 'uniqueidentifier'
                            ELSE 'nvarchar(4000)'
                      END,
        XMLLocation = 'value',
        TypePrecidence = CASE 
                            WHEN type_name IN ('int8', 'int16', 'int32', 'uint8', 
                                    'uint16', 'uint32', 'float32') 
                                THEN 1
                            WHEN type_name IN ('int64', 'uint64', 'float64')
                                THEN 2
                            WHEN type_name = 'boolean'
                                THEN 3 --true/false returned
                            WHEN type_name = 'guid'
                                THEN 3
                            ELSE 5
                         END
    FROM sys.dm_xe_objects o
    WHERE object_type = 'type'
      AND type_name != 'null'
GO

Using this view, we can create another view that queries the Session Definition Catalog Views to retrieve the output columns for an Event Session in a format that matches our XEvents_SessionOutputs table.

CREATE VIEW dbo.XESession_OutputsFromDMVs
AS
        -- Find a list of all the possible output columns
        SELECT 
            ses.name AS EventSessionName,
            sese.name AS EventName,
            sese.event_id AS EventID,
            oc.column_id AS ColumnID,
            oc.name AS ColumnName,
            'data' AS NodeType,
            xetst.SQLTypeName AS DataType,
            xetst.XMLLocation,
            xetst.TypePrecidence
        FROM sys.server_event_sessions AS ses
        JOIN sys.server_event_session_events AS sese
            ON ses.event_session_id = sese.event_session_id
        JOIN sys.dm_xe_packages AS p 
            ON sese.package = p.name
        JOIN sys.dm_xe_object_columns AS oc 
            ON oc.object_name = sese.name
                AND oc.object_package_guid = p.guid
        JOIN XETypeToSQLType AS xetst
            ON oc.type_name = xetst.XETypeName
        WHERE oc.column_type = 'data'
        UNION
        -- Find a list of all the Actions on the Events
        SELECT 
            ses.name AS EventSessionName,
            sese.name AS EventName,
            sese.event_id AS EventID,
            999 AS ColumnID,
            sesa.name AS ColumnName,
            'action' AS NodeType,
            xetst.SQLTypeName AS DataType,
            xetst.XMLLocation,
            xetst.TypePrecidence
        FROM sys.server_event_sessions AS ses
        JOIN sys.server_event_session_events AS sese
            ON ses.event_session_id = sese.event_session_id
        JOIN sys.server_event_session_actions AS sesa
            ON ses.event_session_id = sesa.event_session_id
                AND sesa.event_id = sese.event_id
        JOIN sys.dm_xe_packages AS p
            ON sesa.package = p.name
        JOIN sys.dm_xe_objects AS o
            ON p.guid = o.package_guid
                AND o.name = sesa.name
        JOIN XETypeToSQLType AS xetst
            ON o.type_name = xetst.XETypeName
        WHERE o.object_type = 'action'
GO

We can then create a Server Level DDL Trigger for the DDL_EVENT_SESSION_EVENTS group that will log the DDL operation to the XEvents_DDLOperations table, and at the same time populate the output information in the XEvents_SessionOutputs table when an Event Session is created, add any new outputs when an Event Session is altered, and delete the Event Session information when an Event Session is dropped.  By adding new outputs when an Event Session is altered, we maintain a record of the original outputs, even if an Event was dropped from the Event Session.

CREATE TRIGGER DDLTrigger_TrackEventSessionDDL
ON ALL SERVER
FOR DDL_EVENT_SESSION_EVENTS
AS
BEGIN
    DECLARE @EventData XML = EVENTDATA();

    INSERT INTO sqladmin.dbo.XEvents_DDLOperations (DDLEventData)
    VALUES (@EventData);

    DECLARE @EventType NVARCHAR(256) = @EventData.value('(EVENT_INSTANCE/EventType)[1]', 'nvarchar(256)')
    DECLARE @SessionName NVARCHAR(256) = @EventData.value('(EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)')

    IF @EventType = 'CREATE_EVENT_SESSION'
        -- Log the outputs for the new Event Session
        INSERT INTO sqladmin.dbo.XEvents_SessionOutputs 
            (EventSessionName, EventName, EventID, ColumnID, ColumnName, NodeType,
             DataType, XMLLocation, TypePrecidence)
        SELECT EventSessionName, EventName, EventID, ColumnID, ColumnName, NodeType,
             DataType, XMLLocation, TypePrecidence
        FROM sqladmin.dbo.XESession_OutputsFromDMVs
        WHERE EventSessionName = @SessionName

    IF @EventType = 'ALTER_EVENT_SESSION'
        -- Add any new outputs to the Table
        INSERT INTO sqladmin.dbo.XEvents_SessionOutputs
            (EventSessionName, EventName, EventID, ColumnID, ColumnName, NodeType,
             DataType, XMLLocation, TypePrecidence)
        SELECT vdmv.EventSessionName, vdmv.EventName, vdmv.EventID, vdmv.ColumnID, vdmv.ColumnName, vdmv.NodeType,
             vdmv.DataType, vdmv.XMLLocation, vdmv.TypePrecidence
        FROM sqladmin.dbo.XESession_OutputsFromDMVs vdmv
        LEFT JOIN sqladmin.dbo.XEvents_SessionOutputs xeso
            ON vdmv.EventSessionName = xeso.EventSessionName
                AND vdmv.EventName = xeso.EventName
                AND vdmv.ColumnName = xeso.ColumnName
        WHERE vdmv.EventSessionName = @SessionName
          AND xeso.EventSessionName IS NULL

    IF @EventType = 'DROP_EVENT_SESSION'
        -- Delete the Output data for the Event Session
        DELETE sqladmin.dbo.XEvents_SessionOutputs
        WHERE EventSessionName = @SessionName
END
GO

If we recreate the TrackResourceWaits Event Session from yesterday’s post and then query the XEvents_SessionOutputs table, we can see the outputs that we can expect from that Event Session:

SELECT *
FROM sqladmin.dbo.XEvents_SessionOutputs
WHERE EventSessionName = 'TrackResourceWaits'


Using this information, we can also write a query to generate our XQuery statements for each of the outputs, as well as a column definition stub if we wanted to create a table to hold this information for analysis.

SELECT 
    'event_data.value(''(event/'+NodeType+'[@name="'+ColumnName+'"]/'+XMLLocation+')[1]'', '''+DataType+''') AS '+QUOTENAME(ColumnName)+',' AS XQuery,
    QUOTENAME(ColumnName)+' '+DataType+', ' AS ColumnDefinition
FROM
(   SELECT *,
        ROW_NUMBER() OVER (PARTITION BY ColumnName ORDER BY TypePrecidence DESC) AS partitionid
    FROM sqladmin.dbo.XEvents_SessionOutputs
) AS tab
WHERE EventSessionName = 'TrackResourceWaits'
  AND partitionid = 1
ORDER BY EventID, ColumnID
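As an illustration of what the generator emits, this small Python sketch (the function name is hypothetical, and QUOTENAME is approximated with literal square brackets) builds the same string the XQuery column produces for a single row; for the wait_type output of sqlos.wait_info it reproduces the exact line used in the ring_buffer parsing query later in this post.

```python
def xquery_line(node_type: str, column_name: str, xml_location: str, data_type: str) -> str:
    # Mirrors the string concatenation in the T-SQL code generator:
    # event/<node>[@name="<column>"]/<location>, typed via .value()
    return (f"event_data.value('(event/{node_type}[@name=\"{column_name}\"]"
            f"/{xml_location})[1]', '{data_type}') AS [{column_name}],")

# For the wait_type output of sqlos.wait_info the generator emits:
print(xquery_line('data', 'wait_type', 'text', 'nvarchar(4000)'))
# event_data.value('(event/data[@name="wait_type"]/text)[1]', 'nvarchar(4000)') AS [wait_type],
```

Map-backed outputs use the text node, while plain values use the value node, which is why XMLLocation is tracked per row.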


The information in the XQuery column can be copied and pasted into our TSQL Script for parsing the Event Data from the ring_buffer, pair_matching, or asynchronous_file_target Targets.  You could also use this as the basis for writing your own Extended Events Target Data code generator, similar to the one that Adam Machanic created a year ago.

That’s it for this month’s series on Extended Events.  You can find links to all of the posts on the round up post from December 1, An XEvent A Day: 31 days of Extended Events.  Hopefully it’s been informative, and you now have a better understanding of how Extended Events can be used inside of SQL Server 2008, 2008R2, and in Denali CTP1.

An XEvent a Day (30 of 31) – Tracking Session and Statement Level Waits

While attending PASS Summit this year, I got the opportunity to hang out with Brent Ozar (Blog|Twitter) one afternoon while he did some work for Yanni Robel (Blog|Twitter).  After looking at the wait stats information, Brent pointed out some potential problem points.  Based on that information I pulled up my code for my PASS session the next day on Wait Statistics and Extended Events, changed one of the demos so that the Event Session focused only on the potentially problematic waits that had been identified, and sent Brent the DDL so that he could give Extended Events a shot.  Within a few minutes, we were able to track the causes of those waits down to the statement level in a couple of stored procedures, and after some analysis Brent was able to offer Yanni some suggestions about how to reduce the waits.

Understanding how SQL Server waits to continue execution can be key to improving performance, since time spent waiting is time lost during the execution of a SQL Statement.  I love looking at wait statistics, and the chapter that I wrote for SQL Server 2008 Internals and Troubleshooting was SQL Server Waits and Extended Events.  Information about wait statistics has been available in SQL Server for a long time, and many of the vendors that develop monitoring applications for SQL Server have polling methods that query sys.dm_os_waiting_tasks or sys.sysprocesses to capture wait information about the tasks that are currently active in the system.  However, one of the shortcomings of a polling method is that it misses a lot of the wait information because it is only a point-in-time snapshot.  If the polling interval is every second, only the active waits that exist at that second are captured, and any waits that occur between polling intervals are missed.  This information is still accumulated in sys.dm_os_wait_stats, but it is impossible to track it back to the statement level from that DMV.

Extended Events offers us the ability to capture information about waits without missing any of the information.  Already in this series we’ve seen how to use Extended Events with the Bucketizer Target to count the occurrences of waits by type.  This isn’t really a great use of Extended Events since sys.dm_os_wait_stats counts the occurrences of the wait types already, and a differential analysis of the information contained in sys.dm_os_wait_stats can provide this information.  The purpose of that example was to discuss the bug that existed in the RTM release of SQL Server 2008 more than it was to provide a sensible use for the target.  However, if we wanted to break our waits down by database, we could bucket on the database_id, and begin to understand which database had the most waits associated with it, but not by the individual wait type.  To get to that level of information, we need to collect all of the waits and the associated information for them to do further analysis.

There are two Events in Extended Events associated with wait types: sqlos.wait_info and sqlos.wait_info_external.  Looking at the description of the Events in the Metadata DMV’s we can get an idea of when each Event will fire.

SELECT name, description 
FROM sys.dm_xe_objects
WHERE name LIKE 'wait_info%'


The sqlos.wait_info_external Event will fire for wait types that begin with PREEMPTIVE_ in the name, and the sqlos.wait_info Event will fire for the other wait types that occur on the server.  Glenn Alan Berry (Blog|Twitter) has a great script that queries sys.dm_os_wait_stats and filters out common waits that are not generally problematic.  You can find his script on his blog post Updated SQL 2005 and 2008 Diagnostic Queries.  You can use this script to identify the most common waits on a server, and then use that information to build an Event Session that captures the session and statement information for those individual wait types.  In SQL Server 2008, there are 484 wait types listed in sys.dm_os_wait_stats and there are 599 map_value’s for the wait_types Map in sys.dm_xe_map_values.  The reason these numbers differ is that the Map was created from the header file for the wait types, and there are padded values in the Map that don’t correspond to wait types that actually exist in SQL Server.  However, there are also a couple of map_values that don’t match the wait type name in sys.dm_os_wait_stats, the most notable being NETWORK_IO for the ASYNC_NETWORK_IO wait type.

To build the Event Session, we just need to query sys.dm_xe_map_values for our wait_types and use the map_key’s in the Predicate definition of the sqlos.wait_info or sqlos.wait_info_external Event as appropriate.  We can also do the same thing to build a generic Event Session that tracks the most common resource related waits.

SELECT map_key, map_value 
FROM sys.dm_xe_map_values
WHERE name = 'wait_types'
  AND ((map_key > 0 AND map_key < 22) -- LCK_ waits
            OR (map_key > 31 AND map_key < 38) -- LATCH_ waits
            OR (map_key > 47 AND map_key < 54) -- PAGELATCH_ waits
            OR (map_key > 63 AND map_key < 70) -- PAGEIOLATCH_ waits
            OR (map_key > 96 AND map_key < 100) -- IO (Disk/Network) waits
            OR (map_key = 107) -- RESOURCE_SEMAPHORE waits
            OR (map_key = 113) -- SOS_WORKER waits
            OR (map_key = 120) -- SOS_SCHEDULER_YIELD waits
            OR (map_key = 178) -- WRITELOG waits
            OR (map_key > 174 AND map_key < 177) -- FCB_REPLICA_ waits
            OR (map_key = 186) -- CMEMTHREAD waits
            OR (map_key = 187) -- CXPACKET waits
            OR (map_key = 207) -- TRACEWRITE waits
            OR (map_key = 269) -- RESOURCE_SEMAPHORE_MUTEX waits
            OR (map_key = 283) -- RESOURCE_SEMAPHORE_QUERY_COMPILE waits
            OR (map_key = 284)) -- RESOURCE_SEMAPHORE_SMALL_QUERY waits

Once we have the list of map_key’s defined, we can do a replace in SSMS and change map_key to wait_type and define the predicate for the sqlos.wait_info Event for our Event Session.

CREATE EVENT SESSION TrackResourceWaits
ON SERVER
ADD EVENT  sqlos.wait_info
(    -- Capture the database_id, session_id, plan_handle, and sql_text
     ACTION (sqlserver.database_id, sqlserver.session_id,
             sqlserver.plan_handle, sqlserver.sql_text)
     WHERE
        (opcode = 1 --End Events Only
            AND duration > 100 -- had to accumulate 100ms of time
            AND ((wait_type > 0 AND wait_type < 22) -- LCK_ waits
                    OR (wait_type > 31 AND wait_type < 38) -- LATCH_ waits
                    OR (wait_type > 47 AND wait_type < 54) -- PAGELATCH_ waits
                    OR (wait_type > 63 AND wait_type < 70) -- PAGEIOLATCH_ waits
                    OR (wait_type > 96 AND wait_type < 100) -- IO (Disk/Network) waits
                    OR (wait_type = 107) -- RESOURCE_SEMAPHORE waits
                    OR (wait_type = 113) -- SOS_WORKER waits
                    OR (wait_type = 120) -- SOS_SCHEDULER_YIELD waits
                    OR (wait_type = 178) -- WRITELOG waits
                    OR (wait_type > 174 AND wait_type < 177) -- FCB_REPLICA_ waits
                    OR (wait_type = 186) -- CMEMTHREAD waits
                    OR (wait_type = 187) -- CXPACKET waits
                    OR (wait_type = 207) -- TRACEWRITE waits
                    OR (wait_type = 269) -- RESOURCE_SEMAPHORE_MUTEX waits
                    OR (wait_type = 283) -- RESOURCE_SEMAPHORE_QUERY_COMPILE waits
                    OR (wait_type = 284))) -- RESOURCE_SEMAPHORE_SMALL_QUERY waits
)
ADD TARGET package0.ring_buffer(SET max_memory=4096)


Now that we have the Event Session defined, we can start it as needed to collect the resource wait information for our system.  The only concern with this Event Session is the Target being used.  If the Event Session is going to run for a long period of time, or if the waits on the server being monitored occur in large quantities, the Target should be changed from the ring_buffer to the asynchronous_file_target.  I configured this session to only collect waits that exceed 100ms in duration.  If you want waits that have shorter durations, this can easily be changed.  If you set the duration to be > 0, a lot of 1-5ms waits will be collected that aren’t generally interesting.  To query the wait information from this Event Session using the ring_buffer:

-- Extract the Event information from the Event Session 
SELECT 
    event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    DATEADD(hh, DATEDIFF(hh, GETUTCDATE(), GETDATE()), 
        event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
    COALESCE(event_data.value('(event/data[@name="database_id"]/value)[1]', 'int'), 
        event_data.value('(event/action[@name="database_id"]/value)[1]', 'int')) AS database_id,
    event_data.value('(event/action[@name="session_id"]/value)[1]', 'int') AS [session_id],
    event_data.value('(event/data[@name="wait_type"]/text)[1]', 'nvarchar(4000)') AS [wait_type],
    event_data.value('(event/data[@name="opcode"]/text)[1]', 'nvarchar(4000)') AS [opcode],
    event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS [duration],
    event_data.value('(event/data[@name="max_duration"]/value)[1]', 'bigint') AS [max_duration],
    event_data.value('(event/data[@name="total_duration"]/value)[1]', 'bigint') AS [total_duration],
    event_data.value('(event/data[@name="signal_duration"]/value)[1]', 'bigint') AS [signal_duration],
    event_data.value('(event/data[@name="completed_count"]/value)[1]', 'bigint') AS [completed_count],
    event_data.value('(event/action[@name="plan_handle"]/value)[1]', 'nvarchar(4000)') AS [plan_handle],
    event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(4000)') AS [sql_text]
FROM 
(    SELECT XEvent.query('.') AS event_data 
    FROM
    (    -- Cast the target_data to XML 
        SELECT CAST(target_data AS XML) AS TargetData 
        FROM sys.dm_xe_session_targets st 
        JOIN sys.dm_xe_sessions s 
            ON s.address = st.event_session_address 
        WHERE name = 'TrackResourceWaits' 
          AND target_name = 'ring_buffer'
    ) AS Data 
    -- Split out the Event Nodes 
    CROSS APPLY TargetData.nodes ('RingBufferTarget/event') AS XEventData (XEvent)   
) AS tab (event_data)


In the result set, you will notice that some of the wait_info Events do not have an associated session_id, database_id, plan_handle, or sql_text value.  Depending on where the wait actually occurs in the code, this information is not available to the firing Event; for example, NETWORK_IO waits generally do not successfully collect these Actions.

While collecting session and statement level waits like this is certainly interesting, there are some considerations that have to be made whenever you look at wait information like this.  The first consideration is that, while a specific session or statement waited on a resource, that doesn’t necessarily mean that the problem exists within that session or statement.  Take for example a query that has to wait 500ms on ASYNC_IO_COMPLETION waits.  At the same time that query is executing, there are 10 DSS queries running that scan large ranges of data from the database data files and generate a lot of IO activity.  Where exactly is the problem?  The root problem is a disk IO bottleneck, but it is not necessarily related to the query that is waiting on disk IO; it could be another query performing a Table Scan that is driving the heavy IO activity.

An XEvent a Day (29 of 31) – The Future – Looking at Database Startup in Denali

As I have said previously in this series, one of my favorite aspects of Extended Events is that it allows you to look at what is going on under the covers in SQL Server, at a level that has never previously been possible.  SQL Server Denali CTP1 includes a number of new Events that expand on the information that we can learn about how SQL Server operates and in today’s blog post we’ll look at how we can use those Events to look at what happens when a database starts up inside of SQL Server.

First let’s create our Event Session, which will collect a large number of Events related to the operations that occur when a database starts in SQL Server.  

DECLARE @sqlcmd nvarchar(max) =
'IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name = ''MonitorStartupLogRecovery'')
   DROP EVENT SESSION [MonitorStartupLogRecovery] ON SERVER;
CREATE EVENT SESSION [MonitorStartupLogRecovery]
ON SERVER
ADD EVENT sqlserver.database_started 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.databases_log_file_used_size_changed 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.databases_log_flush 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.databases_log_flush_wait 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.file_read 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.file_read_completed 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.file_write_completed 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.file_written 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_block_cache 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_block_consume 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_blocks_uncache 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_cache_buffer_refcounter_change 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_consumer_act 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_flush_complete 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_flush_requested 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_flush_start 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.log_single_record 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.new_log_interest_flip 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.redo_single_record 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.redo_target_set 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + ')),
ADD EVENT sqlserver.transaction_log 
(WHERE (database_id = ' + CAST(DB_ID('AdventureWorks2008R2') AS varchar(3)) + '))
ADD TARGET package0.asynchronous_file_target(
     SET filename=''C:\SQLBlog\MonitorStartupLogRecovery.xel'',
         metadatafile=''C:\SQLBlog\MonitorStartupLogRecovery.xem'')
WITH (STARTUP_STATE = ON);'
EXEC (@sqlcmd);

With the Event Session created, we can make some changes that write to our test database to see what happens when the database is recovered at startup.  We are going to make two changes to the database.  First we’ll begin a transaction and create a table with 10 rows of data in it without committing the transaction. 

USE AdventureWorks2008R2
GO
-- Begin a Transaction and leave it open
BEGIN TRANSACTION
-- Create the First Table
SELECT TOP 10 *
INTO TestTable
FROM Sales.SalesOrderDetail sod

Now in a New Query Window, we’ll create a second table with 10 rows of data without opening a transaction, and then force a dirty shutdown of the Database Engine.

USE AdventureWorks2008R2
GO
-- Create a Second Table
SELECT TOP 10 *
INTO TestTable2
FROM Sales.SalesOrderDetail sod
GO
-- Flush changes to data file
CHECKPOINT
GO
-- Force Shutdown the Engine
SHUTDOWN WITH NOWAIT

Once SHUTDOWN is issued, the process terminates, and the Database Engine will need to be restarted from the Services Snapin, the SQL Server Configuration Manager, or through SSMS.  When the Engine starts up, the Event Session will become active, and the Events will be logged to the package0.asynchronous_file_target for analysis.  Once recovery completes, we can drop the Event Session from the Server, so that the buffers flush, and we can then begin our analysis of the collected information.

USE tempdb

-- Read the Raw Event data into a table
SELECT CAST(event_data AS XML) AS event_data
INTO TargetEvents
FROM sys.fn_xe_file_target_read_file('C:\SQLBlog\MonitorStartupLogRecovery*.xel', 'C:\SQLBlog\MonitorStartupLogRecovery*.xem', null, null)

-- Fetch the Event Data from the Raw Event Data into another table
SELECT 
    event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    DATEADD(hh, DATEDIFF(hh, GETUTCDATE(), GETDATE()), 
            event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
    COALESCE(event_data.value('(event/data[@name="database_id"]/value)[1]', 'int'), 
             event_data.value('(event/action[@name="database_id"]/value)[1]', 'int')) AS database_id,
    event_data.value('(event/data[@name="count"]/value)[1]', 'bigint') AS [count],
    event_data.value('(event/data[@name="start_log_block_id"]/value)[1]', 'bigint') AS [start_log_block_id],
    event_data.value('(event/data[@name="is_read_ahead"]/value)[1]', 'nvarchar(4000)') AS [is_read_ahead],
    event_data.value('(event/data[@name="private_consumer_id"]/value)[1]', 'bigint') AS [private_consumer_id],
    event_data.value('(event/data[@name="mode"]/text)[1]', 'nvarchar(4000)') AS [mode],
    event_data.value('(event/data[@name="file_handle"]/value)[1]', 'nvarchar(4000)') AS [file_handle],
    event_data.value('(event/data[@name="offset"]/value)[1]', 'bigint') AS [offset],
    event_data.value('(event/data[@name="file_id"]/value)[1]', 'int') AS [file_id],
    event_data.value('(event/data[@name="filegroup_id"]/value)[1]', 'int') AS [filegroup_id],
    event_data.value('(event/data[@name="size"]/value)[1]', 'bigint') AS [size],
    event_data.value('(event/data[@name="path"]/value)[1]', 'nvarchar(4000)') AS [path],
    event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS [duration],
    event_data.value('(event/data[@name="io_data"]/value)[1]', 'nvarchar(4000)') AS [io_data],
    event_data.value('(event/data[@name="resource_type"]/text)[1]', 'nvarchar(4000)') AS [resource_type],
    event_data.value('(event/data[@name="owner_type"]/text)[1]', 'nvarchar(4000)') AS [owner_type],
    event_data.value('(event/data[@name="transaction_id"]/value)[1]', 'bigint') AS [transaction_id],
    event_data.value('(event/data[@name="lockspace_workspace_id"]/value)[1]', 'nvarchar(4000)') AS [lockspace_workspace_id],
    event_data.value('(event/data[@name="lockspace_sub_id"]/value)[1]', 'int') AS [lockspace_sub_id],
    event_data.value('(event/data[@name="lockspace_nest_id"]/value)[1]', 'int') AS [lockspace_nest_id],
    event_data.value('(event/data[@name="resource_0"]/value)[1]', 'int') AS [resource_0],
    event_data.value('(event/data[@name="resource_1"]/value)[1]', 'int') AS [resource_1],
    event_data.value('(event/data[@name="resource_2"]/value)[1]', 'int') AS [resource_2],
    event_data.value('(event/data[@name="object_id"]/value)[1]', 'int') AS [object_id],
    event_data.value('(event/data[@name="associated_object_id"]/value)[1]', 'bigint') AS [associated_object_id],
    event_data.value('(event/data[@name="resource_description"]/value)[1]', 'nvarchar(4000)') AS [resource_description],
    event_data.value('(event/data[@name="database_name"]/value)[1]', 'nvarchar(4000)') AS [database_name],
    event_data.value('(event/data[@name="log_block_id"]/value)[1]', 'bigint') AS [log_block_id],
    event_data.value('(event/data[@name="log_block_size"]/value)[1]', 'int') AS [log_block_size],
    event_data.value('(event/data[@name="from_disk"]/value)[1]', 'nvarchar(4000)') AS [from_disk],
    event_data.value('(event/data[@name="incomplete"]/value)[1]', 'nvarchar(4000)') AS [incomplete],
    event_data.value('(event/data[@name="cache_buffer_pointer"]/value)[1]', 'nvarchar(4000)') AS [cache_buffer_pointer],
    event_data.value('(event/data[@name="consumer_id"]/value)[1]', 'bigint') AS [consumer_id],
    event_data.value('(event/data[@name="old_weight"]/value)[1]', 'int') AS [old_weight],
    event_data.value('(event/data[@name="new_weight"]/value)[1]', 'int') AS [new_weight],
    event_data.value('(event/data[@name="new_position"]/value)[1]', 'int') AS [new_position],
    event_data.value('(event/data[@name="last_log_block_id"]/value)[1]', 'bigint') AS [last_log_block_id],
    event_data.value('(event/data[@name="weight"]/value)[1]', 'int') AS [weight],
    event_data.value('(event/data[@name="address"]/value)[1]', 'nvarchar(4000)') AS [address],
    event_data.value('(event/data[@name="type"]/text)[1]', 'nvarchar(4000)') AS [type],
    event_data.value('(event/data[@name="current_count"]/value)[1]', 'int') AS [current_count],
    event_data.value('(event/data[@name="change_type"]/value)[1]', 'int') AS [change_type],
    event_data.value('(event/data[@name="activity_id"]/value)[1]', 'int') AS [activity_id],
    event_data.value('(event/data[@name="write_size"]/value)[1]', 'int') AS [write_size],
    event_data.value('(event/data[@name="rows"]/value)[1]', 'int') AS [rows],
    event_data.value('(event/data[@name="pending_writes"]/value)[1]', 'int') AS [pending_writes],
    event_data.value('(event/data[@name="pending_bytes"]/value)[1]', 'int') AS [pending_bytes],
    event_data.value('(event/data[@name="reason"]/text)[1]', 'nvarchar(4000)') AS [reason],
    event_data.value('(event/data[@name="waiters"]/value)[1]', 'int') AS [waiters],
    event_data.value('(event/data[@name="error"]/value)[1]', 'int') AS [error],
    event_data.value('(event/data[@name="slot_id"]/value)[1]', 'int') AS [slot_id],
    event_data.value('(event/data[@name="used_size"]/value)[1]', 'int') AS [used_size],
    event_data.value('(event/data[@name="reservation_size"]/value)[1]', 'bigint') AS [reservation_size],
    event_data.value('(event/data[@name="log_op_id"]/value)[1]', 'int') AS [log_op_id],
    event_data.value('(event/data[@name="log_op_name"]/value)[1]', 'nvarchar(4000)') AS [log_op_name],
    event_data.value('(event/data[@name="interest"]/value)[1]', 'nvarchar(4000)') AS [interest],
    event_data.value('(event/data[@name="cache_type"]/value)[1]', 'int') AS [cache_type],
    event_data.value('(event/data[@name="keys"]/value)[1]', 'nvarchar(4000)') AS [keys],
    event_data.value('(event/data[@name="stop_mark"]/value)[1]', 'nvarchar(4000)') AS [stop_mark],
    event_data.value('(event/data[@name="operation"]/text)[1]', 'nvarchar(4000)') AS [operation],
    event_data.value('(event/data[@name="success"]/value)[1]', 'nvarchar(4000)') AS [success],
    event_data.value('(event/data[@name="index_id"]/value)[1]', 'int') AS [index_id],
    event_data.value('(event/data[@name="log_record_size"]/value)[1]', 'int') AS [log_record_size],
    event_data.value('(event/data[@name="context"]/text)[1]', 'nvarchar(4000)') AS [context],
    event_data.value('(event/data[@name="replication_command"]/value)[1]', 'int') AS [replication_command],
    event_data.value('(event/data[@name="transaction_start_time"]/value)[1]', 'nvarchar(4000)') AS [transaction_start_time]
INTO Results
FROM TargetEvents

Now we can begin to analyze the information that we collected by querying the Results table.  Looking at the Results as a whole, we can see the database being opened by reading the first page of the database, then the database boot page (page_id=9), and page 32 of the primary data file.  Then the engine scans each of the VLFs of the transaction log.  We can tell that the log reads are scans of the VLFs by looking at the DBCC LOGINFO information for the database and comparing the file_read offsets from the Event Session to the StartOffset of each of the VLFs in the DBCC LOGINFO output.
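As a rough sketch of that comparison, the DBCC LOGINFO output can be captured into a temporary table and joined against the file_read offsets in the Results table.  The column list below matches the SQL Server 2008 DBCC LOGINFO result set, and the filter assumes file_id = 2 is the transaction log file; adjust both for your environment:

```sql
-- Capture the DBCC LOGINFO output into a temp table
-- (column layout assumes SQL Server 2008)
CREATE TABLE #loginfo
(FileId int, FileSize bigint, StartOffset bigint,
 FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(25,0));

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

-- Match each log file_read Event to the VLF whose StartOffset it hit
SELECT r.[timestamp], r.offset, r.size, li.FSeqNo, li.StartOffset
FROM Results AS r
JOIN #loginfo AS li
    ON r.offset = li.StartOffset
WHERE r.event_name = 'file_read'
  AND r.file_id = 2    -- assumed log file_id
ORDER BY r.[timestamp];
```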



SELECT event_name, timestamp, database_id, file_id, mode, offset, 
    CASE WHEN file_id = 1 THEN offset/8192 ELSE NULL END AS page_id, size, 
    log_block_id, log_block_size, start_log_block_id, last_log_block_id, 
    from_disk, consumer_id, activity_id, log_op_id, log_op_name, change_type, 
    operation, object_id, index_id, log_record_size, slot_id, used_size, 
    reservation_size, write_size, rows, pending_writes, pending_bytes, 
    context, waiters
FROM Results


After the startup scan of the VLFs, 120K of information is read from the log file, and the log buffers begin to be consumed to determine the redo start point for recovery.  We can filter our Event data to remove Events that, while interesting, are not necessary for our analysis at the moment, and to reduce the number of columns returned from the data set.

SELECT event_name, timestamp, file_id, mode, offset, size, log_block_id, 
    COALESCE(log_op_name, operation) AS [operation], slot_id, 
    object_id, index_id, log_record_size, context, write_size, rows
FROM Results
WHERE event_name NOT IN ('log_consumer_act', 'log_single_record', 'log_cache_buffer_refcounter_change', 'file_read') 


With the filtered results, we can see the log reads into cache and the setting of the redo target.  If we scroll down further, we can get a better picture of what is happening.


The last active log block was consumed, and the redo target was set at that log_block_id.  The log is then reread starting at offset 318976, the blocks are cached, and the redo operations begin against the database.  Scrolling further through the results, we can see that the redo operations continue as the log blocks increase, up to the Checkpoint operation that was executed immediately before the Shutdown of the instance occurred, at the log block that was originally set as the redo target.


At this point the data file begins to be read so that the undo operations can be performed before making the database available.
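To isolate those data file reads, a simple filter over the Results table works; this sketch assumes file_id = 1 is the primary data file and an 8KB page size for the page_id calculation, matching the earlier query:

```sql
-- Data file reads during the undo phase of recovery
SELECT event_name, timestamp, file_id, offset, 
    offset/8192 AS page_id,    -- assumes 8KB pages
    size
FROM Results
WHERE event_name = 'file_read'
  AND file_id = 1    -- assumed primary data file_id
ORDER BY timestamp;
```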


After the undo completes, the database_started Event is raised and the database becomes available for use.


To validate the changes we see occurring after the redo operations and before the database_started Event, we can set the database OFFLINE, recreate our Event Session, bring the database back ONLINE, and then compare the Events logged when no changes have occurred against our original Events from when known changes had occurred.  I am not going to do that in this blog post, but will instead leave it for the reader to investigate on their own.
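As a starting point for that exercise, the baseline capture might look like the following sketch.  The database name and Event Session name here are placeholders, and the session definition is assumed to be the one used earlier in this post:

```sql
-- Take the database offline so the next ONLINE triggers recovery
-- with no intervening changes to redo or undo
ALTER DATABASE [TestDatabase] SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Restart the Event Session so its Targets contain only the recovery Events
ALTER EVENT SESSION [DatabaseRecovery] ON SERVER STATE = STOP;
ALTER EVENT SESSION [DatabaseRecovery] ON SERVER STATE = START;

-- Bring the database back online and let recovery run with no changes pending
ALTER DATABASE [TestDatabase] SET ONLINE;
```

Comparing the Events captured by this run against the original capture should make the undo-phase activity stand out.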