Parallel Maintenance Tasks with Service Broker

I’ve been doing a bit of work on Service Broker while building a series of new Pluralsight courses, and one of the things I recently found a good use for it was implementing parallel maintenance tasks, using multiple threads, on a client system with an 8TB VLDB.  The idea for this implementation came out of the fact that their maintenance window keeps getting shorter as their workload grows, while the size of their database keeps getting larger.  One of the things I noticed while doing a monthly health check of their server was that during the maintenance window their hardware is nowhere close to being fully utilized; plenty of resources sit idle because the maintenance tasks execute serially.  This specific client uses Ola Hallengren’s maintenance procedures, which we recommend to all of our clients, so I started to think about ways I could make things run in parallel to get better utilization of the hardware during the maintenance window and hopefully reduce the time required to perform the day-to-day maintenance of their database.

I wish I could say the first thing I thought of was Service Broker, but it wasn’t.  I toyed around with the idea of a custom C# app that would run Ola’s scripts, log the output commands to the CommandLog table in master, and then use multiple threads to run each of the individual tasks; something similar using PowerShell instead of a compiled app; and even multiple Agent jobs breaking things up with different parameters for IndexOptimize.  It turns out these were all over-engineering a very basic problem that Service Broker is highly suited to solving, and it only took a little trial and error to create a very robust solution that runs index and statistics maintenance tasks in parallel in SQL Server.

Setting up Service Broker Components

I chose to implement the Service Broker components for this configuration in the msdb database for a few reasons.  First, it is already enabled for Service Broker, so I don’t have to worry about that; second, it already has TRUSTWORTHY ON, which is required by Agent; and third, it is owned by sa, so an activation stored procedure marked with EXECUTE AS OWNER runs as a full sysadmin on the server, allowing it to run any maintenance command generated by Ola’s scripts.  We need to configure some basic objects for Service Broker in the database: a message type, a contract, a queue, and a service.
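
If you want to verify those prerequisites before creating anything, a quick sanity check against sys.databases shows all three (a minimal sketch using standard catalog columns):

SELECT name,
	is_broker_enabled,                      -- must be 1 for Service Broker to deliver messages
	is_trustworthy_on,                      -- msdb ships with TRUSTWORTHY ON
	SUSER_SNAME(owner_sid) AS owner_name    -- should be sa for EXECUTE AS OWNER to run as sysadmin
FROM sys.databases
WHERE name = N'msdb';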

USE msdb
GO
-- Create the message type
CREATE MESSAGE TYPE
[OlaHallengrenMaintenanceTaskMessage]
VALIDATION = WELL_FORMED_XML;
GO

-- Create the contract
CREATE CONTRACT [OlaHallengrenMaintenanceTaskContract]
([OlaHallengrenMaintenanceTaskMessage]
SENT BY INITIATOR);
GO

-- Create the target queue and service
CREATE QUEUE OlaHallengrenMaintenanceTaskQueue;
GO

CREATE SERVICE
[OlaHallengrenMaintenanceTaskService]
ON QUEUE OlaHallengrenMaintenanceTaskQueue
([OlaHallengrenMaintenanceTaskContract]);
GO
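
Before moving on, you can confirm that the queue and service were created (and, once activation is configured later, check its settings) by querying the Service Broker catalog views; this is just a verification sketch:

SELECT q.name AS queue_name,
	q.is_activation_enabled,
	q.max_readers,
	s.name AS service_name
FROM msdb.sys.service_queues AS q
JOIN msdb.sys.services AS s
	ON s.service_queue_id = q.object_id
WHERE q.name = N'OlaHallengrenMaintenanceTaskQueue';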

Building an Activation Procedure to Automate Processing

With that set up, we now need to build an activation stored procedure to process the messages from the queue and perform the operations generated by Ola’s scripts.  At first I thought this would be pretty straightforward, but it actually required a fair amount of additional logic to prevent poison messages and queue deactivation when multiple online index rebuilds are attempted on the same table.  When this occurs the engine raises Error 1912:

Could not proceed with index DDL operation on %S_MSG ‘%.*ls’ because it conflicts with another concurrent operation that is already in progress on the object. The concurrent operation could be an online index operation on the same object or another concurrent operation that moves index pages like DBCC SHRINKFILE.

which causes the XACT_STATE() of the operation to become –1, which only allows the transaction processing the command to be rolled back, leading to a poison message within Service Broker.  To mitigate that, I added a check against sys.dm_tran_locks for any ALTER_INDEX command to identify whether a conflicting lock is currently being held; if it is, we simply requeue the message and increment the retry count held within the message, using a WAITFOR to incrementally back off each time the command encounters a conflicting lock.  Since Ola’s scripts log the command type, database name, object name, and command to be executed to the CommandLog table in the master database, all we need to queue in our message is the ID of the command to be processed by Service Broker; we can get the rest of the pertinent information from the CommandLog table directly.  As part of each task’s execution, the activation stored procedure also updates the StartTime and EndTime of the operation for historical trending of execution durations.

IF OBJECT_ID('OlaHallengrenMaintenanceTaskQueue_ActivationProcedure') IS NOT NULL
BEGIN
	DROP PROCEDURE OlaHallengrenMaintenanceTaskQueue_ActivationProcedure;
END
GO

CREATE PROCEDURE OlaHallengrenMaintenanceTaskQueue_ActivationProcedure
AS

  DECLARE @conversation_handle UNIQUEIDENTIFIER;
  DECLARE @message_body XML;
  DECLARE @message_type_name SYSNAME;
  DECLARE @Command NVARCHAR(MAX);
  DECLARE @ID INT;
  DECLARE @DBName SYSNAME;
  DECLARE @ObjectName SYSNAME;
  DECLARE @CommandType NVARCHAR(60);
  DECLARE @Retry INT;
  DECLARE @FQN NVARCHAR(400);

  WHILE (1=1)
  BEGIN

    BEGIN TRANSACTION;

    WAITFOR
    ( RECEIVE TOP(1)
        @conversation_handle = conversation_handle,
        @message_body = message_body,
        @message_type_name = message_type_name
      FROM OlaHallengrenMaintenanceTaskQueue
    ), TIMEOUT 5000;

    IF (@@ROWCOUNT = 0)
    BEGIN
      ROLLBACK TRANSACTION;
      BREAK;
    END

    IF @message_type_name = N'OlaHallengrenMaintenanceTaskMessage'
    BEGIN
		SELECT @ID = @message_body.value('(CommandLogID)[1]', 'int'),
				@Retry = ISNULL(@message_body.value('(CommandLogID/@retry)[1]', 'int'), 0);

	   
		SELECT @Command = Command, 
				@ObjectName = ObjectName, 
				@DBName = DatabaseName,
				@FQN = QUOTENAME(DatabaseName) + '.' + QUOTENAME(SchemaName) + '.' + QUOTENAME(ObjectName),
				@CommandType = CommandType
		FROM master.dbo.CommandLog
		WHERE ID = @ID;

		-- For index rebuilds, check whether a conflicting operation is already running and, if so, requeue the request after waiting.
		IF @CommandType = 'ALTER_INDEX'
		BEGIN

			-- Check if we have an incompatible lock that would lead to a failed execution.
			-- Note: request_mode in sys.dm_tran_locks uses 'Sch-M'/'Sch-S', not wait type names.
			IF EXISTS (SELECT 1
						FROM sys.dm_tran_locks AS tl
						WHERE (tl.request_mode = 'Sch-M' OR 
							-- Concurrent maintenance task doing UpdateStats?
							(tl.request_mode = 'Sch-S' AND EXISTS (SELECT 1 FROM sys.dm_exec_sessions AS s WHERE s.is_user_process = 1 AND tl.request_session_id = s.session_id)))
						  AND tl.resource_associated_entity_id = OBJECT_ID(@FQN)
						  AND tl.resource_database_id = DB_ID(@DBName) )
			BEGIN
				-- Wait for 5 seconds times the number of retries to do an incremental backoff.
				-- Under sustained conflicts this backs the queue readers off toward serial execution of the remaining tasks.
				DECLARE @Delay NVARCHAR(8) = CONVERT(NVARCHAR(8), DATEADD(SECOND, @Retry * 5, CAST('00:00:00' AS TIME)), 108);
				WAITFOR DELAY @Delay;

				-- Increment the retry count in the message
				SELECT @message_body = N'<CommandLogID retry="' + CAST(@Retry + 1 AS NVARCHAR(10)) + N'">' + CAST(@ID AS NVARCHAR(10)) + N'</CommandLogID>';


				-- Send the message back to the queue for later processing
				;SEND ON CONVERSATION @conversation_handle
						MESSAGE TYPE [OlaHallengrenMaintenanceTaskMessage]
						(@message_body);

				GOTO SkipThisRun
			END
		END

		UPDATE master.dbo.CommandLog
		SET StartTime = GETDATE()
		WHERE ID = @ID;

		BEGIN TRY 
			EXECUTE(@Command);

			UPDATE master.dbo.CommandLog
			SET EndTime = GETDATE()
			WHERE ID = @ID;

		END TRY
		BEGIN CATCH

			UPDATE master.dbo.CommandLog
			SET EndTime = GETDATE(),
				ErrorMessage = ERROR_MESSAGE(),
				ErrorNumber = ERROR_NUMBER()
			WHERE ID = @ID;

		END CATCH

		END CONVERSATION @conversation_handle;
    END

	-- If end dialog message, end the dialog
    ELSE IF @message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    BEGIN
       END CONVERSATION @conversation_handle;
    END

	-- If error message, log and end conversation
    ELSE IF @message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
    BEGIN
		DECLARE @error INT;
		DECLARE @description NVARCHAR(4000);

		-- Pull the error code and description from the doc
		WITH XMLNAMESPACES ('http://schemas.microsoft.com/SQL/ServiceBroker/Error' AS ssb)
		SELECT
			@error = @message_body.value('(//ssb:Error/ssb:Code)[1]', 'INT'),
			@description = @message_body.value('(//ssb:Error/ssb:Description)[1]', 'NVARCHAR(4000)');
		
		RAISERROR(N'Received error Code:%i Description:"%s"', 16, 1, @error, @description) WITH LOG;

		-- Now that we handled the error logging cleanup
		END CONVERSATION @conversation_handle;
	END
 
 SkipThisRun:   
    COMMIT TRANSACTION;

  END
GO

Enabling Parallel Queue Activation

The last thing we need to do is bind our stored procedure to the queue for automatic activation, and set MAX_QUEUE_READERS to the number of concurrent parallel tasks we want executing on the system at any given point in time.  I spent some time testing different values for MAX_QUEUE_READERS and found that 5 was generally sufficient to provide significant reductions in runtime without creating contention between the tasks being executed.

-- Alter the target queue to specify internal activation
ALTER QUEUE OlaHallengrenMaintenanceTaskQueue
WITH ACTIVATION
( STATUS = ON,
PROCEDURE_NAME = OlaHallengrenMaintenanceTaskQueue_ActivationProcedure,
MAX_QUEUE_READERS = 5,
EXECUTE AS SELF
);
GO
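
Once activation is on, you can watch the work drain by selecting from the queue directly; a Service Broker queue supports a plain SELECT, so this monitoring sketch shows whatever messages are still waiting to be processed:

SELECT conversation_handle,
	message_type_name,
	CAST(message_body AS XML) AS message_body
FROM msdb.dbo.OlaHallengrenMaintenanceTaskQueue WITH (NOLOCK);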

Now all we have to do is set up a job to feed the tasks into the queue as messages, and Service Broker will automatically begin processing the tasks in parallel, up to the configured MAX_QUEUE_READERS value.  To do this, I used Ola’s scripts to log the commands to the CommandLog table without executing them, and then used a cursor with a ROW_NUMBER() definition to split the work up across different tables instead of queuing the commands in the order generated, sending each ID as a message to Service Broker to process.

DECLARE @MaxID INT
SELECT @MaxID = MAX(ID)
FROM master.dbo.CommandLog;

SELECT @MaxID = ISNULL(@MaxID, 0)

-- Load new tasks into the Command Log
EXECUTE master.dbo.IndexOptimize
@Databases = 'AdventureWorks2014',
@FragmentationLow = 'INDEX_REBUILD_OFFLINE',
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@UpdateStatistics = 'ALL',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@LogToTable = 'Y',
@Execute = 'N';

DECLARE @NewMaxID INT
SELECT @NewMaxID = MAX(ID)
FROM master.dbo.CommandLog;

USE msdb;

DECLARE @id int;

-- Don't submit commands in the exact generated order, or parallel processing
-- of indexes/stats on the same object will occur and could cause blocking
DECLARE command_cursor CURSOR FOR
SELECT ID
FROM (
SELECT ROW_NUMBER() OVER
(PARTITION BY ObjectName
ORDER BY COALESCE(IndexName, StatisticsName), CommandType) AS Ranking,
ID
FROM master.dbo.CommandLog
WHERE ID > @MaxID AND ID <= @NewMaxID)  AS t
ORDER BY Ranking

OPEN command_cursor

FETCH NEXT FROM command_cursor
INTO @id

WHILE @@FETCH_STATUS = 0
BEGIN

-- Begin a conversation and send a request message
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_body XML;

BEGIN TRANSACTION;

BEGIN DIALOG @conversation_handle
FROM SERVICE [OlaHallengrenMaintenanceTaskService]
TO SERVICE N'OlaHallengrenMaintenanceTaskService'
ON CONTRACT [OlaHallengrenMaintenanceTaskContract]
WITH ENCRYPTION = OFF;

SELECT @message_body = N'<CommandLogID>'+CAST(@id AS NVARCHAR)+N'</CommandLogID>';

SEND ON CONVERSATION @conversation_handle
MESSAGE TYPE [OlaHallengrenMaintenanceTaskMessage]
(@message_body);

COMMIT TRANSACTION;

-- Get the next command to run
FETCH NEXT FROM command_cursor
INTO @id
END
CLOSE command_cursor;
DEALLOCATE command_cursor;
GO
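
While the tasks run, the StartTime and EndTime values that the activation procedure writes back make it easy to watch progress and spot long-running commands; a simple monitoring sketch against Ola’s CommandLog table:

SELECT ID,
	CommandType,
	DatabaseName,
	ObjectName,
	StartTime,
	EndTime,
	DATEDIFF(SECOND, StartTime, EndTime) AS duration_in_seconds
FROM master.dbo.CommandLog
WHERE StartTime IS NOT NULL
ORDER BY StartTime DESC;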

I originally started out processing the records in the order generated, but I found that I would hit blocking with statistics updates on the same table until trace flag 7471 (available in SQL Server 2014 SP1+CU6) was enabled on the instance to change the locking behavior.  If you don’t have access to the trace flag, randomizing the order in which objects are queued means that only the tables with a large number of indexes or statistics will have problems running in parallel, and only at the end of the maintenance run, since the threads will be working through different tables based on the order of queuing.  In testing so far, this has resulted in 30-45% reductions in maintenance times, depending on the database schema and the types of maintenance that have to be performed.
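
If you are on a build that supports it, enabling the trace flag globally is a one-liner; test it first, though, since it changes the locking behavior of UPDATE STATISTICS for the whole instance:

-- Requires SQL Server 2014 SP1 CU6 or later
DBCC TRACEON (7471, -1);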

Looking at read_microsec in sys.dm_os_buffer_descriptors in SQL Server 2012

Last week someone sent Kimberly an email asking what the read_microsec column in the sys.dm_os_buffer_descriptors DMV in SQL Server 2012 shows. The email was passed around our team, and to be honest it was the first time I’d even heard of the column. The questioner also wanted to know if and how the read_microsec column can be used. The Books Online topic for sys.dm_os_buffer_descriptors defines this column as, “The actual time (in microseconds) required to read the page into the buffer. This number is reset when the buffer is reused.”

This would make you think the counter is per-page in the buffer pool, but I wanted to see exactly what it correlates to, and the results aren’t quite as straightforward as the Books Online description might suggest. The read_microsec column isn’t the time required to read a single page in microseconds; it’s the time taken to complete reading into the buffer pool the original I/O block that the page was part of, whether that block was a single 8KB page or a much larger I/O. That is a completely different meaning when you consider the different sizes of I/O that SQL Server can potentially perform.

Investigating this further

To look at this further, the first thing to do is identify the Page ID for a specific row in a database that we can use for tracking the I/O associated with bringing that page into the buffer pool. To do this, we can use the fn_PhysLocFormatter function and %%physloc%% as shown by Paul in his blog post, SQL Server 2008: New (undocumented) physical row locator function.

SELECT TOP 1
	sys.fn_PhysLocFormatter(%%physloc%%) AS [Physical_Loc],
	*
FROM AdventureWorks2012_Slow.Production.Product
ORDER BY ProductID;

[Screenshot: %%physloc%% formatted results]

In my copy of AdventureWorks2012, this returns Page ID 791, as shown in the screenshot above. Using this Page ID we can clear the buffer pool with DBCC DROPCLEANBUFFERS and then rerun our query to read the page back into the buffer pool from disk. However, since we want to track what the new column in sys.dm_os_buffer_descriptors means, let’s first create an Extended Events session to track the physical I/O operations and memory operations associated with the session_id we are running our queries on.

CREATE EVENT SESSION [PageIOLatency]
ON SERVER
    ADD EVENT sqlos.async_io_completed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlos.async_io_requested
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlos.page_allocated
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlos.page_freed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.file_read
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.file_read_completed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.physical_page_read
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.sql_statement_completed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.sql_statement_starting
        (WHERE (sqlserver.session_id=55))
    ADD TARGET package0.event_file
        (SET filename=N'PageIOLatency.xel')
WITH (TRACK_CAUSALITY=ON);

Now we can clear the buffer pool, start the event session, query the row from the Production.Product table, and then stop the event session and review the information contained in the event_file target. Specifically we want to look at the I/O operations for page_id 791 and what the duration was for reading the page.
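
The sequence of steps looks like this (a sketch; note that the session_id predicate in the event session above must match the session you run the query from):

DBCC DROPCLEANBUFFERS;
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = START;
GO
SELECT TOP 1
	sys.fn_PhysLocFormatter(%%physloc%%) AS [Physical_Loc],
	*
FROM AdventureWorks2012_Slow.Production.Product
ORDER BY ProductID;
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = STOP;
GO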

[Screenshot: Highlighted results for page_id 791]

Here I’ve highlighted the async_io_requested event and the last physical_page_read event associated with the I/O for page_id 791. If we look at the duration of the file_read_completed event, which is reported in milliseconds, we can see from the image above that the read operation took 10 milliseconds. The read_microsec column in sys.dm_os_buffer_descriptors is in microseconds, so that duration only loosely correlates to the value shown for page_id 791 when we query sys.dm_os_buffer_descriptors.

SELECT
	obd.file_id,
	obd.page_id,
	obd.page_level,
	obd.row_count,
	obd.free_space_in_bytes,
	obd.is_modified,
	obd.numa_node,
	obd.read_microsec
FROM sys.dm_os_buffer_descriptors AS obd
WHERE database_id = DB_ID('AdventureWorks2012_Slow')  AND
	obd.page_id = 791;

[Screenshot: page_id 791 buffer_descriptor]

If we look at the timestamps for the async_io_requested and async_io_completed events in the event_file output, we can calculate the delta in microseconds between the I/O request being posted and completing:

2014-02-05 08:37:40.0419342 — async_io_completed timestamp
2014-02-05 08:37:40.0309362 — async_io_requested timestamp
                  0.0109980 — delta in seconds (10998 microseconds)

This is an exact match to the read_microsec value in sys.dm_os_buffer_descriptors, which proves that the column reflects the actual I/O time associated with that page being read into the buffer pool.
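
You can do the same subtraction in T-SQL rather than counting digits by hand; DATEDIFF supports a MICROSECOND datepart with datetime2 values:

SELECT DATEDIFF(MICROSECOND,
	CAST('2014-02-05 08:37:40.0309362' AS DATETIME2(7)),  -- async_io_requested
	CAST('2014-02-05 08:37:40.0419342' AS DATETIME2(7))   -- async_io_completed
	) AS delta_in_microseconds;                           -- returns 10998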

However, it is not actually that simple…

If we go back to the event_file results, we’ll notice that we didn’t just read one page into the buffer pool with our request; we read an entire extent, 8 pages, with a single I/O operation. So if we modify our query against sys.dm_os_buffer_descriptors to show all of the pages for the AdventureWorks2012_Slow database, we’ll see that all 8 of the pages that were read in a single read have the same read_microsec value.

SELECT
	o.name,
	obd.file_id,
	obd.page_id,
	obd.page_level,
	obd.row_count,
	obd.free_space_in_bytes,
	obd.is_modified,
	obd.numa_node,
	obd.read_microsec
FROM sys.dm_os_buffer_descriptors AS obd
JOIN AdventureWorks2012_Slow.sys.allocation_units AS au
	ON obd.allocation_unit_id = au.allocation_unit_id
JOIN AdventureWorks2012_Slow.sys.partitions AS p
	ON au.container_id = p.partition_id
JOIN AdventureWorks2012_Slow.sys.objects AS o
	ON p.object_id = o.object_id
WHERE database_id = DB_ID('AdventureWorks2012_Slow')  AND
	o.is_ms_shipped = 0
ORDER BY obd.page_id, o.name, obd.read_microsec;

[Screenshot: All pages read in buffer_descriptors]

So for a single page, unless it was read by a single 8KB I/O, the read_microsec column doesn’t necessarily show an I/O-related problem. We’d have to know the size of the original I/O block that read the page into the buffer pool to know whether the read_microsec value actually indicates a problem. The reason is that with I/O latency, the size of the I/O operation matters: larger block I/Os take longer to complete than smaller ones. To demonstrate this, we can run a query that performs larger read-ahead reads from the data file and look at the impact on the read_microsec column for the pages read in the larger block I/O.

To do this, I used the sys.fn_PhysLocCracker() TVF in a CROSS APPLY against the Sales.SalesOrderDetail table to find the first set of contiguous rows in the table. Using the query below, this happened to be the TOP 2866 rows in my AdventureWorks2012_Slow database, which covered 32 contiguous pages (256KB), starting at page_id 10672 and ending at page_id 10703.

SELECT *
FROM AdventureWorks2012_Slow.Sales.SalesOrderDetail
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%);

So if we start the event session again, clear the buffer pool, and read the TOP 2866 rows from the Sales.SalesOrderDetail table, the event data shows that all 32 pages were read with a single 256KB read-ahead read, and that the duration for the file_read_completed event is 51ms.
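
The repro follows the same pattern as before (a sketch; the number of rows that maps to the first contiguous set of pages will differ in your copy of the database):

DBCC DROPCLEANBUFFERS;
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = START;
GO
SELECT TOP (2866) *
FROM AdventureWorks2012_Slow.Sales.SalesOrderDetail;
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = STOP;
GO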

[Screenshot: 256KB read-ahead read]

If we look at the information in the read_microsec column for the 32 pages in sys.dm_os_buffer_descriptors after reading the table, we’ll see again that all of the pages have the same value.

SELECT
	obd.file_id,
	obd.page_id,
	obd.page_level,
	obd.row_count,
	obd.free_space_in_bytes,
	obd.is_modified,
	obd.numa_node,
	obd.read_microsec
FROM sys.dm_os_buffer_descriptors AS obd
WHERE database_id = DB_ID('AdventureWorks2012_Slow')  AND
	obd.page_id BETWEEN 10672 AND 10703
ORDER BY page_id

[Screenshot: All pages have the same read_microsec value]

Summary

The read_microsec column of sys.dm_os_buffer_descriptors wouldn’t be my first go-to metric for looking at I/O latency or page-read issues in SQL Server. While the screenshots in this blog post show that consecutive page_ids retrieved by a single I/O operation share the same read_microsec value, the testing was done on a SQL Server with no other load and doesn’t mimic production activity in any way. On a production server, in most cases you can’t rely on consecutive pages in the output of sys.dm_os_buffer_descriptors to tell whether the pages were all read at the same time. This is especially true if the server has experienced buffer pool memory pressure or has been active for a long period of time, since pages that aren’t actively being used may be removed to make space for new pages being read in, and a single page within a larger block may remain in the buffer pool longer than the pages around it.

For troubleshooting I/O latency problems, I’d recommend reading Erin’s blog post on Monitoring Disk I/O from our Accidental DBA series, which covers other methods for looking at I/O latency, including sys.dm_io_virtual_file_stats and the Physical Disk performance counters.
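
As a starting point, here is a minimal sketch of the kind of query that post describes, using sys.dm_io_virtual_file_stats to compute the average read latency per database file:

SELECT DB_NAME(vfs.database_id) AS database_name,
	vfs.file_id,
	vfs.num_of_reads,
	CASE WHEN vfs.num_of_reads = 0 THEN 0
		ELSE vfs.io_stall_read_ms / vfs.num_of_reads
	END AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_latency_ms DESC;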

Logging Extended Events changes to the ERRORLOG

A question came up in class today about the difference between SQL Trace and Extended Events for logging information to the ERRORLOG file. Joe and I have both written about the observer overhead of Trace and Extended Events in the past (Observer Overhead and Wait Type Symptoms and Measuring “Observer Overhead” of SQL Trace vs. Extended Events), and one of the things we teach is to check whether a trace or event session may be running and impacting performance as part of general troubleshooting of performance problems in SQL Server. Anytime a user starts or stops a SQL Trace, information is logged in the ERRORLOG:

SQL Trace ID 2 was started by login “SQL2K8R2-IE2\Jonathan Kehayias”.
SQL Trace stopped. Trace ID = ‘2’. Login Name = ‘SQL2K8R2-IE2\Jonathan Kehayias’.

[Screenshot: SQL Trace messages]

However, for Extended Events nothing is logged when a user starts or stops an event session on the server. The question in class today was whether it is possible to make Extended Events log entries the way SQL Trace does, and the answer is yes, with a DDL trigger.

CREATE TRIGGER XEventLogging
ON ALL SERVER 
FOR DDL_EVENT_SESSION_EVENTS
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @EventData XML = EVENTDATA();
    DECLARE @EventType NVARCHAR(256) = @EventData.value('(EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(256)');
    DECLARE @SessionName NVARCHAR(256) = @EventData.value('(EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)');
    DECLARE @LoginName NVARCHAR(256) = @EventData.value('(EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(256)');
    DECLARE @Command NVARCHAR(MAX) = @EventData.value('(EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)');
    DECLARE @msg NVARCHAR(440) = CASE
        WHEN @EventType = 'CREATE_EVENT_SESSION'
            THEN 'Extended Event session created. Session Name = ''%s''. Login Name = ''%s''.'
        WHEN @EventType = 'ALTER_EVENT_SESSION' AND LOWER(@Command) LIKE LOWER('%STATE%=%START%')
            THEN 'Extended Event session started. Session Name = ''%s''. Login Name = ''%s''.'
        WHEN @EventType = 'ALTER_EVENT_SESSION' AND LOWER(@Command) LIKE LOWER('%STATE%=%STOP%')
            THEN 'Extended Event session stopped. Session Name = ''%s''. Login Name = ''%s''.'
        WHEN @EventType = 'DROP_EVENT_SESSION'
            THEN 'Extended Event session dropped. Session Name = ''%s''. Login Name = ''%s''.'
        END;

    -- ALTER EVENT SESSION can also fire for changes that don't start or stop the session;
    -- @msg is NULL in that case and there is nothing to log
    IF @msg IS NOT NULL
        RAISERROR(@msg, 10, 1, @SessionName, @LoginName) WITH LOG;
END
GO

Now anytime an event session is created, started, stopped, or dropped, information will be logged into the ERRORLOG file.
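
A quick way to test the trigger is to cycle a throwaway event session through its full lifecycle and then search the ERRORLOG (the session name here is hypothetical, used only for the test):

CREATE EVENT SESSION [XELoggingTest] ON SERVER
	ADD EVENT sqlserver.sql_statement_completed;
GO
ALTER EVENT SESSION [XELoggingTest] ON SERVER STATE = START;
GO
ALTER EVENT SESSION [XELoggingTest] ON SERVER STATE = STOP;
GO
DROP EVENT SESSION [XELoggingTest] ON SERVER;
GO
EXECUTE xp_readerrorlog 0, 1, N'Extended Event session';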

[Screenshot: New Extended Events messages]