An XEvent a Day (28 of 31) – Tracking Page Compression Operations

The Data Compression feature in SQL Server 2008 Enterprise Edition can provide significant reductions in storage requirements for SQL Server databases, and in the right implementations and scenarios, performance improvements as well.  There isn't much documented information about the internal operations of data compression available in the DMVs or SQL Trace.  Paul Randal pointed out on Twitter today that sys.dm_db_index_operational_stats() provides the page_compression_attempt_count and page_compression_success_count columns.  Beyond that, the only other documented way to monitor Data Compression is the Page Compression Attempts/sec and Pages Compressed/sec Performance Counters of the SQL Server:Access Methods object in Perfmon (http://msdn.microsoft.com/en-us/library/cc280449.aspx). 

The documented methods of monitoring Data Compression have one thing in common: they all deal only with Page compression, not Row compression, and the same holds in Extended Events, where no Row compression Events exist.  There are two Page compression Events in Extended Events: sqlserver.page_compression_attempt_failed and sqlserver.page_compression_tracing.  These two Events can be used to track Page compression operations at multiple levels, including database, object, index, and even the individual page.  The sqlserver.page_compression_tracing Event provides Start and End tracing of Page compression operations inside the Database Engine and returns the database_id, index_id, rowset_id, page_id, and duration of the compression operation.  The sqlserver.page_compression_attempt_failed Event is poorly named; it doesn't report failures in the sense that something broke, but rather the reason why a page compression attempt did not actually change the compression of the data in the page.  It also returns the database_id, index_id, rowset_id, and page_id for the compression attempt, along with a failure_reason column that correlates to the page_compression_failure_reason Map values.

-- Get the payload information for the Events 
SELECT 
    object_name, 
    column_id, 
    name, 
    type_name
FROM sys.dm_xe_object_columns
WHERE object_name IN ('page_compression_tracing', 
                      'page_compression_attempt_failed')
  AND column_type = 'data'

image

To demonstrate how these Events function, I am going to use the LineItem table from the TPC-H Benchmark, created by Quest Benchmark Factory using Level 2 for the table sizing, which makes the table roughly 1.8GB in size.  All of the indexes on the table will be rebuilt using PAGE compression, and then 10,000 rows will be added to the table.  To set up the environment, first load the TPC-H LineItem table with the appropriate seed of data; this can be done with the free trial version of Benchmark Factory.  Then rebuild all of the indexes on the LineItem table using PAGE compression, and review the PAGE compression statistics from sys.dm_db_index_operational_stats() for the database and object.

USE [TPCH]
GO
-- Rebuild the indexes with Page compression 
ALTER INDEX ALL ON dbo.H_Lineitem REBUILD WITH (DATA_COMPRESSION = PAGE)
GO
-- Look at the compression information in sys.dm_db_index_operational_stats
SELECT 
    database_id, 
    object_id, 
    index_id, 
    page_compression_attempt_count, 
    page_compression_success_count,
    (page_compression_attempt_count - page_compression_success_count) as page_compression_failure_count
FROM sys.dm_db_index_operational_stats(db_id('TPCH'), object_id('H_Lineitem'), null, null)
GO

image

Once the table and its indexes have been rebuilt using PAGE compression, we can then create our Event Session, start it, and add 10,000 rows to the LineItem table.  After we add the rows, we can then check the page compression statistics in sys.dm_db_index_operational_stats, and drop our Event Session from the server.

-- Create an Event Session to track the Page compression Events
CREATE EVENT SESSION PageCompressionTracing
ON SERVER
ADD EVENT sqlserver.page_compression_attempt_failed,
ADD EVENT sqlserver.page_compression_tracing
ADD TARGET package0.asynchronous_file_target(
     SET filename='C:\SQLBlog\PageCompressionTracing.xel',
         metadatafile='C:\SQLBlog\PageCompressionTracing.xem')
WITH (MAX_MEMORY = 8MB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 5 SECONDS)
GO
-- Start the Event Session
ALTER EVENT SESSION PageCompressionTracing
ON SERVER
STATE=START
GO
-- Insert 10000 rows into the H_Lineitem table
INSERT INTO H_Lineitem
    (l_orderkey, l_partkey, l_suppkey, l_linenumber, l_quantity, 
     l_extendedprice, l_discount, l_tax, l_returnflag, l_linestatus, 
     l_shipdate, l_commitdate, l_receiptdate, l_shipinstruct, l_shipmode, 
     l_comment)
SELECT TOP 10000 
     l_orderkey, l_partkey, l_suppkey, l_linenumber, l_quantity, 
     l_extendedprice, l_discount, l_tax, l_returnflag, l_linestatus, 
     l_shipdate, l_commitdate, l_receiptdate, l_shipinstruct, l_shipmode, 
     l_comment
FROM H_Lineitem
GO
-- Look at the compression information in sys.dm_db_index_operational_stats
SELECT 
    database_id, 
    object_id, 
    index_id, 
    page_compression_attempt_count, 
    page_compression_success_count,
    (page_compression_attempt_count - page_compression_success_count) as page_compression_failure_count
FROM sys.dm_db_index_operational_stats(db_id('TPCH'), object_id('H_Lineitem'), null, null)
GO
-- Drop the Event Session
DROP EVENT SESSION PageCompressionTracing
ON SERVER
GO

image

Now we can parse the Events that were captured by our Event Session and compare the information presented by sys.dm_db_index_operational_stats() with what was collected by Extended Events.

-- Create our result Analysis database
CREATE DATABASE [PageCompTestResults]
GO
USE [PageCompTestResults]
GO
-- Create an intermediate table to hold the raw event data
CREATE TABLE RawEventData
(Rowid int identity primary key, event_data xml)
GO
-- Read the file data into the intermediate table
INSERT INTO RawEventData (event_data)
SELECT
    CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file('C:\SQLBlog\PageCompressionTracing*.xel', 
                                     'C:\SQLBlog\PageCompressionTracing*.xem', 
                                     null, null)
GO
-- Fetch the Event Data from the Event Session Target
SELECT 
    RowID,
    event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    DATEADD(hh, 
            DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP), 
            event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
    COALESCE(event_data.value('(event/data[@name="database_id"]/value)[1]', 'int'), 
             event_data.value('(event/action[@name="database_id"]/value)[1]', 'int')) AS database_id,
    event_data.value('(event/data[@name="file_id"]/value)[1]', 'int') AS [file_id],
    event_data.value('(event/data[@name="page_id"]/value)[1]', 'int') AS [page_id],
    event_data.value('(event/data[@name="rowset_id"]/value)[1]', 'bigint') AS [rowset_id],
    event_data.value('(event/data[@name="failure_reason"]/text)[1]', 'nvarchar(150)') AS [failure_reason],
    event_data.value('(event/data[@name="opcode"]/text)[1]', 'nvarchar(10)') AS [opcode],
    event_data.value('(event/action[@name="system_thread_id"]/value)[1]', 'int') AS [system_thread_id],
    event_data.value('(event/action[@name="scheduler_id"]/value)[1]', 'int') AS [scheduler_id],
    event_data.value('(event/action[@name="cpu_id"]/value)[1]', 'int') AS [cpu_id]
INTO ParsedResults
FROM RawEventData
GO

After parsing out the data, we can begin to leverage the information we've gathered.  If we join the ParsedResults table to sys.partitions in the TPCH database on rowset_id = hobt_id, we can get the object_id and index_id and aggregate the failure reasons up to the object and index level.

SELECT 
    pr.database_id, 
    p.object_id, 
    p.index_id,
    failure_reason, 
    COUNT(*) as failure_count
FROM TPCH.sys.partitions p
JOIN ParsedResults pr
    ON pr.rowset_id = p.hobt_id
WHERE event_name = 'page_compression_attempt_failed'
GROUP BY 
    pr.database_id, 
    p.object_id, 
    p.index_id,
    failure_reason
GO
-- Look at the compression information in sys.dm_db_index_operational_stats
SELECT 
    database_id, 
    object_id, 
    index_id, 
    page_compression_attempt_count, 
    page_compression_success_count,
    (page_compression_attempt_count - page_compression_success_count) as page_compression_failure_count
FROM sys.dm_db_index_operational_stats(db_id('TPCH'), object_id('H_Lineitem'), null, null)
GO

image
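The failure_reason values returned above come from the page_compression_failure_reason Map mentioned earlier.  The full list of possible reasons can be pulled from sys.dm_xe_map_values (this is just a lookup of the Map metadata, so it is safe to run anywhere):

-- List the possible page compression failure reasons from the Map
SELECT map_key, map_value
FROM sys.dm_xe_map_values
WHERE name = 'page_compression_failure_reason'
ORDER BY map_key
GO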

With this we can see that the Extended Events sqlserver.page_compression_attempt_failed Event tracks failures and attempts that are not counted in sys.dm_db_index_operational_stats().  The PageModCountBelowThreshold failure isn't really a failed attempt at compression; it indicates that the page was evaluated for recalculation, but the modification counter for the page hadn't passed the internal threshold, so the actual compression operation wasn't performed.  If we look at the sqlserver.page_compression_tracing Event information, we can see how the numbers come together to match the output of sys.dm_db_index_operational_stats().

SELECT 
    pr.database_id, 
    p.object_id, 
    p.index_id,
    COUNT(*) as attempt_count
FROM TPCH.sys.partitions p
JOIN ParsedResults pr
    ON pr.rowset_id = p.hobt_id
WHERE event_name = 'page_compression_tracing'
  AND opcode = 'Begin'
GROUP BY 
    pr.database_id, 
    p.object_id, 
    p.index_id
GO
-- Look at the compression information in sys.dm_db_index_operational_stats
SELECT 
    database_id, 
    object_id, 
    index_id, 
    page_compression_attempt_count, 
    page_compression_success_count,
    (page_compression_attempt_count - page_compression_success_count) as page_compression_failure_count
FROM sys.dm_db_index_operational_stats(db_id('TPCH'), object_id('H_Lineitem'), null, null)
GO

image

The Event Session captured 193 attempts for this Event, of which 72 were PageModCountBelowThreshold failures; 193 minus 72 matches the 121 actual attempts reported by the DMF.  Subtracting the remaining 28 failures from 121 gives the 93 successful operations, matching the DMF as well.
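The reconciliation above can be sketched as a single query against the ParsedResults table (assuming the opcode column was captured during parsing, as the earlier tracing query requires):

-- Reconcile the Event Session counts against the DMF counters
SELECT
    COUNT(CASE WHEN event_name = 'page_compression_tracing'
                AND opcode = 'Begin' THEN 1 END) AS traced_attempts,
    COUNT(CASE WHEN event_name = 'page_compression_attempt_failed'
                AND failure_reason = 'PageModCountBelowThreshold' THEN 1 END) AS below_threshold,
    COUNT(CASE WHEN event_name = 'page_compression_attempt_failed'
                AND failure_reason <> 'PageModCountBelowThreshold' THEN 1 END) AS other_failures
FROM ParsedResults
GO

Here traced_attempts minus below_threshold should equal page_compression_attempt_count from the DMF, and subtracting other_failures from that should equal page_compression_success_count.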

An XEvent a Day (27 of 31) – The Future – Tracking Page Splits in SQL Server Denali CTP1

Nearly two years ago Kalen Delaney blogged about Splitting a page into multiple pages, showing how page splits occur inside of SQL Server.  Following her blog post, Michael Zilberstein wrote a post, Monitoring Page Splits with Extended Events, that showed how to see the sqlserver.page_split Events using Extended Events.  Eladio Rincón also blogged about Using XEvents (Extended Events) in SQL Server 2008 to detect which queries are causing Page Splits, though not in relation to Kalen's blog post.  Both of these blog posts demonstrate how to capture the sqlserver.page_split Events, but as discussed in the comments section of Michael Zilberstein's post, the Event fires for all page splits.  After Eladio's blog post, Adam Machanic and I talked and opened a Connect item to have the sqlserver.page_split Event extended in the product so that you know what kind of split is actually occurring.

https://connect.microsoft.com/SQLServer/feedback/details/388482/sql-server-extended-events-page-split-event-additions

The CTP1 release of Denali includes significant changes to the sqlserver.page_split Event that make it easier to find the splitting object as well as the type of split that is occurring.  Before we look at that, I am going to show the code required to get the object and index information in SQL Server 2008, which is based on Adam's comments to use sys.dm_os_buffer_descriptors.  For the examples in this blog post I am going to use Kalen's multipage split example from her blog post referenced above.

	-- Create the table 
	USE tempdb;
	GO
	SET NOCOUNT ON
	GO
	IF EXISTS (SELECT * FROM sys.tables
	            WHERE name = 'split_page')
	    DROP TABLE split_page;
	GO
	CREATE TABLE split_page 
	(id INT IDENTITY(0,2) PRIMARY KEY,
	id2 bigint DEFAULT 0,
	data1 VARCHAR(33) NULL, 
	data2 VARCHAR(8000) NULL);
	GO
	-- fill page until no more rows fit
	INSERT INTO split_page DEFAULT VALUES;
	GO 385
	-- verify that there is only one data page 
	DBCC IND(tempdb, split_page, -1);
	-- Create MonitorPageSplits Extended Event Session 
	IF (SELECT 1 FROM sys.server_event_sessions WHERE name = 'MonitorPageSplits') IS NOT NULL 
	   DROP EVENT SESSION MonitorPageSplits ON SERVER 
	GO 
	CREATE EVENT SESSION MonitorPageSplits ON SERVER 
	ADD EVENT sqlserver.page_split 
	( 
	    ACTION (sqlserver.database_id, sqlserver.sql_text)   
	    WHERE sqlserver.database_id = 2 
	) 
	ADD TARGET package0.ring_buffer 
	WITH(MAX_DISPATCH_LATENCY = 1 SECONDS)
	GO 
	-- Start the MonitorPageSplits Event Session 
	ALTER EVENT SESSION MonitorPageSplits ON SERVER STATE = start; 
	GO 
	-- Now insert one more row, this time filling the VARCHARs to the maximum length. 
	SET IDENTITY_INSERT split_page  ON;
	GO
	INSERT INTO split_page (id, id2, data1, data2)
	      SELECT 111, 0, REPLICATE('a', 33), REPLICATE('b', 8000);
	GO
	SET IDENTITY_INSERT split_page  OFF;
	GO 
	ALTER EVENT SESSION MonitorPageSplits ON SERVER 
	DROP EVENT sqlserver.page_split; 
	GO 
	-- Wait to allow dispatch to complete
	WAITFOR DELAY '00:00:01.000' 
	GO
	SELECT oTab.*
	  , p.OBJECT_ID
	  , p.index_id
	  , OBJECT_NAME(p.OBJECT_ID)
	  , i.name
	FROM
	(
	SELECT 
	    XEvent            = XEvent.query('.') 
	  , time              = XEvent.value('(@timestamp)[1]','datetime') 
	  , FILE_ID           = XEvent.value('(data[@name=''file_id'']/value)[1]','int') 
	  , page_id           = XEvent.value('(data[@name=''page_id'']/value)[1]','int') 
	  , database_id       = XEvent.value('(action[@name=''database_id'']/value)[1]','int') 
	  , sql_text          = XEvent.value('(action[@name=''sql_text'']/value)[1]','varchar(max)') 
	FROM 
	( 
	   SELECT CAST(target_data AS XML) AS target_data 
	   FROM sys.dm_xe_session_targets xst 
	   JOIN sys.dm_xe_sessions xs ON xs.address = xst.event_session_address 
	   WHERE xs.name = 'MonitorPageSplits' 
	) AS tab (target_data) 
	CROSS APPLY target_data.nodes('/RingBufferTarget/event') AS EventNodes(XEvent) 
	) AS oTab
	LEFT JOIN sys.dm_os_buffer_descriptors AS obd
	   ON obd.database_id = oTab.database_id
	       AND obd.FILE_ID = oTab.FILE_ID
	       AND obd.page_id = oTab.page_id
	LEFT JOIN sys.allocation_units au
	   ON au.allocation_unit_id = obd.allocation_unit_id
	LEFT JOIN sys.partitions p 
	   ON p.partition_id = au.container_id  
	LEFT JOIN sys.indexes i
	   ON p.OBJECT_ID = i.OBJECT_ID
	       AND p.index_id = i.index_id
	-- View the page allocations after the splits 
	DBCC IND(tempdb, split_page, -1);
	
	

The above code creates a table in tempdb, loads one page of data exactly as in Kalen's blog post, and then creates an Event Session for the sqlserver.page_split Event in tempdb that also collects the sqlserver.database_id and sqlserver.sql_text Actions when the Event fires.  After triggering the page split, it drops the Event from the Event Session and then uses WAITFOR DELAY to allow the events to be dispatched to the package0.ring_buffer Target.  Then it shreds the XML and joins to the DMVs to get the object and index names.  The output of running the above script in SQL Server 2008 should be similar to the following, showing 10 split events and 10 additional pages in the database table.

 image

Note that the only two columns returned by the sqlserver.page_split Event are the file_id and page_id.  In SQL Server Denali CTP1, the sqlserver.page_split Event has a much larger Event payload associated with it.  It now returns the file_id, page_id, database_id (as a part of the Event, not requiring an Action), rowset_id, splitOperation, new_page_file_id, and new_page_page_id associated with the page_split Event.  This makes the Event much more useful and allows it to be used without having to query the buffer descriptors to find the object association.  The following demo is identical to the demo for SQL Server 2008 above, except that the XQuery is slightly different (a requirement to pull the new information from the XML).

	-- Create the table 
	USE tempdb;
	GO
	SET NOCOUNT ON
	GO
	IF EXISTS (SELECT * FROM sys.tables
	            WHERE name = 'split_page')
	    DROP TABLE split_page;
	GO
	CREATE TABLE split_page 
	(id INT IDENTITY(0,2) PRIMARY KEY,
	id2 bigint DEFAULT 0,
	data1 VARCHAR(33) NULL, 
	data2 VARCHAR(8000) NULL);
	GO
	-- fill page until no more rows fit
	INSERT INTO split_page DEFAULT VALUES;
	GO 385
	-- verify that there is only one data page 
	DBCC IND(tempdb, split_page, -1);
	-- Create MonitorPageSplits Extended Event Session 
	IF (SELECT 1 FROM sys.server_event_sessions WHERE name = 'MonitorPageSplits') IS NOT NULL 
	   DROP EVENT SESSION MonitorPageSplits ON SERVER 
	GO 
	CREATE EVENT SESSION MonitorPageSplits ON SERVER 
	ADD EVENT sqlserver.page_split 
	( 
	    ACTION (sqlserver.database_id, sqlserver.sql_text)   
	    WHERE sqlserver.database_id = 2 
	) 
	ADD TARGET package0.ring_buffer 
	WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
	GO 
	-- Start the MonitorPageSplits Event Session 
	ALTER EVENT SESSION MonitorPageSplits ON SERVER STATE = start; 
	GO 
	-- Now insert one more row, this time filling the VARCHARs to the maximum length. 
	SET IDENTITY_INSERT split_page  ON;
	GO
	INSERT INTO split_page (id, id2, data1, data2)
	      SELECT 111, 0, REPLICATE('a', 33), REPLICATE('b', 8000);
	GO
	SET IDENTITY_INSERT split_page  OFF;
	GO 
	ALTER EVENT SESSION MonitorPageSplits ON SERVER 
	DROP EVENT sqlserver.page_split; 
	
	GO
	SELECT 
	    event_time         = XEvent.value('(@timestamp)[1]','datetime') 
	  , orig_file_id      = XEvent.value('(data[@name=''file_id'']/value)[1]','int') 
	  , orig_page_id      = XEvent.value('(data[@name=''page_id'']/value)[1]','int') 
	  , database_id           = XEvent.value('(data[@name=''database_id'']/value)[1]','int') 
	  , OBJECT_ID         = p.OBJECT_ID
	  , index_id          = p.index_id
	  , OBJECT_NAME           = OBJECT_NAME(p.OBJECT_ID)
	  , index_name            = i.name
	  , rowset_id         = XEvent.value('(data[@name=''rowset_id'']/value)[1]','bigint') 
	  , splitOperation        = XEvent.value('(data[@name=''splitOperation'']/text)[1]','varchar(255)') 
	  , new_page_file_id  = XEvent.value('(data[@name=''new_page_file_id'']/value)[1]','int') 
	  , new_page_page_id  = XEvent.value('(data[@name=''new_page_page_id'']/value)[1]','int') 
	  , sql_text          = XEvent.value('(action[@name=''sql_text'']/value)[1]','varchar(max)') 
	FROM 
	( 
	   SELECT CAST(target_data AS XML) AS target_data 
	   FROM sys.dm_xe_session_targets xst 
	   JOIN sys.dm_xe_sessions xs ON xs.address = xst.event_session_address 
	   WHERE xs.name = 'MonitorPageSplits' 
	) AS tab (target_data) 
	CROSS APPLY target_data.nodes('/RingBufferTarget/event') AS EventNodes(XEvent) 
	LEFT JOIN sys.allocation_units au
	   ON au.container_id = XEvent.value('(data[@name=''rowset_id'']/value)[1]','bigint') 
	LEFT JOIN sys.partitions p 
	   ON p.partition_id = au.container_id  
	LEFT JOIN sys.indexes i
	   ON p.OBJECT_ID = i.OBJECT_ID
	       AND p.index_id = i.index_id
	-- View the Page allocations 
	DBCC IND(tempdb, split_page, -1);
	

If you run the above demo, the output should be similar to the below (if you click on the picture, it will open up larger).  One thing that should become immediately obvious is that the same demo in Denali performs one third of the page splits that occur in SQL Server 2008. 

image

The original page_id and new_page_page_id columns tell where the page originated and where the data moved to, and the splitOperation tells the type of split.  In this case only two types of splits are occurring: SPLIT_FOR_ROOT_NODE, which occurs when the first page allocated is split into multiple pages, and SPLIT_FOR_INSERT, which occurs as the inserts continue and pages are split to accommodate the data.  A number of additional split operations exist in SQL Server Denali CTP1, including SPLIT_FOR_DELETE, SPLIT_FOR_GHOST, SPLIT_FOR_INTERNAL_NODE, and SPLIT_FOR_UPDATE.  I've tried to figure out how to correlate the output from DBCC IND with the data held in the Event Session, matching the original and new page_ids to identify problematic splits, but I haven't finalized validation of my tests yet (hopefully I can finish this work and write an update to this blog post showing how to do it in the near future).  One item I have noted in my testing is that mid-page splits generally generate multiple sqlserver.page_split Events in the same operation, similar to the demonstrations used in this example, whereas end-page splits for identity and sequential GUID inserts do not.  I am not certain that this is a valid conclusion at this point, and I have further testing to do to investigate page splits more.
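Since splitOperation is delivered as mapped text, the complete set of split operation types can be enumerated from the Map metadata.  I don't have the exact Map name handy, so this sketch filters on the values themselves instead:

-- List the split operation types from the Extended Events Maps (Denali CTP1)
SELECT name, map_key, map_value
FROM sys.dm_xe_map_values
WHERE map_value LIKE 'SPLIT_FOR%'
ORDER BY name, map_key
GO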

An XEvent a Day (26 of 31) – Configuring Session Options

There are seven Session-level options that can be configured in Extended Events that affect the way an Event Session operates.  These options can impact performance and should be considered when configuring an Event Session.  I have made use of a few of them periodically throughout this month's blog posts, and in today's post I'll cover each of the options separately and provide further information about their usage.  Mike Wachal from the Extended Events team at Microsoft covered the Session options in his blog post, Option Trading: Getting the most out of the event session options, and I'd recommend giving it a read for additional information as well.

EVENT_RETENTION_MODE

The EVENT_RETENTION_MODE option specifies how the Event Session handles Event loss when Events are generated faster than they can be dispatched to the Targets.  There are three possible values for this option: ALLOW_SINGLE_EVENT_LOSS, ALLOW_MULTIPLE_EVENT_LOSS, and NO_EVENT_LOSS.  This option directly affects the impact an Event Session may have on the performance of a system while the Event Session is active; there is a trade-off between performance impact and the guarantee that all Events are captured.

ALLOW_SINGLE_EVENT_LOSS

The ALLOW_SINGLE_EVENT_LOSS value is the default for all Event Sessions where the EVENT_RETENTION_MODE is not explicitly specified as a part of the Event Session definition.  This value allows single Events to be dropped and lost from the session when the memory buffers for the Event Session are full and dispatch to the Targets cannot keep up with Event generation. 

ALLOW_MULTIPLE_EVENT_LOSS

The ALLOW_MULTIPLE_EVENT_LOSS value allows an entire memory buffer containing multiple Events to be dropped and lost when the memory buffers are full and Events are being generated faster than the buffers can be dispatched to the Targets.  This minimizes the performance impact on the server, at the cost that many Events can potentially be lost, with the number of Events lost depending on the size of the Events being generated, the configuration of the MAX_MEMORY session option, and the MEMORY_PARTITION_MODE session option. 

NO_EVENT_LOSS

The NO_EVENT_LOSS value guarantees that all Events that fire are captured, at the expense of possible performance degradation while the Event Session is active.  If the memory buffers are all full and an Event fires, the task firing the Event waits until space is available in a memory buffer for the Event to be buffered.  This value is not recommended by the Extended Events team at Microsoft for most Event Sessions and should be used with extreme caution, only when it is absolutely necessary that every Event be captured, even at the expense of degraded system performance.

MAX_DISPATCH_LATENCY

The MAX_DISPATCH_LATENCY option specifies the time in seconds that Events are held in a memory buffer that is not full before being dispatched to the asynchronous session Targets.  The default value if the MAX_DISPATCH_LATENCY is not explicitly defined in the Session definition is 30 seconds, and the option has a minimum value of 1 second.  If a value of 0 or INFINITE is specified, the Events held in a memory buffer will not be dispatched until the memory buffer becomes full.

MAX_EVENT_SIZE

The MAX_EVENT_SIZE option specifies the maximum size in kilobytes or megabytes an individual Event can be.  The default value for this option when it is not explicitly set in the Session definition is 0KB, allowing the maximum Event size to be the size of a single memory buffer in the Event Session.  This option can be explicitly set to allow Events that are larger than a single memory buffer to be captured by the Event Session.  The minimum value for this option is 64KB.

MAX_MEMORY

The MAX_MEMORY option specifies the amount of memory in kilobytes or megabytes that is allocated to the memory buffers for the Event Session.  The value of this option is divided evenly amongst the memory buffers that are created for the Event Session, based on the configuration of the MEMORY_PARTITION_MODE session option.  The MAX_MEMORY option can be used to increase the memory available for buffering Events when a large number of Events are expected to fire, minimizing Event loss due to full memory buffers.  The default value for this option is 4 megabytes (MB), or 4096 kilobytes (KB). 

Mike Wachal blogged about this option on the Extended Events blog Take it to the MAX (and beyond), and again in response to a number of questions that I sent him early on in this blog series when I was working on a large NUMA based server, Session memory – who’s this guy named Max and what’s he doing with my memory?

MEMORY_PARTITION_MODE

The MEMORY_PARTITION_MODE option specifies how the memory buffers for the Event Session are created and/or partitioned.  On servers with multiple processors and/or multiple NUMA nodes, the memory buffers can become a performance bottleneck if multiple CPUs are firing Events and have to wait on a memory buffer to hold the Event information being collected.  There are three values for this option: NONE, PER_NODE, and PER_CPU. 

NONE

The NONE value specifies that a single set of memory buffers will be created for the Event Session.  In this configuration, three memory buffers are created for the Event Session, and the memory for the Event Session is divided evenly, to the nearest 64KB boundary, amongst the three memory buffers.  This is the default value for an Event Session if the MEMORY_PARTITION_MODE is not explicitly defined.

PER_NODE

The PER_NODE value specifies that a separate set of three memory buffers will be created for each NUMA node that exists for the SQL Server Instance, with the memory divided evenly, to the nearest 64KB boundary, amongst all of the memory buffers.  

PER_CPU

The PER_CPU value specifies that a set of memory buffers is created for each CPU/Scheduler that is assigned to the SQL Server Instance.  In this configuration, the number of memory buffers is 2.5 times the number of CPUs/Schedulers available, and the memory is divided evenly, to the nearest 64KB boundary, amongst all of the memory buffers.
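The way MAX_MEMORY and MEMORY_PARTITION_MODE translate into actual buffers can be seen for any running session in sys.dm_xe_sessions (substitute your own session name for the one from the earlier demo):

-- Inspect the buffer count and sizing for an active Event Session
SELECT 
    name, 
    total_regular_buffers,  -- 3 for NONE, 3 per node for PER_NODE, 2.5 x CPUs for PER_CPU
    regular_buffer_size,    -- size of each buffer in bytes
    total_buffer_size       -- total memory allocated to the buffers
FROM sys.dm_xe_sessions
WHERE name = 'PageCompressionTracing'
GO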

STARTUP_STATE

The STARTUP_STATE option specifies whether an Event Session automatically starts in an Active state when the SQL Server Instance starts up.  There are two valid values for this option, ON and OFF, with OFF being the default.

TRACK_CAUSALITY

The TRACK_CAUSALITY option specifies whether causality tracking across multiple Events is turned ON or OFF; the default is OFF.  When TRACK_CAUSALITY is turned on, an additional Action, package0.attach_activity_id, is added to each Event that fires in the Event Session.  This Action is a combination of a GUID and a sequence number that allows related Events to be tracked for cause-and-effect analysis in the order in which they fired.
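Pulling several of these options together, a hypothetical Event Session definition might look like the following (the session name and the Event chosen are illustrative only; pick values appropriate to what you are actually tracing):

-- Demonstrate the Session-level options discussed above
CREATE EVENT SESSION SessionOptionsDemo
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 8MB,                                  -- memory for the session buffers
      EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS,  -- favor performance over capture
      MAX_DISPATCH_LATENCY = 5 SECONDS,                  -- dispatch partially full buffers after 5s
      MEMORY_PARTITION_MODE = PER_NODE,                  -- partition buffers by NUMA node
      STARTUP_STATE = OFF,                               -- do not auto-start with the Instance
      TRACK_CAUSALITY = ON)                              -- add attach_activity_id to each Event
GO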

I should note that in many cases the options specified in these blog posts may not be appropriate for a production implementation; some choices were made simply because I didn't want to wait through multiple test cycles for Events to dispatch to the Targets.