Availability Groups and Columnstore Indexes

This week Paul, Kimberly, Erin, and Tim are presenting at SQLIntersection in Arizona, and one of the best things about presenting at conferences is the questions that get asked about SQL Server features. Kimberly was asked whether columnstore indexes can be used in a database that participates in an Availability Group, and there is conflicting information available online. Since I have multiple SQL Server 2012 and 2014 Availability Groups set up on my laptop for teaching our Immersion Events, I spent a little time testing and agreed to blog about the findings.

Columnstore indexes can be created in any database participating in an Availability Group without a problem. A nonclustered columnstore index can also be used for queries on a readable secondary. However, a clustered columnstore index can only be read on the primary database; if you try to query one on a readable secondary, you will get error 35371:

Msg 35371, Level 16, State 1, Line 4
SNAPSHOT isolation level is not supported on a table which has a clustered columnstore index.

So as long as the columnstore index is nonclustered, it can still be read on the secondary. Keep in mind, though, that a nonclustered columnstore index is not updateable through DML operations; to modify the data in the table you have to either leverage partition switching or disable and rebuild the index. A good read on the differences can be found in Books Online.
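As a quick illustration of that limitation (a minimal sketch against a hypothetical dbo.FactSales table, not from the original post), this is the disable/modify/rebuild pattern you end up using on SQL Server 2012 and 2014:

-- Hypothetical table; any fact-style table behaves the same way
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);

-- While the columnstore index is enabled, INSERT/UPDATE/DELETE against the
-- table fail, so data changes require disabling the index first:
ALTER INDEX NCCI_FactSales ON dbo.FactSales DISABLE;

UPDATE dbo.FactSales
SET SalesAmount = SalesAmount * 1.1
WHERE ProductKey = 42;

-- Rebuilding re-enables the columnstore index over the modified data
ALTER INDEX NCCI_FactSales ON dbo.FactSales REBUILD;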

Update: SQL Server 2016 will support clustered columnstore indexes on readable secondaries as well.

Why I hate the ring_buffer target in Extended Events

I’ve had so many questions about the same problem with the ring_buffer target in Extended Events lately that I figured I would write a blog post that explains all the information I teach about the ring_buffer target and the problems associated with it. Since the release of SQL Server 2012 and the new UI for Extended Events, I have not used the ring_buffer target once; in fact, as the title of this post says, I really hate the ring_buffer target. In this post I’ll go over the reasons I hate it and hopefully convince you to use the file_target instead.

Missing Events

This is by far the most common problem I have to explain about the ring_buffer target over email. Generally the question is phrased along the lines of:

“Hey Jonathan,

I got the code below from one of your articles on SQLServerCentral and it is not working. The problem I have is that when I run the code, it doesn’t show any deadlock graphs even though I know one just occurred in the application. It seems like I only see older deadlocks in the system_health session, but never the most recent one. I’ve turned on Trace Flag 1222 and can get the information that way, so why doesn’t this work?”

The reality of the situation is that the events are actually there; you just can’t see them because of a limitation of the sys.dm_xe_session_targets DMV. The target_data column of this DMV can only output roughly 4MB of XML data. This 4MB formatted-XML limitation was explained by Bob Ward on the CSS SQL Server Engineers blog post You may not see the data you expect in Extended Event Ring Buffer Targets…. back in 2009. To demonstrate the effect of this limitation, let’s look at the number of events contained in the ring_buffer target of the system_health event session on a SQL Server 2012 SP1+CU7 server that I have permission to share information from, using the following query.

SELECT
    ring_buffer_event_count,
    event_node_count,
    ring_buffer_event_count - event_node_count AS events_not_in_xml
FROM
(   SELECT
        target_data.value('(RingBufferTarget/@eventCount)[1]', 'int') AS ring_buffer_event_count,
        target_data.value('count(RingBufferTarget/event)', 'int') AS event_node_count
    FROM
    (   SELECT CAST(target_data AS XML) AS target_data
        FROM sys.dm_xe_sessions AS s
        INNER JOIN sys.dm_xe_session_targets AS st
            ON s.address = st.event_session_address
        WHERE s.name = N'system_health'
            AND st.target_name = N'ring_buffer'
    ) AS n
) AS t;


The ring_buffer_event_count comes from the eventCount attribute maintained on the root RingBufferTarget element of the XML document returned, and the event_node_count is simply a count of the RingBufferTarget/event nodes contained in that document. On this server the ring_buffer target holds a total of 5000 events, matching the MAX_EVENTS_LIMIT option (new in SQL Server 2012), which is set at 5000 for the system_health session. Of those, only 3574 are actually output in the XML document by the DMV. That leaves 1426 events that still exist in memory but are not available. The XML document outputs the events oldest first, so the events we can’t see are the most recently collected ones, which are exactly the ones we’d probably want access to.

We can look at the memory usage for the target using the query below, comparing the binary data size maintained within the target to the size of the formatted XML output by the DMV.

SELECT 
    target_data.value('(RingBufferTarget/@memoryUsed)[1]', 'int') AS buffer_memory_used_bytes,
    ROUND(target_data.value('(RingBufferTarget/@memoryUsed)[1]', 'int')/1024., 1) AS buffer_memory_used_kb,
    ROUND(target_data.value('(RingBufferTarget/@memoryUsed)[1]', 'int')/1024/1024., 1) AS buffer_memory_used_MB,
    DATALENGTH(target_data) AS xml_length_bytes,
    ROUND(DATALENGTH(target_data)/1024., 1) AS xml_length_kb,
    ROUND(DATALENGTH(target_data)/1024./1024,1) AS xml_length_MB
FROM
(   SELECT CAST(target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'system_health'
        AND st.target_name = N'ring_buffer'
) AS tab(target_data);


Here the target holds roughly 1.7MB of binary data, but once serialized to formatted XML, the document is roughly 4.7MB in size, slightly larger than the published 4MB limit. This is the nature of the data generated by Extended Events: it can be very compact in binary format, but serializing it to formatted XML adds a lot of overhead. The system_health event session is particularly prone to holding the maximum of 5000 events in the ring_buffer while only a smaller subset can be output by the DMV, because of the sizes of the events that session collects. The events most likely to be large are sp_server_diagnostics_component_output and xml_deadlock_report, since the size of the XML returned by each depends on the conditions under which it fired. I’ve seen sp_server_diagnostics_component_output events over 512KB in size on busy servers when problems occur, and a single event of that size significantly limits the data the DMV can return for the ring_buffer target.

No UI Support

This for me is probably the biggest reason I don’t deal with the ring_buffer target anymore in my day-to-day work. The UI for Extended Events doesn’t support breaking down the information contained in the ring_buffer target at all; the only thing it shows is the raw XML output by the target_data column of the sys.dm_xe_session_targets DMV.


This means that to consume the data you have to either open the XML and scan down the events, or write XQuery to parse the XML into tabular form, which requires that you know the events, columns, and actions used in the event session’s definition just to access the data. If I am doing short-term data collection on SQL Server 2012 and don’t want it persisted to a file in the file system, I just opt for the Live View to stream the data back into the grid, where I don’t have to deal with the XML and can rapidly find the information I’m interested in. For anything long term, or even for looking at information from the system_health event session, I use the file_target, which the UI can read and process events from, eliminating the need to write any XQuery manually.
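For comparison, this is roughly the XQuery you have to write just to get the deadlock graphs back out of the system_health ring_buffer (a minimal sketch using the standard node paths for the xml_deadlock_report event):

SELECT
    x.event_node.value('(@timestamp)[1]', 'datetime2') AS event_time,
    x.event_node.query('(data[@name="xml_report"]/value/deadlock)[1]') AS deadlock_graph
FROM
(   SELECT CAST(target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'system_health'
        AND st.target_name = N'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(event_node);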

Watch the Memory Usage

This one was new to me, but it is something to be incredibly careful about with the way the ring_buffer target gets configured on a production server. Two weeks ago, Andy Galbraith (Blog|Twitter) ran into an issue on a server where all connections were receiving 701 Insufficient System Memory errors. During his analysis Andy realized that the MEMORYCLERK_XE memory clerk in SQL Server was consuming close to 10GB of memory on a server with 16GB of total RAM and ‘max server memory’ configured to 11000MB. The problem was an event session created with a MAX_EVENTS_LIMIT of 1,000,000 events but no MAX_MEMORY configuration on the target. Since the memory usage of the target is based on the events that have been captured, and no maximum was set, the target can use an essentially unlimited amount of memory, leading to problems on servers where memory is constrained, like the one Andy was troubleshooting.
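If you do use the ring_buffer target, cap its memory explicitly. A minimal sketch (the session name and event here are hypothetical; the point is the target options):

CREATE EVENT SESSION [TrackErrors]
ON SERVER
    ADD EVENT sqlserver.error_reported
        (WHERE (severity >= 16))
    ADD TARGET package0.ring_buffer
        (SET MAX_EVENTS_LIMIT = 5000, MAX_MEMORY = 4096) -- MAX_MEMORY is in KB
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);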

Looking at read_microsec in sys.dm_os_buffer_descriptors in SQL Server 2012

Last week someone sent Kimberly an email asking what the read_microsec column in the sys.dm_os_buffer_descriptors DMV in SQL Server 2012 shows. The email was passed around our team, and to be honest it was the first time I’d even heard of the column. The questioner also wanted to know if/how the read_microsec column could be used. The Books Online topic for sys.dm_os_buffer_descriptors defines this column as, “The actual time (in microseconds) required to read the page into the buffer. This number is reset when the buffer is reused.”

This would make you think that the counter is per-page in the buffer pool, but I wanted to see exactly what it correlates to, and the results aren’t quite as straightforward as the Books Online description might suggest. The read_microsec column isn’t the time required to read a single page in microseconds – it’s the time taken to complete reading into the buffer pool the original I/O block that the page was part of, whether that block was a single 8KB page or a much larger I/O. That is a completely different meaning when you consider the different sizes of I/O that SQL Server can perform.

Investigating this further

To look at this further, the first thing to do is identify the Page ID for a specific row in a database that we can use for tracking the I/O associated with bringing that page into the buffer pool. To do this, we can use the fn_PhysLocFormatter function and %%physloc%% as shown by Paul in his blog post, SQL Server 2008: New (undocumented) physical row locator function.

SELECT TOP 1
	sys.fn_PhysLocFormatter(%%physloc%%) AS [Physical_Loc],
	*
FROM AdventureWorks2012_Slow.Production.Product
ORDER BY ProductID;

%%PhysLoc%% formatted results

In my copy of AdventureWorks2012, this returns Page ID 791, as shown in the screenshot above. Using this Page ID, we can clear the buffer pool with DBCC DROPCLEANBUFFERS and then rerun our query to read the page back into the buffer pool from disk. However, since we want to track the meaning of the new column in sys.dm_os_buffer_descriptors, first let’s create an Extended Events session to track the physical I/O operations and memory operations associated with the session_id we are running our queries on.

CREATE EVENT SESSION [PageIOLatency]
ON SERVER
    ADD EVENT sqlos.async_io_completed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlos.async_io_requested
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlos.page_allocated
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlos.page_freed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.file_read
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.file_read_completed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.physical_page_read
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.sql_statement_completed
        (WHERE (sqlserver.session_id=55)),
    ADD EVENT sqlserver.sql_statement_starting
        (WHERE (sqlserver.session_id=55))
    ADD TARGET package0.event_file
        (SET filename=N'PageIOLatency.xel')
WITH (TRACK_CAUSALITY=ON);

Now we can clear the buffer pool, start the event session, query the row from the Production.Product table, and then stop the event session and review the information contained in the event_file target. Specifically, we want to look at the I/O operations for page_id 791 and the duration of the read that brought the page in.
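For reference, those steps look something like this (a sketch; the session and file names match the definition above, and session_id 55 is specific to my test):

DBCC DROPCLEANBUFFERS;    -- flush clean pages from the buffer pool
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = START;
GO
SELECT TOP 1
    sys.fn_PhysLocFormatter(%%physloc%%) AS [Physical_Loc],
    *
FROM AdventureWorks2012_Slow.Production.Product
ORDER BY ProductID;
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = STOP;
GO
-- Read the collected events back out of the event_file target
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(N'PageIOLatency*.xel', NULL, NULL, NULL);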

Highlighted results for page_id 791

Here I’ve highlighted the async_io_requested event and the last physical_page_read event associated with the I/O for page_id 791. The duration of the file_read_completed event, which is reported in milliseconds, shows that it took 10 milliseconds to perform the read operation. The read_microsec column in sys.dm_os_buffer_descriptors is in microseconds, so that duration only loosely correlates to the value shown for page_id 791 when we query sys.dm_os_buffer_descriptors.

SELECT
	obd.file_id,
	obd.page_id,
	obd.page_level,
	obd.row_count,
	obd.free_space_in_bytes,
	obd.is_modified,
	obd.numa_node,
	obd.read_microsec
FROM sys.dm_os_buffer_descriptors AS obd
WHERE database_id = DB_ID('AdventureWorks2012_Slow')  AND
	obd.page_id = 791;

page_id 791 buffer_descriptor

If we look at the timestamps for the async_io_requested and async_io_completed events in the event_file output, we can calculate the delta in microseconds between the I/O request being posted and it completing:

2014-02-05 08:37:40.0419342 — async_io_completed timestamp
2014-02-05 08:37:40.0309362 — async_io_requested timestamp
0.0109980 seconds — a delta of 10998 microseconds
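The same delta can be verified with DATEDIFF:

SELECT DATEDIFF(MICROSECOND,
    CAST('2014-02-05 08:37:40.0309362' AS datetime2(7)),    -- async_io_requested
    CAST('2014-02-05 08:37:40.0419342' AS datetime2(7)));   -- async_io_completed
-- returns 10998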

This is an exact match to the read_microsec in sys.dm_os_buffer_descriptors, which proves that the column provides the actual I/O time associated with that page being read into the buffer pool.

However, it is not actually that simple…

If we go back to the event_file results, we’ll notice that we didn’t just read one page into the buffer pool with our request; we read an entire extent, 8 pages, with a single I/O operation. So if we modify our query against sys.dm_os_buffer_descriptors to show all of the pages for the AdventureWorks2012_Slow database, we’ll see that all 8 pages brought in by that single read have the same read_microsec value.

SELECT
	o.name,
	obd.file_id,
	obd.page_id,
	obd.page_level,
	obd.row_count,
	obd.free_space_in_bytes,
	obd.is_modified,
	obd.numa_node,
	obd.read_microsec
FROM sys.dm_os_buffer_descriptors AS obd
JOIN AdventureWorks2012_Slow.sys.allocation_units AS au
	ON obd.allocation_unit_id = au.allocation_unit_id
JOIN AdventureWorks2012_Slow.sys.partitions AS p
	ON au.container_id = p.partition_id
JOIN AdventureWorks2012_Slow.sys.objects AS o
	ON p.object_id = o.object_id
WHERE database_id = DB_ID('AdventureWorks2012_Slow')  AND
	o.is_ms_shipped = 0
ORDER BY obd.page_id, o.name, obd.read_microsec;

All pages read in buffer_descriptors

So for a single page, unless it came in through a single 8KB read, the read_microsec column doesn’t necessarily point to an I/O-related problem. We’d have to know the size of the original I/O block that read the page into the buffer pool to know whether the read_microsec value is actually a problem. The reason is that with I/O latency, the size of the I/O operation matters: larger block I/Os take longer to complete than smaller ones. To demonstrate this, we can run a query that performs larger read-ahead reads from the data file and look at the impact on the read_microsec column for the pages read in the larger block I/O.

To do this, I used the sys.fn_PhysLocCracker() TVF in a CROSS APPLY against the Sales.SalesOrderDetail table to find the first set of contiguous rows in the table. Using the query below, this happened to be the TOP 2866 rows in my AdventureWorks2012_Slow database, covering 32 contiguous pages, or 256KB, starting at page_id 10672 and ending at page_id 10703.

SELECT *
FROM AdventureWorks2012_Slow.Sales.SalesOrderDetail
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%);

So if we start the event session again, clear the buffer pool, and read the TOP 2866 rows from the Sales.SalesOrderDetail table, the event data shows that all 32 pages were read with a single 256KB read-ahead read, and that the duration of the file_read_completed event is 51ms.
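For reference, the steps are the same harness as before (the TOP (2866) row count is specific to my copy of the database, as identified above):

DBCC DROPCLEANBUFFERS;
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = START;
GO
SELECT TOP (2866) *
FROM AdventureWorks2012_Slow.Sales.SalesOrderDetail;
GO
ALTER EVENT SESSION [PageIOLatency] ON SERVER STATE = STOP;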

256KB read-ahead read

If we look at the information in the read_microsec column for the 32 pages in sys.dm_os_buffer_descriptors after reading the table, we’ll see again that all of the pages have the same value.

SELECT
	obd.file_id,
	obd.page_id,
	obd.page_level,
	obd.row_count,
	obd.free_space_in_bytes,
	obd.is_modified,
	obd.numa_node,
	obd.read_microsec
FROM sys.dm_os_buffer_descriptors AS obd
WHERE database_id = DB_ID('AdventureWorks2012_Slow')  AND
	obd.page_id BETWEEN 10672 AND 10703
ORDER BY page_id;

All pages have the same read_microsec value

Summary

The read_microsec column of sys.dm_os_buffer_descriptors wouldn’t be my first go-to metric for looking at I/O latency or page-read issues in SQL Server. While the screenshots in this blog post showed that consecutive page_ids retrieved by a single I/O operation share the same read_microsec value, the testing was done on a SQL Server with no other load and doesn’t mimic production activity in any way. On a production server, in most cases you can’t rely on consecutive pages in the output of sys.dm_os_buffer_descriptors to tell you whether the pages were all read at the same time. This is especially true if the server has experienced buffer pool memory pressure or has been active for a long period of time, since pages that aren’t actively being used may be removed to make space for new pages being read in, and a single page from a larger block may remain in the buffer pool longer than the pages that were read around it.

For troubleshooting I/O latency problems, I’d recommend reading Erin’s blog post on Monitoring Disk I/O from our Accidental DBA series, which covers other methods for looking at I/O latency including sys.dm_io_virtual_file_stats and the Physical Disk performance counters.
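As a starting point, here is a minimal sketch of the kind of sys.dm_io_virtual_file_stats query that post covers in more depth, returning the cumulative average read and write latency per database file since the files were opened:

SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    CASE WHEN vfs.num_of_reads = 0 THEN 0
        ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
    vfs.num_of_writes,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
        ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
    AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;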