In my last blog post, I provided a script that queried the XML for the resource monitor ring buffer entries in sys.dm_os_ring_buffers to identify when the system was under memory pressure as a result of the ‘max server memory’ sp_configure option not being set correctly for the server. Since that post went live, I’ve received a number of emails asking about various ‘max server memory’ configurations and how I might interpret different memory configurations, performance counter values, DMV outputs, etc. One email, from a Microsoft employee named Cristopher Benge, asked about filtering the resource monitor ring buffer entries to exclude notifications generated from inside of the SQL Server process. That wasn’t something I remembered being shown in the entries, but I wrote the original script for parsing the XML years ago, so I decided it was worth a little more investigation.
In looking into this on SQL Server 2008 R2 SP1, I found that there are two XML nodes that identify whether the entry was the result of process memory pressure, for example the plan cache consuming too many stolen pages from the buffer pool, or system memory pressure from outside of SQL Server. This was blogged about by Bob Dorr a couple of years ago in How it Works: What are the RING_BUFFER_RESOURCE_MONITOR telling me? To play around with this, I used a couple of the workload generators I’ve written in C# for the Memory Performance Tuning module of our IE2: Performance Tuning Immersion Events, and started creating internal and external memory problems for SQL Server in one of my lab VMs, where I had complete control over the activity being generated.
The first test that I ran used a plan cache pollution application I wrote that churns through gigabytes of plan cache with single-use plans against any copy of AdventureWorks to show cache trimming occurring. To perform the tests, I added two columns to the query output for the IndicatorsProcess and IndicatorsSystem elements from the XML:
-- System Memory Usage
SELECT
    EventTime,
    record.value('(/Record/ResourceMonitor/Notification)[1]', 'varchar(max)') AS [Type],
    record.value('(/Record/ResourceMonitor/IndicatorsProcess)[1]', 'int') AS [IndicatorsProcess],
    record.value('(/Record/ResourceMonitor/IndicatorsSystem)[1]', 'int') AS [IndicatorsSystem],
    record.value('(/Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'bigint') AS [Avail Phys Mem, Kb],
    record.value('(/Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint') AS [Avail VAS, Kb]
FROM
(   SELECT
        DATEADD (ss, (-1 * ((cpu_ticks / CONVERT (float, (cpu_ticks / ms_ticks))) - [timestamp]) / 1000), GETDATE()) AS EventTime,
        CONVERT (xml, record) AS record
    FROM sys.dm_os_ring_buffers
    CROSS JOIN sys.dm_os_sys_info
    WHERE ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR') AS tab
ORDER BY EventTime DESC;
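As a side note, Bob Dorr's post describes the indicator values as bitmasks rather than simple on/off flags. The bit values below (1 = high physical memory, 2 = low physical memory, 4 = low virtual memory) are my assumption based on that post, not something I have verified on every build, so treat this as a sketch and confirm the values on your own version before relying on them:

```sql
-- Sketch only: decode an IndicatorsSystem (or IndicatorsProcess) bitmask
-- into readable text. Bit values are assumed from Bob Dorr's post
-- (1 = high physical, 2 = low physical, 4 = low virtual); verify on your build.
DECLARE @Indicators int = 2; -- example value taken from the query output

SELECT
      CASE WHEN @Indicators & 1 = 1 THEN 'High Physical Memory ' ELSE '' END
    + CASE WHEN @Indicators & 2 = 2 THEN 'Low Physical Memory ' ELSE '' END
    + CASE WHEN @Indicators & 4 = 4 THEN 'Low Virtual Memory' ELSE '' END
    AS [IndicatorDescription];
```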
After allowing the plan cache pollution application to run for a couple of minutes I ran the above query and received the following output:
This shows that the resource monitor is generating low memory notifications into the ring buffers for process-level memory pressure, so we definitely need to take this into account when looking at the ring buffer output to make decisions about whether or not SQL Server is properly configured for ‘max server memory’. After running these tests, I switched to a different application that I wrote for our Immersion Events. It does nothing but consume memory from the OS, similar to how another instance of SQL Server, or a VM balloon driver, might consume memory to force memory pressure in Windows and cause applications to trim their usage. When this application is run for a couple of minutes, the following output is received:
This shows that we indeed had external memory pressure at the time that the low memory notifications occurred in the SQLOS resource monitor. One of the questions I was asked was whether I would filter out the notifications for internal process memory pressure events, and my answer to that question would be “Absolutely not!” I would only add the two columns to the output as shown in the query above. The reason for this is that a low memory notification, whether it occurs from inside SQL Server as a process or from the system being under memory pressure, is a sign that there is a problem that we need to look into further.
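If you do want the source of the pressure to jump out of the output without dropping any rows, one option is to add a derived column that categorizes each notification rather than filtering it. This is only a sketch built on the same derived-table pattern as the query above; the PressureSource column name and category labels are my own:

```sql
-- Sketch: categorize each notification instead of filtering rows out,
-- using the IndicatorsProcess and IndicatorsSystem XML elements.
SELECT
    EventTime,
    record.value('(/Record/ResourceMonitor/Notification)[1]', 'varchar(max)') AS [Type],
    CASE
        WHEN record.value('(/Record/ResourceMonitor/IndicatorsProcess)[1]', 'int') > 0
         AND record.value('(/Record/ResourceMonitor/IndicatorsSystem)[1]', 'int') = 0
            THEN 'Internal (process) memory pressure'
        WHEN record.value('(/Record/ResourceMonitor/IndicatorsSystem)[1]', 'int') > 0
            THEN 'External (system) memory pressure'
        ELSE 'No indicator set'
    END AS [PressureSource]
FROM
(   SELECT
        DATEADD (ss, (-1 * ((cpu_ticks / CONVERT (float, (cpu_ticks / ms_ticks))) - [timestamp]) / 1000), GETDATE()) AS EventTime,
        CONVERT (xml, record) AS record
    FROM sys.dm_os_ring_buffers
    CROSS JOIN sys.dm_os_sys_info
    WHERE ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR') AS tab
ORDER BY EventTime DESC;
```

Either way, the point stands: both categories warrant investigation, which is why I would surface the distinction in the output rather than hide any of the rows.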