[Edit 2016: Check out my new resource – a comprehensive library of all wait types and latch classes – see here.]
Wait statistics, as you know, are one of my favorite things to do with SQL Server, along with corruption, the transaction log, and Kimberly (but not necessarily in that order :-)
One of the things that really frustrates me about wait statistics is that there is hardly any documentation about what the various wait types actually mean. For example, the Microsoft documentation for the WRITE_COMPLETION wait is ‘Occurs when a write operation is in progress.’ That’s not very illuminating. What kind of writes? Under what circumstances?
There is a relatively easy way to figure out when particular wait types are occurring: using Extended Events to track the SQL Server code call stacks at the point the wait occurs. This information and methodology are also needed by quite a few of the people I'm working with to analyze their 24 hours of wait statistics (see this post).
Here’s what you need to do:
- Download the symbol files for the SQL Server instance you’re interested in
- Create an Extended Event session to track the wait type you’re interested in
- Run your workload
- Examine the call stacks that you've collected
And in this post I’ll show you how to do these steps.
Symbol files and call stacks
Whenever an executable is compiled, you can optionally have the compiler generate symbols that can help with debugging. The symbols effectively correlate offsets in the compiled code with the human-readable names of code functions, class methods, and variables. This allows us to look at what’s called a call stack and figure out what SQL Server is doing.
As an example of a call stack, consider the following C++ pseudo-code:
bool Thing::ValidateThing (ThingPtr thing) {
    // blah
    if (thing->m_Eggs != GREEN && thing->m_side != HAM) {
        __ThrowError (THING_NOT_VALID);
    }
    return TRUE;
}

void CountManager::AddNextThingToTotal (ThingPtr thing) {
    // blah
    if (TRUE == thing->ValidateThing (thing)) {
        m_ThingCount++;
    }
}

int CountManager::CountListOfThings (ThingListPtr things) {
    // blah
    AddNextThingToTotal (thing);
    // blah
    return m_ThingCount;
}
And if we wanted to see all the call stacks that end up with an error being thrown, one such call stack might look like:
__ThrowError
Thing::ValidateThing+0x26
CountManager::AddNextThingToTotal+0x441
CountManager::CountListOfThings+0x104
It lets us see at what point in the executable something happens, and we can make sense of the call stack if the words in there make sense in our context. You might be concerned that you don’t know the internals of SQL Server, but most of the time the names of the classes and methods have enough information for you to be able to work out what’s happening.
We need the SQL Server symbol files for this to work. You can get them freely from Microsoft and I have a blog post with instructions to do it: How to download a sqlserver.pdb symbol file. If you have trouble with this, let me know as it can be tricky.
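The symbols have to match the exact build you're running, so it's worth confirming the version before you go looking for the PDB; something like this will tell you:

-- Confirm the exact build so you can grab the matching symbols
SELECT
    SERVERPROPERTY (N'ProductVersion') AS [ProductVersion],
    SERVERPROPERTY (N'ProductLevel') AS [ProductLevel],
    SERVERPROPERTY (N'Edition') AS [Edition];
GO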
Make sure your call stacks look correct – see the example in the ‘how to’ post.
Extended Event Session
The Extended Event session to use is pretty simple. It uses the histogram target (called the ‘asynchronous bucketizer’ in earlier versions) to capture unique call stacks when the wait type we’re interested in occurs.
We do have to make sure that we use the correct wait type value in the session though, as some of the wait type values changed in Extended Events from version to version. The code to get the correct value to use is as follows, using the WRITE_COMPLETION wait type as the example:
-- Figure out the wait type that we need
SELECT [map_key]
FROM sys.dm_xe_map_values
WHERE [name] = N'wait_types'
    AND [map_value] = N'WRITE_COMPLETION';
GO
One problem you may have is that some wait types aren’t listed by the name that shows up in sys.dm_os_wait_stats. Jonathan has a handy blog post that does the mapping for these – see here if you run the query above and don’t get a result.
On SQL Server 2012 I get the following result:
map_key
-----------
628
You MUST make sure to check the map value for your build, as it changes from release to release, including some service packs.
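And if the query doesn't return a row at all for the name you tried, a quick-and-dirty option (just a sketch) is to wildcard-search the map for something close to the name you expect:

-- Search the map for wait types with a similar name
SELECT [map_key], [map_value]
FROM sys.dm_xe_map_values
WHERE [name] = N'wait_types'
    AND [map_value] LIKE N'%WRITE%';
GO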
We then plug that value into an Extended Event session that will give us all the call stacks and how many times each was hit:
-- Drop the session if it exists.
IF EXISTS (
    SELECT * FROM sys.server_event_sessions
        WHERE [name] = N'InvestigateWaits')
    DROP EVENT SESSION [InvestigateWaits] ON SERVER
GO

-- Create the event session
-- Note that before SQL 2012, the wait_type to use may be
-- a different value.
-- On SQL 2012 the target name is 'histogram' but the old
-- name still works.
CREATE EVENT SESSION [InvestigateWaits] ON SERVER
ADD EVENT [sqlos].[wait_info]
(
    ACTION ([package0].[callstack])
    WHERE [wait_type] = 628 -- WRITE_COMPLETION only
        AND [opcode] = 1 -- Just the end wait events
        --AND [duration] > X-milliseconds
)
ADD TARGET [package0].[asynchronous_bucketizer]
(
    SET filtering_event_name = N'sqlos.wait_info',
        source_type = 1, -- source_type = 1 is an action
        source = N'package0.callstack' -- bucketize on the callstack
)
WITH
(
    MAX_MEMORY = 50 MB,
    MAX_DISPATCH_LATENCY = 5 SECONDS)
GO

-- Start the session
ALTER EVENT SESSION [InvestigateWaits] ON SERVER
STATE = START;
GO

-- TF to allow call stack resolution
DBCC TRACEON (3656, -1);
GO
If the trace flag isn’t enabled, the call stacks will not be resolved by SQL Server. The trace flag also pins dbghelp.dll in memory – don’t worry about it causing a perf issue.
NOTE: SQL Server 2019 onward also requires trace flag 2592 to be enabled in the same way.
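So on SQL Server 2019 or later, for instance, you'd enable both trace flags and then confirm they're on with something like:

-- Enable call stack resolution (3656) plus the extra flag needed
-- on SQL Server 2019 onward (2592), then check they're both on
DBCC TRACEON (3656, -1);
DBCC TRACEON (2592, -1);
DBCC TRACESTATUS (-1);
GO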
Note that there’s also a new Extended Event called wait_completed that was added in SQL Server 2014 – I’m using wait_info as it’s available in all versions with Extended Events.
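If you're curious which wait-related events your particular build exposes, a quick look in the Extended Events metadata will show you:

-- List the wait-related Extended Events available on this build
SELECT [name], [description]
FROM sys.dm_xe_objects
WHERE [object_type] = N'event'
    AND [name] LIKE N'wait%';
GO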
If you want to grab the call stacks and the wait duration for every wait that occurs (e.g. so you can identify the cause of long-duration waits), change the session to:
CREATE EVENT SESSION [InvestigateWaits] ON SERVER
ADD EVENT [sqlos].[wait_info]
(
    ACTION ([package0].[callstack])
    WHERE [wait_type] = 628 -- WRITE_COMPLETION only
        AND [opcode] = 1 -- Just the end wait events
        --AND [duration] > X-milliseconds
)
ADD TARGET [package0].[ring_buffer]
WITH
(
    MAX_MEMORY = 50 MB,
    MAX_DISPATCH_LATENCY = 5 SECONDS)
GO
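To get the data back out of the ring buffer, a query along the following lines should work – this is just a sketch that assumes the usual RingBufferTarget/event XML layout, so tweak it as needed:

-- Shred the ring_buffer target: one row per wait, with its
-- duration (in milliseconds) and the collected call stack
SELECT
    [xed].[event_data].value ('(./data[@name="duration"]/value)[1]', 'BIGINT') AS [Duration_ms],
    [xed].[event_data].value ('(./action[@name="callstack"]/value)[1]', 'VARCHAR(MAX)') AS [CallStack]
FROM
(
    SELECT CAST ([xst].[target_data] AS XML) AS [target_data]
    FROM sys.dm_xe_session_targets [xst]
    INNER JOIN sys.dm_xe_sessions [xs]
        ON [xst].[event_session_address] = [xs].[address]
    WHERE [xs].[name] = N'InvestigateWaits'
        AND [xst].[target_name] = N'ring_buffer'
) AS [src]
CROSS APPLY [target_data].nodes ('//RingBufferTarget/event') AS [xed] ([event_data]);
GO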
Workload
Now that the Extended Event session exists, you need to run your workload. This may be just your regular workload, or a few example commands that you think may be involved.
As an example, for the WRITE_COMPLETION wait type, I’ll do something simple like creating a database.
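Something as simple as this will do it (the database name is just a placeholder – use whatever you like):

-- Simple workload to trigger WRITE_COMPLETION waits
CREATE DATABASE [WaitsTest];
GO
DROP DATABASE [WaitsTest];
GO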
Be careful: depending on the wait type you’re investigating, the Extended Event session may cause a small performance issue (e.g. a locking wait, CXPACKET wait, or PAGEIOLATCH_XX wait) so be prepared to stop the session. If you stop the session though, the information in the session histogram target disappears, so grab the data from it (see the Analysis section below) before stopping the session using the following code.
-- Stop the event session
ALTER EVENT SESSION [InvestigateWaits] ON SERVER
STATE = STOP;
GO
But the longer you can run the session for, the more likely it is that you'll capture the causes of the wait type you're interested in.
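A quick way to keep an eye on how long the session has been running, and whether it's having to drop any events, is something like:

-- Check when the session started and whether it's dropping events
SELECT [name], [create_time], [dropped_event_count]
FROM sys.dm_xe_sessions
WHERE [name] = N'InvestigateWaits';
GO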
Analysis
To get the data out of the histogram target, use the following code:
-- Get the callstacks from the bucketizer target
SELECT
    [event_session_address],
    [target_name],
    [execution_count],
    CAST ([target_data] AS XML)
FROM sys.dm_xe_session_targets [xst]
INNER JOIN sys.dm_xe_sessions [xs]
    ON [xst].[event_session_address] = [xs].[address]
WHERE [xs].[name] = N'InvestigateWaits';
GO
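If you'd rather have the call stacks shredded into rows, with the number of times each one was hit, a query along these lines should do it – again just a sketch, assuming the usual HistogramTarget/Slot XML layout:

-- Shred the histogram target: one row per distinct call stack,
-- plus the number of times that call stack was hit
SELECT
    [xed].[slot_data].value ('./@count', 'INT') AS [Count],
    [xed].[slot_data].value ('(./value)[1]', 'VARCHAR(MAX)') AS [CallStack]
FROM
(
    SELECT CAST ([xst].[target_data] AS XML) AS [target_data]
    FROM sys.dm_xe_session_targets [xst]
    INNER JOIN sys.dm_xe_sessions [xs]
        ON [xst].[event_session_address] = [xs].[address]
    WHERE [xs].[name] = N'InvestigateWaits'
) AS [src]
CROSS APPLY [target_data].nodes ('//HistogramTarget/Slot') AS [xed] ([slot_data])
ORDER BY [Count] DESC;
GO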
In the example I’ve created, where I’m looking at WRITE_COMPLETION waits occurring, I get back a bunch of call stacks. Here are the first few call stacks collected on SQL Server 2012 SP1:
XeSosPkg::wait_info::Publish+138 [ @ 0+0x0
SOS_Scheduler::UpdateWaitTimeStats+30c [ @ 0+0x0
SOS_Task::PostWait+90 [ @ 0+0x0
EventInternal::Wait+1f9 [ @ 0+0x0
FCB::SyncWrite+104 [ @ 0+0x0
DBMgr::CopyModel+fe [ @ 0+0x0
DBMgr::CreateAndFormatFiles+966 [ @ 0+0x0
CStmtCreateDB::CreateLocalDatabaseFragment+682 [ @ 0+0x0
DBDDLAgent::CreateDatabase+f7 [ @ 0+0x0
CStmtCreateDB::XretExecute+8fc [ @ 0+0x0
CMsqlExecContext::ExecuteStmts<1,1>+400 [ @ 0+0x0
CMsqlExecContext::FExecute+a33 [ @ 0+0x0
CSQLSource::Execute+866 [ @ 0+0x0
process_request+73c [ @ 0+0x0
process_commands+51c [ @ 0+0x0
SOS_Task::Param::Execute+21e [ @ 0+0x0
SOS_Scheduler::RunTask+a8 [ @ 0+0x0
SOS_Scheduler::ProcessTasks+29a [ @ 0+0x0
SchedulerManager::WorkerEntryPoint+261 [ @ 0+0x0
SystemThread::RunWorker+8f [ @ 0+0x0
SystemThreadDispatcher::ProcessWorker+3c8 [ @ 0+0x0
SchedulerManager::ThreadEntryPoint+236 [ @ 0+0x0
BaseThreadInitThunk+d [ @ 0+0x0
RtlUserThreadStart+21 [ @ 0+0x0

XeSosPkg::wait_info::Publish+138 [ @ 0+0x0
SOS_Scheduler::UpdateWaitTimeStats+30c [ @ 0+0x0
SOS_Task::PostWait+90 [ @ 0+0x0
EventInternal::Wait+1f9 [ @ 0+0x0
FCB::SyncWrite+104 [ @ 0+0x0
FCB::PageWriteInternal+55 [ @ 0+0x0
InitGAMIntervalPages+4cb [ @ 0+0x0
InitDBAllocPages+d0 [ @ 0+0x0
FileMgr::CreateNewFile+137 [ @ 0+0x0
AsynchronousDiskAction::ExecuteDeferredAction+8f [ @ 0+0x0
AsynchronousDiskWorker::ThreadRoutine+15c [ @ 0+0x0
SubprocEntrypoint+a21 [ @ 0+0x0
SOS_Task::Param::Execute+21e [ @ 0+0x0
SOS_Scheduler::RunTask+a8 [ @ 0+0x0
SOS_Scheduler::ProcessTasks+29a [ @ 0+0x0
SchedulerManager::WorkerEntryPoint+261 [ @ 0+0x0
SystemThread::RunWorker+8f [ @ 0+0x0
SystemThreadDispatcher::ProcessWorker+3c8 [ @ 0+0x0
SchedulerManager::ThreadEntryPoint+236 [ @ 0+0x0
BaseThreadInitThunk+d [ @ 0+0x0
RtlUserThreadStart+21 [ @ 0+0x0

XeSosPkg::wait_info::Publish+138 [ @ 0+0x0
SOS_Scheduler::UpdateWaitTimeStats+30c [ @ 0+0x0
SOS_Task::PostWait+90 [ @ 0+0x0
EventInternal::Wait+1f9 [ @ 0+0x0
FCB::SyncWrite+104 [ @ 0+0x0
FCB::PageWriteInternal+55 [ @ 0+0x0
DirectlyMarkPFSPage+154 [ @ 0+0x0
InitGAMIntervalPages+52a [ @ 0+0x0
InitDBAllocPages+d0 [ @ 0+0x0
FileMgr::CreateNewFile+137 [ @ 0+0x0
AsynchronousDiskAction::ExecuteDeferredAction+8f [ @ 0+0x0
AsynchronousDiskWorker::ThreadRoutine+15c [ @ 0+0x0
SubprocEntrypoint+a21 [ @ 0+0x0
SOS_Task::Param::Execute+21e [ @ 0+0x0
SOS_Scheduler::RunTask+a8 [ @ 0+0x0
SOS_Scheduler::ProcessTasks+29a [ @ 0+0x0
SchedulerManager::WorkerEntryPoint+261 [ @ 0+0x0
SystemThread::RunWorker+8f [ @ 0+0x0
SystemThreadDispatcher::ProcessWorker+3c8 [ @ 0+0x0
SchedulerManager::ThreadEntryPoint+236 [ @ 0+0x0
BaseThreadInitThunk+d [ @ 0+0x0
RtlUserThreadStart+21 [ @ 0+0x0

XeSosPkg::wait_info::Publish+138 [ @ 0+0x0
SOS_Scheduler::UpdateWaitTimeStats+30c [ @ 0+0x0
SOS_Task::PostWait+90 [ @ 0+0x0
EventInternal::Wait+1f9 [ @ 0+0x0
FCB::SyncWrite+104 [ @ 0+0x0
SQLServerLogMgr::FormatVirtualLogFile+175 [ @ 0+0x0
SQLServerLogMgr::FormatLogFile+c3 [ @ 0+0x0
FileMgr::CreateNewFile+106 [ @ 0+0x0
AsynchronousDiskAction::ExecuteDeferredAction+8f [ @ 0+0x0
AsynchronousDiskWorker::ThreadRoutine+15c [ @ 0+0x0
SubprocEntrypoint+a21 [ @ 0+0x0
SOS_Task::Param::Execute+21e [ @ 0+0x0
SOS_Scheduler::RunTask+a8 [ @ 0+0x0
SOS_Scheduler::ProcessTasks+29a [ @ 0+0x0
SchedulerManager::WorkerEntryPoint+261 [ @ 0+0x0
SystemThread::RunWorker+8f [ @ 0+0x0
SystemThreadDispatcher::ProcessWorker+3c8 [ @ 0+0x0
SchedulerManager::ThreadEntryPoint+236 [ @ 0+0x0
BaseThreadInitThunk+d [ @ 0+0x0
RtlUserThreadStart+21 [ @ 0+0x0

XeSosPkg::wait_info::Publish+138 [ @ 0+0x0
SOS_Scheduler::UpdateWaitTimeStats+30c [ @ 0+0x0
SOS_Task::PostWait+90 [ @ 0+0x0
EventInternal::Wait+1f9 [ @ 0+0x0
FCB::SyncWrite+104 [ @ 0+0x0
FCB::PageWriteInternal+55 [ @ 0+0x0
GlobalFileHeader::CreateInitialPage+395 [ @ 0+0x0
GlobalFileHeader::WriteInitialPage+50 [ @ 0+0x0
FCB::InitHeaderPage+25c [ @ 0+0x0
FileMgr::CreateNewFile+144 [ @ 0+0x0
AsynchronousDiskAction::ExecuteDeferredAction+8f [ @ 0+0x0
AsynchronousDiskWorker::ThreadRoutine+15c [ @ 0+0x0
SubprocEntrypoint+a21 [ @ 0+0x0
SOS_Task::Param::Execute+21e [ @ 0+0x0
SOS_Scheduler::RunTask+a8 [ @ 0+0x0
SOS_Scheduler::ProcessTasks+29a [ @ 0+0x0
SchedulerManager::WorkerEntryPoint+261 [ @ 0+0x0
SystemThread::RunWorker+8f [ @ 0+0x0
SystemThreadDispatcher::ProcessWorker+3c8 [ @ 0+0x0
SchedulerManager::ThreadEntryPoint+236 [ @ 0+0x0
BaseThreadInitThunk+d [ @ 0+0x0
RtlUserThreadStart+21 [ @ 0+0x0
. . .
In this example, we can see from the collected call stacks that a WRITE_COMPLETION wait occurs when the following operations occur (and there are many more, of course):
- Copying the pages from the model database into our new database (call stack 1)
- Creating and formatting the GAM, SGAM, DIFF_MAP, and ML_MAP allocation bitmaps (call stack 2)
- Creating and formatting the PFS allocation byte-maps (call stack 3)
- Creating and formatting the transaction log VLF headers (call stack 4)
- Creating and formatting the data and log file header pages (call stack 5)
How cool is that? :-)
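Once you've got everything you need, don't forget to clean up – something like this will do:

-- Stop and drop the session, and optionally turn the trace flag off
ALTER EVENT SESSION [InvestigateWaits] ON SERVER
STATE = STOP;
GO
DROP EVENT SESSION [InvestigateWaits] ON SERVER;
GO
DBCC TRACEOFF (3656, -1);
GO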
Summary
Now you have a method to investigate any wait type that you’re seeing in your workload. I’ll be posting a bunch of information about wait types and when they occur through the year.
If you have call stacks that you'd like help figuring out, feel free to send me an email and I'll respond within a week or so.
Enjoy!