Description:
The simplest explanation of this wait type is that there are parallel plans running. Much has been written about this wait type and what to do about it, and unfortunately there’s a lot of bad advice out there. When an exchange iterator (e.g. Repartition Streams) or parallel scan runs, there will be at least one CXPACKET wait (for the control thread), and possibly others if there is a skewed distribution of work.
Do not simply lower the server MAXDOP setting in an attempt to reduce or remove these waits! Please read the detailed explanations and advice in the links below.
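As a first sanity check, it can help to see whether CXPACKET even accounts for a meaningful share of the instance's overall wait time. The query below is a minimal sketch against the aggregate wait statistics DMV (it is not one of the scripts from the links below); remember the counters are cumulative since the last restart or manual clear of the statistics.

-- Sketch only: how much of total wait time is CXPACKET/CXCONSUMER.
-- Counters are cumulative since the last restart (or DBCC SQLPERF clear).
SELECT
    [wait_type],
    [waiting_tasks_count],
    [wait_time_ms],
    CONVERT (DECIMAL (5, 2), 100.0 * [wait_time_ms] /
        (SELECT SUM ([wait_time_ms]) FROM sys.dm_os_wait_stats)) AS [PercentOfTotalWaits]
FROM sys.dm_os_wait_stats
WHERE [wait_type] IN (N'CXPACKET', N'CXCONSUMER')
ORDER BY [wait_time_ms] DESC;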
(Books Online description: “Occurs with parallel query plans when waiting to synchronize the Query Processor Exchange Iterator, and when producing and consuming rows. If waiting is excessive and cannot be reduced by tuning the query (such as adding indexes), consider adjusting the Cost Threshold for Parallelism or lowering the Max Degree of Parallelism (MaxDOP).
Note: Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3, CXPACKET only refers to waiting to synchronize the Exchange Iterator and producing rows. Threads consuming rows are tracked separately in the CXCONSUMER wait type. If the consumer threads are too slow, the Exchange Iterator buffer may become full and cause CXPACKET waits.
Note: In Azure SQL Database and Azure SQL Managed Instance, CXPACKET only refers to waiting on threads producing rows. Exchange Iterator synchronization is tracked separately in the CXSYNC_PORT and CXSYNC_CONSUMER wait types. Threads consuming rows are tracked separately in the CXCONSUMER wait type.”)
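For reference, the two server-level settings the Books Online text mentions can be inspected with sp_configure, as sketched below. The value in the commented-out statement is an illustrative placeholder, not a recommendation; as stated above, do not change these settings just to make CXPACKET waits go away.

-- Both settings are advanced options, so expose them for viewing first.
EXEC sp_configure N'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure N'cost threshold for parallelism';  -- view the current value
EXEC sp_configure N'max degree of parallelism';       -- view the current value

-- Illustrative placeholder only; do not apply without analysis and testing:
-- EXEC sp_configure N'cost threshold for parallelism', 50;
-- RECONFIGURE;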
Added in SQL Server version:
Pre-2005/2005
Removed in SQL Server version:
N/A
Extended Events wait_type value:
The map_key value in sys.dm_xe_map_values is 187 in 2008 and 2008 R2, and 191 in 2012 and 2014 RTM. After 2014 RTM, you must check the DMV to get the latest value as some map_key values have changed in later builds.
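Rather than relying on the numbers above, a minimal sketch of looking up the current value on whatever build you are running:

-- Sketch only: find the wait_type map_key for CXPACKET on this build.
SELECT [map_key], [map_value]
FROM sys.dm_xe_map_values
WHERE [name] = N'wait_types'
    AND [map_value] = N'CXPACKET';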
Other information:
The knee-jerk reaction to CXPACKET waits is to try to remove or reduce them. This is incorrect, as they may be a perfectly normal part of your workload. You must investigate where they are occurring and determine whether the parallelism is valid, and if it is, whether it is working normally or whether skewed parallelism is happening.
I wrote two detailed articles on understanding and troubleshooting CXPACKET waits on sqlperformance.com.
Note that in 2016 SP2, 2017 RTM CU3, and Azure SQL Database, the CXPACKET wait has been split so that benign, non-actionable parallelism waits now show up as CXCONSUMER waits. This means there should be fewer CXPACKET waits, and those that remain are more likely to be indicative of a performance problem.
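To see where CXPACKET waits are occurring right now, and to tie them back to specific queries and threads, you can query the waiting tasks DMV. The query below is a minimal sketch assuming VIEW SERVER STATE permission (it is not one of the scripts from the articles mentioned above); each exec_context_id is one of the parallel threads, so large differences in wait_duration_ms across threads of the same session can point to skewed parallelism.

-- Sketch only: current CXPACKET waits, per session and parallel thread.
SELECT
    [owt].[session_id],
    [owt].[exec_context_id],
    [owt].[wait_duration_ms],
    [owt].[wait_type],
    [owt].[resource_description],
    [eqp].[query_plan]
FROM sys.dm_os_waiting_tasks AS [owt]
INNER JOIN sys.dm_exec_requests AS [er]
    ON [owt].[session_id] = [er].[session_id]
OUTER APPLY sys.dm_exec_query_plan ([er].[plan_handle]) AS [eqp]
WHERE [owt].[wait_type] = N'CXPACKET'
ORDER BY [owt].[session_id], [owt].[exec_context_id];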
Known occurrences in SQL Server (list number matches call stack list):
The stacks below are all examples of parallel threads from various query plan operators. There are many, many more occurrences with similar stacks.
Abbreviated call stacks (list number matches known occurrences list):
- SOS_Task::PostWait+90
EventInternal::Wait+2c6
EventInternal::WaitAllowPrematureWakeup+c7
CXPipe::Pull+156
CXTransLocal::AllocateBuffers+64
CQScanXProducerNew::AllocateBuffers+31
CQScanXProducerNew::GetRowHelper+293
CQScanXProducerNew::GetRow+15
FnProducerOpen+57
FnProducerThread+851
SubprocEntrypoint+a59
SOS_Task::Param::Execute+21e
SOS_Scheduler::RunTask+a8
- SOS_Task::PostWait+90
EventInternal::Wait+2c6
EventInternal::WaitAllowPrematureWakeup+c7
XchngWait+b5
CXPort::Open+147
CQScanXProducerNew::Open+da
FnProducerOpen+44
FnProducerThread+851
SubprocEntrypoint+a59
SOS_Task::Param::Execute+21e
SOS_Scheduler::RunTask+a8
- SOS_Task::PostWait+90
EventInternal::Wait+2c6
EventInternal::WaitAllowPrematureWakeup+c7
CXPacketList::RemoveHead+e9
CXPipe::ReceivePacket+7a
CXTransLocal::ReceiveBuffers+2d
CQScanExchangeNew::GetRowFromProducer+5f
CQScanExchangeNew::GetRowHelper+66
CQScanSortNew::BuildSortTable+2a6
CQScanSortNew::OpenHelper+c0
CQScanNew::OpenHelper+41
CQScanStreamAggregateNew::Open+30
CQScanNLJoinNew::Open+24
CQScanSortNew::BuildSortTable+3a
CQScanSortNew::OpenHelper+c0
CQScanNew::OpenHelper+41
CQScanXProducerNew::Open+c8
FnProducerOpen+44
FnProducerThread+851
SubprocEntrypoint+a59
SOS_Task::Param::Execute+21e
SOS_Scheduler::RunTask+a8
- SOS_Task::PostWait+90
EventInternal::Wait+2c6
EventInternal::WaitAllowPrematureWakeup+c7
XchngWait+b5
CXPort::Open+147
CQScanExchangeNew::Open+104
CQScanHash::ConsumeBuild+fd
CQScanHash::Open+8e
CQScanSortNew::BuildSortTable+3a
CQScanSortNew::OpenHelper+c0
CQScanNew::OpenHelper+41
CQScanXProducerNew::Open+c8
FnProducerOpen+44
FnProducerThread+851
SubprocEntrypoint+a59
SOS_Task::Param::Execute+21e
SOS_Scheduler::RunTask+a8