ASYNC_NETWORK_IO



Description:

This wait type occurs when SQL Server has sent some data to a client through TDS and is waiting for the client to acknowledge that it has consumed the data. It can also show up with transaction replication if the Log Reader Agent job is running slowly for some reason, and when using MARS or doing inbound BCP loads.

(Books Online description: “Occurs on network writes when the task is blocked behind the network. Verify that the client is processing data from the server.”)


Added in SQL Server version:

Pre-2005/2005

Removed in SQL Server version:

N/A

Extended Events wait_type value:

Maps to the NETWORK_IO map_value in sys.dm_xe_map_values (thanks to Jonathan’s post here).

The map_key value in sys.dm_xe_map_values is 99 in all versions through 2014 RTM. After 2014 RTM, you must check the DMV to get the latest value as some map_key values have changed in later builds.
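
As a quick check, the lookup itself is just a query against the DMV; a minimal sketch, filtering on the wait_types map and the NETWORK_IO map_value mentioned above:

  -- Look up the current map_key for the NETWORK_IO map_value on this build.
  SELECT [map_key], [map_value]
  FROM sys.dm_xe_map_values
  WHERE [name] = N'wait_types'
      AND [map_value] = N'NETWORK_IO';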

Other information:

This wait type is never indicative of a problem with SQL Server, and the majority of the time it is nothing to do with the network either (it’s common to see advice stating that this is always a network issue). A simple test for network issues is to measure the ping time between the SQL Server and the client/application/web server; if the ping time is close to the average wait time, then the wait is because of the network (which may just be normal network latency, not necessarily a problem).
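
To get the average wait time for that comparison, you can calculate it from the aggregate wait statistics; a minimal sketch using sys.dm_os_wait_stats (note the average covers everything since the wait statistics were last cleared, so it may be skewed by old activity):

  -- Average ASYNC_NETWORK_IO wait time, to compare against the ping time.
  SELECT
      [wait_time_ms],
      [waiting_tasks_count],
      [wait_time_ms] * 1.0 / NULLIF ([waiting_tasks_count], 0) AS [avg_wait_time_ms]
  FROM sys.dm_os_wait_stats
  WHERE [wait_type] = N'ASYNC_NETWORK_IO';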

There is usually nothing that you can do with your SQL Server code that will affect this wait type. There are a few causes of this on the client side, including:

  • The client code is doing what is known as RBAR (Row-By-Agonizing-Row), where only one row at a time is pulled from the results and processed, instead of caching all the results and then immediately replying to SQL Server and proceeding to process the cached rows.
  • The client code is running on a server that has performance issues, and so the client code is running slowly.
  • The client code is running on a VM on a host that is configured incorrectly or is overloaded, such that the VM doesn’t get to run properly (e.g. it runs slowly or suffers from co-scheduling issues).

On the SQL Server side, the only causes I know of are using MARS (Multiple Active Result Sets) with large result sets, and sometimes using BCP to import data (try increasing the TDS packet size in that case).

You can demonstrate this wait type easily by running a query with a large result set through SSMS on the SQL Server itself, with no network involved.
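
As a hedged sketch of such a demo (the cross join and TOP value are arbitrary, just to produce enough rows for the results grid to fall behind):

  -- Connection 1: stream a large result set to the SSMS results grid.
  SELECT TOP (5000000) a.*
  FROM master.sys.all_columns AS a
  CROSS JOIN master.sys.all_columns AS b;

  -- Connection 2: watch the wait occurring while connection 1 is still consuming rows.
  SELECT [session_id], [wait_type], [wait_duration_ms]
  FROM sys.dm_os_waiting_tasks
  WHERE [wait_type] = N'ASYNC_NETWORK_IO';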

You can identify which client servers are causing these waits by using Extended Events to watch for the wait type occurring and gathering the sqlserver.client_app_name and/or sqlserver.client_hostname actions.
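
A minimal sketch of such an event session, assuming the 2014-RTM-and-earlier map_key of 99 for NETWORK_IO (substitute the value from sys.dm_xe_map_values on your build) and arbitrary session and file names:

  -- Capture ASYNC_NETWORK_IO waits along with the client application and host names.
  CREATE EVENT SESSION [NetworkIOWaits] ON SERVER
  ADD EVENT sqlos.wait_info (
      ACTION (sqlserver.client_app_name, sqlserver.client_hostname)
      WHERE [wait_type] = 99 -- NETWORK_IO map_key; check sys.dm_xe_map_values on your build
      AND [opcode] = 1       -- wait completed, so the duration is populated
      AND [duration] > 0)
  ADD TARGET package0.event_file (SET filename = N'NetworkIOWaits.xel');
  GO

  ALTER EVENT SESSION [NetworkIOWaits] ON SERVER STATE = START;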

Some other things you can try around networking:

  • Look for incorrect NIC settings (e.g. TCP Chimney Offload enabled) with the help of your network/system administrator. Whether some settings should be enabled or not depends on the underlying OS version. See this post and this post for some more details.
  • Consider increasing the TDS packet size (carefully) – see this post for more details, and the configuration sketch after this list.
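
A hedged sketch of checking and changing the instance-wide packet size with sp_configure (the 8192 value is just an example; the packet size can also be set per connection by the client, and any change should be tested carefully):

  -- 'network packet size (B)' is an advanced option, so expose it first.
  EXEC sp_configure N'show advanced options', 1;
  RECONFIGURE;

  -- Show the current setting (the default is 4096 bytes).
  EXEC sp_configure N'network packet size (B)';

  -- Example only: raise the default TDS packet size to 8192 bytes.
  EXEC sp_configure N'network packet size (B)', 8192;
  RECONFIGURE;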

Known occurrences in SQL Server (list number matches call stack list):

These call stacks (and many more similar ones) are all to do with SQL Server communicating via TDS (Tabular Data Stream) to client code/services.

Abbreviated call stacks (list number matches known occurrences list):

  1. SOS_Task::PostWait+90
    EventInternal::Wait+25c
    FWaitForNewPacket+e4
    CNetConnection::FWaitForNewPacket+27
    CNetConnection::FReadTdsPacket+6a
    CNetByteStream::ReadTDSPacketFromNetwork+114
    CNetByteStream::EsetBeginNewStream+c1
    process_commands+261
    SOS_Task::Param::Execute+21e
    SOS_Scheduler::RunTask+a8
  2. SOS_Task::PostWait+90
    EventInternal::Wait+25c
    WaitForReadDataDirect+fe
    FReadDataDirect+6d
    CNetConnection::FReadTdsPacket+201
    CNetByteStream::ReadTDSPacketFromNetwork+114
    CNetByteStream::EsetBeginNewStream+c1
    process_commands+261
    SOS_Task::Param::Execute+21e
    SOS_Scheduler::RunTask+a8
  3. SOS_Task::PostWait+90
    EventInternal::Wait+25c
    WaitOnWriteAsyncToFinish+148
    write_data+168
    flush_buffer+f2
    CTds74::SendRowImpl+795
    0x00007FFB5D117ED7
    CXStmtQuery::ErsqExecuteQuery+471
    CXStmtSelect::XretExecute+2f7
    CMsqlExecContext::ExecuteStmts<1,1>+400
    CMsqlExecContext::FExecute+a33
    CSQLSource::Execute+866
  4. SOS_Task::PostWait+90
    EventInternal::Wait+25c
    FWaitForNewPacket+e4
    CNetConnection::FWaitForNewPacket+27
    CNetConnection::FReadTdsPacket+6a
    CNetByteStream::ReadTDSPacketFromNetwork+114
    CNetByteStream::MoveToNextBuffer+7e
    CNetByteStream::FGetNextUnicodeFragment+34
    CNetByteStream::PWCStrmConvertToWCharStream+60
    CLanguageExecEnv::GetCommandInput+75
    process_request+63f
    process_commands+51c
    SOS_Task::Param::Execute+21e
    SOS_Scheduler::RunTask+a8