(Republishing, or using this info in a commercial product/website, is prohibited without permission. All other uses are permitted. If in doubt, please ask.)
This wait type is when a thread is waiting to acquire an Intent Exclusive (IX) lock on a resource and at least one other lock in an incompatible mode has been granted on that resource to a different thread.
General locking information:
- For the complete lock compatibility matrix, see the Books Online page Lock Compatibility.
- For information on the lock hierarchy, see the Books Online page Lock Granularity and Hierarchies.
- For information on some of the lock modes, see the Books Online page Lock Modes.
- For other locking topics, see the Books Online page Locking in the Database Engine.
(Books Online description: “Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.”)
Questions/comments on this wait type? Click here to send Paul an email, especially if you have any information to add to this topic.
Added in SQL Server version:
Removed in SQL Server version:
Extended Events wait_type value:
The map_key value in sys.dm_xe_map_values is 8 in all versions through 2014 RTM. After 2014 RTM, you must check the DMV to get the latest value as some map_key values have changed in later builds.
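As a sketch, you can look up the current value for your own build with a query like the following against the wait_types map:

```sql
-- Look up the map_key for this wait type on the current build, since
-- map_key values can change between versions after 2014 RTM.
SELECT [map_key], [map_value]
FROM sys.dm_xe_map_values
WHERE [name] = N'wait_types'
    AND [map_value] = N'LCK_M_IX';
```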
General guidance around troubleshooting lock waits:
- It is not possible to determine the lock resource from the sys.dm_os_wait_stats output. You can see the resource from sys.dm_os_waiting_tasks (using my script) or by looking at the resource_description field of sys.dm_tran_locks where the request_status is ‘WAIT’.
- You can use the blocked process report to get more detailed information on queries that are waiting for locks for a specified threshold (see here).
- Look to see what is at the head of the blocking chain (i.e. the thread that’s holding the lock that’s blocking everyone) using a script (plenty of them available online – I don’t have a preferred one). What is that thread waiting for? Fixing that wait may help unravel the blocking. For example, a thread may be holding locks and committing a transaction, but there’s a synchronous mirror with a slow I/O subsystem so the mirror log write takes a long time, making the transaction commit take longer, and the locks take longer to be released, causing blocking.
- Look for lock escalation, where an UPDATE transaction has escalated to a table X lock, causing widespread blocking.
- Look for index operations causing table locks, and consider using online index operations (or if already using them, consider the WAIT_AT_LOW_PRIORITY feature in 2014+).
- Look for code that specifies a TABLOCK (causes a table Shared lock) or TABLOCKX (causes a table Exclusive lock) hint.
- Look for application code that will cause locks to be acquired and then waits for user input, or fails to commit a transaction for a long time.
- Consider creating nonclustered indexes to remove row locks from the underlying heap/clustered index.
- Consider using snapshot isolation or read committed snapshot isolation to allow readers to not take S/IS locks and reduce blocking.
- Check that the correct isolation level is being used, as REPEATABLE READ and SERIALIZABLE will hold S/IS locks until the end of a transaction.
- Check for accidental use of the SERIALIZABLE isolation level, e.g. from using distributed transactions or incorrectly scoped .NET TransactionScope objects.
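The DMV queries and the blocked process report configuration described above can be sketched as follows (the 5-second threshold is just an example value; pick one appropriate for your workload):

```sql
-- Find sessions currently waiting on locks, who is blocking them, and the
-- resource involved.
SELECT
    owt.session_id,
    owt.blocking_session_id,
    owt.wait_type,
    owt.wait_duration_ms,
    owt.resource_description
FROM sys.dm_os_waiting_tasks AS owt
WHERE owt.wait_type LIKE N'LCK%';

-- The same resource appears in sys.dm_tran_locks for requests that are waiting.
SELECT request_session_id, resource_type, resource_description, request_mode
FROM sys.dm_tran_locks
WHERE request_status = N'WAIT';

-- Enable the blocked process report with a 5-second threshold.
EXEC sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure N'blocked process threshold (s)', 5;
RECONFIGURE;
```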
Specific guidance for LCK_M_IX waits:
- For an Intent Exclusive lock, the resource could be a page, partition, or table.
- Common blockers are a table X (Exclusive) lock from lock escalation occurring, or a Sch-M (Schema Modification) lock from an index build/rebuild.
- If the blocker is holding a table S lock, investigate why the blocking thread has that lock (e.g. use of a TABLOCK hint or lock escalation in a restrictive isolation level).
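For the two common blockers above, mitigations can be sketched like this (the table and index names are hypothetical, and WAIT_AT_LOW_PRIORITY requires SQL Server 2014 or later):

```sql
-- Rebuild an index online, waiting at low priority so the Sch-M lock
-- doesn't block the workload trying to acquire IX locks.
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable]
REBUILD WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
    (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)));

-- If table-level lock escalation is the blocker, escalation can be limited.
-- Weigh the trade-off: more lower-level locks consume more lock memory.
ALTER TABLE [dbo].[ExampleTable] SET (LOCK_ESCALATION = DISABLE);
```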
Known occurrences in SQL Server (list number matches call stack list):
1) Waiting for an Intent Exclusive lock on a table
2) Waiting for an Intent Exclusive lock on a page (in this case, while inserting a row during an online index operation)
And many more similar call stacks.
Abbreviated call stacks (list number matches known occurrences list):