(Look in the Misconceptions blog category for the rest of the month’s posts and check out the 60-page PDF with all the myths and misconceptions blog posts collected together when you join our Insider list, plus my online Myths and Misconceptions training course on Pluralsight.)
It’s the last day of our class here in Boston and I’m teaching all day today, so I need to bang out my Debunk-A-Myth-A-Day blog post before heading down from our hotel room. Incidentally, we’ve had fellow MVP Aaron Bertrand (Twitter|blog) in the class this week. He’s a fine, very knowledgeable fellow and has added a lot to the discussions from his experiences. Earlier in the week he wrote a blog post describing his views on the class, and how hard it can be to take a week off production work.
But I digress (as usual).
Myth #2: DBCC CHECKDB causes blocking because it takes locks by default.
In 7.0 and before, DBCC CHECKDB (and the rest of the family of consistency-checking commands) was a nasty mess of nested-loop C code that took table locks (and the nested loops made the algorithm essentially order-n-squared, for the programmers amongst you). This was not good, and so…
In 2000, a guy called Steve Lindell (who’s still on the SQL team) rewrote DBCC CHECKDB to get a consistent view of the database using transaction log analysis. Essentially DBCC CHECKDB would prevent log truncation and then, at the end of reading through the (transactionally inconsistent, because of concurrent user transactions) database, run crash recovery on the transaction log inside itself. Basically, there was a brand new reproduction of the recovery code, but inside DBCC CHECKDB. I helped write a bunch of the log analysis code – tortuous, but fun. No, more tortuous. And there were some little problems with it – like the possibility of false failures… “if it gives errors, run it again and see if you get the same errors”. Occasionally it would take table SCH_S locks (schema-stability locks) that would only block schema modifications of the table, not reads or writes. The log analysis code was overall not good, and so…
In 2005, a guy called Paul Randal rewrote DBCC CHECKDB again to use database snapshots to get the consistent view of the database (as a database snapshot automatically provides a transactionally-consistent, point-in-time view of the database). No more nasty transaction log analysis code, no more locks *at all* – as accesses to the source database of a database snapshot never take locks – the buffer pool manages all the possibilities of race conditions.
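As an aside, you can use the same mechanism explicitly yourself: instead of letting DBCC CHECKDB create its hidden snapshot, create your own database snapshot (so you control where the sparse files go) and check that. A minimal sketch – the database name, logical file name, and snapshot path below are hypothetical placeholders for illustration:

```sql
-- Create an explicit snapshot of a hypothetical SalesDB, placing the
-- sparse file on a volume you choose (hidden snapshots don't let you).
CREATE DATABASE SalesDB_Snap ON
    (NAME = SalesDB_Data, FILENAME = N'D:\Snapshots\SalesDB_Data.ss')
AS SNAPSHOT OF SalesDB;
GO

-- Checking the snapshot checks the source database as of the snapshot's
-- creation point, taking no locks on SalesDB itself.
DBCC CHECKDB (SalesDB_Snap) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO

-- Clean up the snapshot when done.
DROP DATABASE SalesDB_Snap;
```

When DBCC CHECKDB detects it is running against a database snapshot, it skips creating its own hidden one, so this gives you the same consistent, lock-free check with control over the disk-space side effects.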
You can read more on the internals of this stuff (both 2000 and 2005+) in the following posts:
- CHECKDB From Every Angle: Complete description of all CHECKDB stages
- CHECKDB From Every Angle: Why would CHECKDB run out of space?
- Database snapshots – when things go wrong
- Issues around DBCC CHECKDB and the use of hidden database snapshots
- Do transactions rollback when DBCC CHECKDB runs?
- Diskeeper 10 Intelliwrite corruption bug
Now, in all versions, if you use the WITH TABLOCK option, DBCC CHECKDB will take locks to guarantee a transactionally consistent view, but I don’t recommend doing that. The first thing it will try to do is grab an exclusive database lock, which in the vast majority of cases will fail (it only waits 20 seconds) because of concurrent database connections.
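To make the difference concrete, here’s a quick sketch of the two invocations (the database name is a hypothetical placeholder):

```sql
-- Default: uses a hidden database snapshot for consistency;
-- no locks are taken on the source database.
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS;
GO

-- WITH TABLOCK: skips the snapshot and takes locks instead, starting
-- with an exclusive database lock that it only waits 20 seconds for.
-- On a database with concurrent connections this will usually just fail.
DBCC CHECKDB (SalesDB) WITH TABLOCK;
```

If you’re tempted to use WITH TABLOCK to avoid the snapshot’s disk usage, the explicit-snapshot approach is usually the better trade-off.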
In 2000, the fact that it prevented log truncation could cause some issues – like the log having to grow – and in 2005, there can be issues around the use of database snapshots (see links above).
But by default, DBCC CHECKDB has run essentially blocking-free since SQL Server 2000, and completely lock-free since SQL Server 2005.