The Curious Case of… setting up a server for DBCC CHECKDB

(The Curious Case of… used to be part of our bi-weekly newsletter, but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers something interesting one of us encountered when working with a client, doing some testing, or answering a random question from the community.)

There was a Twitter #sqlhelp question last week from someone who was provisioning a server for offloading DBCC CHECKDB from a production server, and he asked whether to prioritize CPU or memory when configuring the server. My answer was neither!

On this new server, there will be two operations happening: restoring backups and running DBCC CHECKDB. Both of these are typically constrained by I/O speed: write speed for restoring backups, and read speed for DBCC CHECKDB. So in my opinion, the single most important thing to consider when provisioning this server is I/O subsystem performance.
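As a rough sketch, the whole workflow on the offload server is just these two operations back to back (the database name, backup path, and file locations here are hypothetical placeholders):

```sql
-- Minimal sketch of the offload workflow; database name, backup path,
-- and file locations are hypothetical placeholders.

-- Restore the latest full backup copied over from production...
RESTORE DATABASE [SalesDB]
    FROM DISK = N'D:\Backups\SalesDB_Full.bak'
    WITH MOVE N'SalesDB' TO N'E:\Data\SalesDB.mdf',
         MOVE N'SalesDB_log' TO N'F:\Logs\SalesDB_log.ldf',
         REPLACE, STATS = 10;
GO

-- ...then run the consistency checks against the restored copy.
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
```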

And the next thing to consider still isn’t CPU or memory, it’s the size and performance of tempdb. One of the common problems that people run into when running DBCC CHECKDB is that tempdb runs out of space. This is because a lot of the information generated by DBCC CHECKDB spills out into a worktable in tempdb, and that can be a *lot* of information. So running DBCC CHECKDB … WITH ESTIMATEONLY on the databases with the largest tables will give you an idea of how much tempdb space is required. (See this blog post for an explanation of how that command works.) And, of course, the read and write speed of tempdb is critical too.
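For example, here's what that looks like (database name hypothetical); the command returns the estimate without performing any of the actual checks:

```sql
-- Ask DBCC CHECKDB to estimate the tempdb space it will need,
-- without running any of the actual consistency checks.
DBCC CHECKDB (N'SalesDB') WITH ESTIMATEONLY;
-- The result sets report the estimated tempdb space (in KB) needed
-- for the CHECKALLOC and CHECKTABLES phases of the command.
```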

Finally we come to CPU and memory. I’d prioritize CPU over memory, as DBCC CHECKDB will drive CPU very hard, especially on Enterprise Edition where it runs in parallel. You don’t need to go mad on memory, especially from SQL Server 2016 onward, where the Query Processor’s memory grant estimation for DBCC CHECKDB has been fixed so it doesn’t need to steal a huge chunk of buffer pool memory before it can run.
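If you do need to rein in that parallelism on a shared box, here’s a minimal sketch, assuming SQL Server 2014 SP2 or later (where the MAXDOP option for DBCC CHECKDB was added); the database name is again hypothetical:

```sql
-- Cap this DBCC CHECKDB at 4 cores; the MAXDOP option requires
-- SQL Server 2014 SP2 or later. Without it, the command follows the
-- instance-wide 'max degree of parallelism' setting.
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, MAXDOP = 4;
```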

So to summarize, in order of importance for this server, in my opinion: I/O speed, tempdb size and performance, CPU speed, memory.

4 thoughts on “The Curious Case of… setting up a server for DBCC CHECKDB”

  1. Out of practical experience, I totally agree with Paul here. Even network throughput is more important than RAM or CPU. I recently built such a system to offload DBCC CHECKDB from about 10 servers. We are talking about roughly 20 TB across ~1,000 databases. Experimenting with CPU and RAM resulted in only marginal execution-time changes. Using local SSD storage instead of the SAN reduced execution time from 17 hours to 6, and adding a second teamed NIC plus copying the backup files to local storage instead of restoring over the network brought it down another hour.

      1. Yes, copying them to local storage first instead of restoring over the network squeezed roughly one hour out of the process.
