Common bad advice around disaster recovery

Now that I have a little more time on my hands I’ve been jumping back into some of the online forums. Last summer I posted about a few bits of bad advice I’d seen in the forums, but yesterday I was stunned by some of the terrible advice I saw being given out. So, I’d like to post a new and longer collection of some of the bad advice I’ve seen over the last couple of years (and yesterday!), with some reasoning and better advice.

Run DBCC CHECKALLOC then DBCC CHECKDB then DBCC CHECKTABLE

There’s a common misconception around what DBCC CHECKDB actually does. According to the Books Online entry I wrote for SQL Server 2005, it does the following:

  • Runs DBCC CHECKALLOC on the database.
  • Runs DBCC CHECKTABLE on every table and view in the database.
  • Runs DBCC CHECKCATALOG on the database.
  • Validates the contents of every indexed view in the database.
  • Validates the Service Broker data in the database.

This means that the DBCC CHECKALLOC, DBCC CHECKTABLE, or DBCC CHECKCATALOG commands do not have to be run separately from DBCC CHECKDB in SQL Server 2005. Note that in SQL Server 2000, DBCC CHECKCATALOG wasn’t part of DBCC CHECKDB.
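
To make that concrete, here’s a minimal sketch (using a hypothetical database name) of the single check that covers everything, with the now-redundant separate commands shown commented out:

  DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

  -- Redundant on SQL Server 2005 - all of this is already included in the
  -- DBCC CHECKDB above:
  -- DBCC CHECKALLOC (N'MyDatabase');
  -- DBCC CHECKTABLE (N'dbo.SomeTable');  -- repeated for every table and view
  -- DBCC CHECKCATALOG (N'MyDatabase');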

Run DBCC CHECKALLOC to find out what’s wrong…

Scenario: Any kind of corruption, but commonly advised for 823/824 errors. DBCC CHECKALLOC only checks the allocation bitmaps so it won’t discover corruptions in the vast majority of the database. You should always run a DBCC CHECKDB (with the PHYSICAL_ONLY option if need be to save time/resources) to find out the full extent of corruption in the database.
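
For example, a minimal sketch with a hypothetical database name; PHYSICAL_ONLY skips the deeper logical checks but still reads every allocated page, so it will surface the kind of page-level damage behind 823/824 errors:

  -- Full check if you can afford the time and resources:
  DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS;

  -- Lighter-weight check when time/resources are tight:
  DBCC CHECKDB (N'MyDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;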

Just restore from your backups and carry on…

Scenario: corruption. The theory is that restoring from your backups fixes the problem and you’ll be fine. In reality this only fixes the symptom of the problem – the real problem is what caused the corruption in the first place. It’s perfectly OK to restore from a backup to get the database back up and running again quickly, but you have to make sure that you investigate the cause of the corruption and take steps to make sure it won’t happen again – root-cause analysis.
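
To make that concrete, a sketch with hypothetical database and backup file names:

  -- Get back up and running quickly from the most recent clean backups.
  RESTORE DATABASE MyDatabase
      FROM DISK = N'D:\Backups\MyDatabase_Full.bak'
      WITH NORECOVERY, REPLACE;

  RESTORE LOG MyDatabase
      FROM DISK = N'D:\Backups\MyDatabase_Log.trn'
      WITH RECOVERY;

  -- Now go and find out WHY the corruption happened in the first place
  -- (I/O subsystem, drivers, memory, etc.) before it happens again.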

Just run repair…

Most of the time running repair means losing data. Running repair should only really be done when you don’t have any valid backups. I can also see a case where you’ve got corruption and your maximum allowable downtime doesn’t allow you to restore from your backups because your backup strategy doesn’t allow for quick, fine-grained restores. In that case you may be forced to run repair (and then fix your backup strategy), but really it should always be your last resort as you’ll most likely lose data.

Also, if you do end up having to run repair, make a backup of your database first – just in case something goes wrong. You can also wrap the repair statement in an explicit transaction so you can roll back the whole repair operation if you think it’s losing too much data – but then you’re out of options. Once you’re done, don’t forget to do root-cause analysis too.
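
Here’s a rough sketch of that, with a hypothetical database name and backup path:

  -- Safety net first, in case the repair makes things worse.
  BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_PreRepair.bak';
  GO

  -- Repair requires single-user mode.
  ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
  GO

  BEGIN TRANSACTION;
  DBCC CHECKDB (N'MyDatabase', REPAIR_ALLOW_DATA_LOSS);
  -- Look at what the repair did, then run COMMIT TRANSACTION to keep it, or
  -- ROLLBACK TRANSACTION to undo it (but then you're out of options).
  -- Once the transaction is closed, put the database back in multi-user mode:
  --   ALTER DATABASE MyDatabase SET MULTI_USER;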

Only restore from a backup if repair doesn’t work…

This is one I saw yesterday and I couldn’t believe it. The advice was to run all the various types of repair and only resort to restoring from a backup if the repairs didn’t work. I really don’t get this one – why bother taking backups at all if you’re always going to exhaust repair options first? Repair should be your last option, not your first.

Just rebuild your transaction log…

Scenario: log corruption. Rebuilding a transaction log is almost guaranteed to lose data and cause corruption, so you should only do it if you’re in really dire straits. Again, the solution here should be to restore from your backups. If you don’t have backups then you’re going to have to do this. For anything before SQL Server 2005, you should contact Product Support to have them walk you through the correct set of steps. For SQL Server 2005, you should use Emergency Mode Repair. I’ll discuss this in the next post.
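
As a preview, the rough shape of that last-resort sequence on SQL Server 2005 looks like this (hypothetical database name; more detail in the next post):

  ALTER DATABASE MyDatabase SET EMERGENCY;
  GO
  ALTER DATABASE MyDatabase SET SINGLE_USER;
  GO
  -- In EMERGENCY mode, REPAIR_ALLOW_DATA_LOSS will rebuild the damaged log,
  -- which breaks transactional consistency - truly a last resort.
  DBCC CHECKDB (N'MyDatabase', REPAIR_ALLOW_DATA_LOSS);
  GO
  ALTER DATABASE MyDatabase SET MULTI_USER;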

Unbelievably, I’ve seen this recommended when recovery is taking too long – shut the server down, delete the transaction log and then rebuild it. Yikes!

Try running the other repair options before REPAIR_ALLOW_DATA_LOSS

Scenario: DBCC CHECKDB says to use REPAIR_ALLOW_DATA_LOSS but you’d like to avoid it. At the bottom of DBCC CHECKDB’s output is the minimum repair option needed to fix all the corruptions that it found. If it says you need to use REPAIR_ALLOW_DATA_LOSS, then that’s the only option that will fix all the errors. There’s no point trying REPAIR_FAST (which I only left in SQL Server 2005 for backwards compatibility – it does nothing) or REPAIR_REBUILD. What you really should do is restore from your backups, but if you don’t have any then you’re going to have to bite the bullet and run REPAIR_ALLOW_DATA_LOSS.

Drop all the indexes and create them all again…

Scenario: corruption in non-clustered indexes. The theory is that dropping and recreating the indexes will fix the corruption. Sure – it’ll probably get rid of the corruption for a while at least (until whatever caused the corruption happens again). The problem here is that if any of the indexes are enforcing constraints, dropping them means that something could happen that breaks the constraint (e.g. someone inserting a duplicate value). This means that you won’t be able to recreate the constraint-enforcing index again.
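
A contrived example of why this matters, with hypothetical table and index names:

  CREATE TABLE dbo.Customers (CustomerId INT NOT NULL, Email NVARCHAR(256) NOT NULL);
  CREATE UNIQUE NONCLUSTERED INDEX IX_Customers_Email ON dbo.Customers (Email);

  -- The 'fix': drop the corrupt index...
  DROP INDEX IX_Customers_Email ON dbo.Customers;

  -- ...and while it's gone, a duplicate sneaks in.
  INSERT dbo.Customers VALUES (1, N'someone@example.com');
  INSERT dbo.Customers VALUES (2, N'someone@example.com');

  -- This now fails with a duplicate-key error, so the uniqueness guarantee
  -- can't be re-established until the duplicates are cleaned up.
  CREATE UNIQUE NONCLUSTERED INDEX IX_Customers_Email ON dbo.Customers (Email);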

At the very least try simply rebuilding the indexes. If that doesn’t work, you may need to resort to rebuilding them using DBCC CHECKDB and the REPAIR_REBUILD option. Regular index rebuilds can read the data from the index using a logical-order scan. However, if the index b-tree structure is corrupted then that won’t work. DBCC CHECKDB always forces the query processor to ask for an allocation-order scan to avoid corruption problems.
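
In SQL Server 2005 syntax, a sketch with hypothetical database, table, and index names:

  -- Try a straight rebuild first; this reads the existing index structure,
  -- so it can fail if the b-tree itself is what's corrupt.
  ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

  -- If the rebuild fails because of the corruption, fall back to REPAIR_REBUILD,
  -- which does not lose data (the database must be in single-user mode).
  ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
  GO
  DBCC CHECKDB (N'MyDatabase', REPAIR_REBUILD);
  GO
  ALTER DATABASE MyDatabase SET MULTI_USER;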

Detach then re-attach your database…

Scenario: suspect database. The theory is that recovery will run again and somehow work the next time and fix whatever caused the database to go suspect in the first place. Here’s the bad part – once you detach a suspect database, it’s almost inevitable that the attach process will fail because of the original problem. This means you’re then stuck with a detached database and you need to resort to hacks to get the database attached again.

The database either went suspect because:

  1. the transaction log ran out of space (either the drive ran out of space or the log was not set to autogrow)
  2. a page was corrupted that was required for transaction rollback/recovery
  3. an internal system operation (e.g. allocating a page) came across a corrupt page (e.g. an allocation bitmap)

None of these things can be fixed by a simple detach/attach. In the first case, you need to give the log more space. Either grow the log file and bring the database online, or detach the database and move it to a new location with more space. See http://support.microsoft.com/kb/224071/ for how to move databases around using detach/attach.
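
For the out-of-space case, a sketch with hypothetical file names and sizes (assuming the drive now has the space):

  -- Grow the existing log file (or add a second log file on a drive that has
  -- free space), then bring the database back online so recovery can complete.
  ALTER DATABASE MyDatabase
      MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8192MB);
  GO
  -- Alternatively:
  -- ALTER DATABASE MyDatabase
  --     ADD LOG FILE (NAME = N'MyDatabase_log2',
  --                   FILENAME = N'E:\SQLLogs\MyDatabase_log2.ldf', SIZE = 8192MB);
  GO
  ALTER DATABASE MyDatabase SET ONLINE;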

The last two cases require restoration from backups or running some form of repair – no amount of rerunning recovery will fix a corruption.

Options for creating a corrupt database…

Scenario: people want to test their disaster recovery plans and so need to know how to create a corrupt database. The best way to do this is to use an already corrupt database – see my recent post that provides a corrupt 2000 and 2005 database as well as some things to try with them. One of the most common suggestions I’ve seen for corrupting a SQL Server 2000 database is to manually delete something from the sysindexes or sysobjects tables. I’ve also described how to do it using a hex editor:

Shut down the database so the data files aren’t locked (don’t detach the database, because if you corrupt the ‘wrong’ page you may not be able to attach it again). Pick an offset more than, say, 100 pages into the file (at least 819200 bytes), but make sure it’s aligned on an 8192-byte boundary (a page boundary). This avoids critical metadata pages and allocation bitmaps so that you’ll be able to start the database and run DBCC CHECKDB on it. Write a few bytes of zeroes into the file at the chosen offset and you’re almost guaranteed some page-header corruption errors.

However, the very worst piece of advice I’ve ever seen on the Internet was another method that I don’t recommend. The advice was to go into the data center (scary, as this shows the poster works for a fairly large company), go up to one of the hard drives, and flick the power switch on and off a few times. Wow! Not only will that cause corruptions but it will also fry the hard drive…

Summary

The bottom line is that you need to be very careful when following anyone’s advice on the Internet. Many people out there DO know exactly what they’re talking about, but many also do not and may help you get more deeply into trouble.
