TechEd demo: corruptions fatal to DBCC CHECKDB

This blog post describes the demo “1 – Fatal Errors” from my Corruption Survival Techniques session, which I presented at various conferences in 2008. The links to the scripts and databases to use are in this blog post.

The aim of this demo is to show that sometimes a database is so corrupt that DBCC CHECKDB just cannot run on it. In that case, there’s no way to force DBCC CHECKDB to get past the fatal corruption and so there’s no way to run a repair either – you’re looking at restoring from a backup or, at worst, extracting as much data as possible into a new database.

Let’s look at a couple of examples. Extract and restore the DemoFatalCorruption1 and DemoFatalCorruption2 databases, and the FatalErrors.sql script. What do we get from running DBCC CHECKDB on DemoFatalCorruption1 (lines 47-49 in the script)?

DBCC CHECKDB (DemoFatalCorruption1)
WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO


Msg 8928, Level 16, State 6, Line 1
Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown): Page (1:71) could not be processed. See other errors for details.
CHECKDB found 1 allocation errors and 0 consistency errors not associated with any single object.
Msg 8906, Level 16, State 1, Line 1
Page (1:19) in database ID 8 is allocated in the SGAM (1:3) and PFS (1:1), but was not allocated in any IAM. PFS flags 'MIXED_EXT ALLOCATED 0_PCT_FULL'.
Msg 2575, Level 16, State 1, Line 1
The Index Allocation Map (IAM) page (1:71) is pointed to by the next pointer of IAM page (0:0) in object ID 15, index ID 1, partition ID 983040, alloc unit ID 983040 (type In-row data), but it was not detected in the scan.
Msg 7965, Level 16, State 2, Line 1
Table error: Could not check object ID 15, index ID 1, partition ID 983040, alloc unit ID 983040 (type In-row data) due to invalid allocation (IAM) page(s).
Msg 8906, Level 16, State 1, Line 1
Page (1:71) in database ID 8 is allocated in the SGAM (1:3) and PFS (1:1), but was not allocated in any IAM. PFS flags 'IAM_PG MIXED_EXT ALLOCATED 0_PCT_FULL'.
Msg 8939, Level 16, State 5, Line 1
Table error: Object ID 15, index ID 1, partition ID 983040, alloc unit ID 983040 (type In-row data), page (1:71). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Msg 8939, Level 16, State 6, Line 1
Table error: Object ID 15, index ID 1, partition ID 983040, alloc unit ID 983040 (type In-row data), page (1:71). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
Msg 8939, Level 16, State 5, Line 1
Table error: Object ID 15, index ID 1, partition ID 983040, alloc unit ID 983040 (type In-row data), page (1:71). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Msg 8939, Level 16, State 6, Line 1
Table error: Object ID 15, index ID 1, partition ID 983040, alloc unit ID 983040 (type In-row data), page (1:71). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
CHECKDB found 5 allocation errors and 3 consistency errors in table 'sys.syshobts' (object ID 15).
Msg 7995, Level 16, State 1, Line 1
Database 'DemoFatalCorruption1': consistency errors in system catalogs prevent further DBCC checkdb processing.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'ALLOCATION' (object ID 99).
CHECKDB found 6 allocation errors and 4 consistency errors in database 'DemoFatalCorruption1'.


A bunch of errors that look like regular DBCC CHECKDB output – but if you look carefully near the end of the output you’ll see error 7995 stating that the system catalogs are so corrupt that DBCC CHECKDB can’t continue. Notice also that there’s nothing at the end of the output stating what the minimum repair level is to fix the errors – because repair cannot be run on this database.
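
Since repair can’t even run here, the only way forward is a restore. Here’s a minimal sketch of what that looks like – the backup file names and paths below are made up for illustration; you’d substitute whatever backups you actually have:

-- Hypothetical restore sequence: restore the most recent full backup,
-- then roll forward the log backups (file names/paths are placeholders).
RESTORE DATABASE [DemoFatalCorruption1]
FROM DISK = N'D:\SQLBackups\DemoFatalCorruption1_Full.bak'
WITH REPLACE, NORECOVERY;
GO

RESTORE LOG [DemoFatalCorruption1]
FROM DISK = N'D:\SQLBackups\DemoFatalCorruption1_Log.trn'
WITH RECOVERY;
GO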

The second example is even worse (running lines 53-55 in the script):

DBCC CHECKDB (DemoFatalCorruption2)
WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO

Msg 211, Level 23, State 51, Line 1
Possible schema corruption. Run DBCC CHECKCATALOG.


In this case, the corruption is so bad that DBCC CHECKDB didn’t even get a chance to terminate gracefully – the metadata subsystem in the Query Processor just blew away the whole command. Running DBCC CHECKCATALOG as the error message states doesn’t do any better – it just prints the same error! (I didn’t write that error message :-)

So – just because DBCC CHECKDB completes doesn’t mean it completed successfully. Make sure you always check the output.
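
One way to check, beyond eyeballing the output: the database boot page records the last time DBCC CHECKDB finished without finding any corruption. Here’s a minimal sketch using the undocumented DBCC DBINFO command – the temp-table capture is just one way to pull out the dbi_dbccLastKnownGood field:

-- Capture the DBCC DBINFO output (undocumented command) and pull out the
-- last time CHECKDB completed with zero errors.
CREATE TABLE #dbinfo (ParentObject VARCHAR (255), [Object] VARCHAR (255), Field VARCHAR (255), [Value] VARCHAR (255));
INSERT INTO #dbinfo
EXEC ('DBCC DBINFO (''DemoFatalCorruption1'') WITH TABLERESULTS');

-- dbi_dbccLastKnownGood only moves forward when a CHECKDB finds no corruption.
SELECT [Value] AS [LastCleanCheckDb]
FROM #dbinfo
WHERE Field = 'dbi_dbccLastKnownGood';

DROP TABLE #dbinfo;
GO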


Conference Questions Pot-Pourri #10: Shrinking the database before taking a backup

Here’s a question I got from someone who attended our database maintenance workshop at PASS last week (paraphrased):

I attended your pre-conference session on database maintenance and found it to be very informative. From what you told us though, I think I need to change my nightly backup procedure. I like to get my databases back to as small a size as possible before backing them up, so I run the following commands to do this before taking the full database backup: BACKUP LOG [mydbname] WITH NO_LOG and then DBCC SHRINKDATABASE ('mydbname'). Could you help me with a better way of doing this? We’re on SQL Server 2005.

And here’s the answer I sent back:

How large is the database? And how long must you keep the backups around? If the cumulative size of the backups takes up a large proportion of your available storage space (and we’re talking more than just a single direct-attached 100+GB drive), then it may be worth compressing the backups – otherwise you’re likely causing yourself more trouble than it’s worth.

By doing BACKUP LOG WITH NO_LOG you’re effectively throwing away log records and removing the possibility of doing any kind of point-in-time, or up-to-the-second recovery (see BACKUP LOG WITH NO_LOG – use, abuse, and undocumented trace flags to stop it). If you’re running in the FULL recovery model, and you don’t care about either of these features, then you should switch to the SIMPLE recovery model. If you really want to be in FULL, don’t ever use WITH NO_LOG.
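
To make that concrete, here’s a minimal sketch using the [mydbname] placeholder from the question (the backup path is made up):

-- If point-in-time recovery really isn't needed, switch to SIMPLE instead of using WITH NO_LOG.
ALTER DATABASE [mydbname] SET RECOVERY SIMPLE;
GO

-- If you stay in FULL, take regular log backups rather than throwing the log away.
BACKUP LOG [mydbname] TO DISK = N'D:\SQLBackups\mydbname_Log.trn';
GO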

Truncating the log doesn’t change the amount of transaction log that a full backup has to include. The full backup will back up any log it requires to enable the restored database to be a transactionally consistent copy of the database. See Debunking a couple of myths around full database backups and More on how much transaction log a full backup includes.
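
If you want to see this for yourself, the backup history in msdb records the range of log each full backup included – a minimal sketch:

-- first_lsn/last_lsn show the span of log each full backup had to include
-- so the restored copy is transactionally consistent.
SELECT database_name, backup_start_date, first_lsn, last_lsn, checkpoint_lsn
FROM msdb.dbo.backupset
WHERE type = 'D' -- 'D' = full database backup
ORDER BY backup_start_date DESC;
GO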

Doing a DBCC SHRINKDATABASE (exactly the same operation as a database auto-shrink) will cause massive index fragmentation, and will also cause file-system fragmentation of the data files, as they will likely need to grow again after you’ve squeezed all the space out of them. See Auto-shrink – turn it OFF! for more details on the effects.
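
While you’re at it, make sure auto-shrink is off too – a minimal sketch, again with the placeholder database name:

-- Stop the database being repeatedly shrunk and regrown behind your back.
ALTER DATABASE [mydbname] SET AUTO_SHRINK OFF;
GO

-- Quick check for any databases that still have auto-shrink enabled.
SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;
GO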

If you’re really concerned about backup sizes and space is at a premium, I recommend using a 3rd-party backup compression tool so you’re not affecting the actual database. Remember that SQL Server 2008+ has native backup compression too – see my blog post here for more details.
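
For example, with native backup compression (again using the placeholder database name and a made-up path; note that in SQL Server 2008 backup compression is an Enterprise-only feature):

-- Compress the backup file itself – the database files are untouched.
BACKUP DATABASE [mydbname]
TO DISK = N'D:\SQLBackups\mydbname_Full.bak'
WITH COMPRESSION, INIT, CHECKSUM;
GO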

Hope this helps.

Finished my seminal work on DBCC CHECKDB

Wow – that was tough but *very* fulfilling. As you may know, Kimberly and I are each writing chapters for Kalen's next book – SQL Server 2008 Internals. Well, I *just* finished the DBCC CHECKDB chapter – it's 26,000 words and 69 pages, describing all the algorithms in depth and all the corruption errors that can be reported in SQL Server 2008. It was really fun to write, but I'm glad all that stuff's down on paper now – I can make room in my head for a bunch of other stuff.

I can't wait to see it in print next Spring!

(Ok – with 5 blog posts today, I think I broke my record. Time to retire for the night before I'm tempted to break it even more…)