The Curious Case of… emergency-mode repair

(The Curious Case of… used to be part of our bi-weekly newsletter but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers something interesting one of us encountered when working with a client, doing some testing, or was asked in a random question from the community.)

I had a blog comment question a few days ago that asked why emergency-mode repair requires the database to be in EMERGENCY mode as well as SINGLE_USER mode.

All repair operations that DBCC CHECKDB (and related commands) performs require the database to be in single-user mode, so there's a guarantee that nothing can be changing while the checks and repairs are done. But single-user mode by itself doesn't change what repair does – triggering the special emergency-mode repair behavior requires emergency mode too.

Emergency mode is internally known as ‘bypass recovery’ mode, and is used when a database hasn’t had crash recovery run on it, because of corruption issues or a missing or damaged log file. It allows access to the database in the unrecovered state, which may mean the database has inconsistencies in its structures and/or data because of in-flight transactions that have not been rolled back (as recovery couldn’t run).

When a database is in emergency mode, it’s not possible to create a database snapshot (as that involves running recovery) and so a DBCC CHECKDB without a repair option simply treats the database as if it’s read-only, and runs the consistency checks. When a repair option *is* specified (and only REPAIR_ALLOW_DATA_LOSS is permitted) in emergency mode, that tells DBCC CHECKDB that the drastic, last-resort emergency mode repair should be performed. This will:

  • Run as much crash recovery as possible, skipping errors, unlike real crash recovery which will fail when it encounters an error
  • Delete the transaction log
  • Create a new transaction log
  • Run a regular REPAIR_ALLOW_DATA_LOSS check and repair
  • Bring the database online if possible, albeit maybe with corruptions still in the database

So emergency mode isn’t single-user mode, and vice versa – they’re both required for an emergency-mode repair to run.
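As a sketch, putting it all together for a hypothetical database called MyDB (the database name is made up for illustration; only run this as a genuine last resort, as emergency-mode repair is irreversible):

```sql
-- Both modes are required: EMERGENCY enables the bypass-recovery state,
-- and SINGLE_USER guarantees nothing else can be changing the database.
ALTER DATABASE [MyDB] SET EMERGENCY;
ALTER DATABASE [MyDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- REPAIR_ALLOW_DATA_LOSS is the only repair option permitted in
-- emergency mode, and here it triggers emergency-mode repair.
DBCC CHECKDB (N'MyDB', REPAIR_ALLOW_DATA_LOSS);

-- If the repair succeeds and brings the database online, allow
-- regular access again.
ALTER DATABASE [MyDB] SET MULTI_USER;
```

If either mode is missing, DBCC CHECKDB will not perform the emergency-mode repair described above.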

You can read more about emergency-mode repair in my blog post EMERGENCY-mode repair: the very, very last resort.

The Curious Case of… why a minimally-logged operation doesn’t make a log backup smaller


I had a question in email last week on why a minimally-logged operation doesn’t make a log backup smaller.

The clue to the answer is in the description of the operation: minimally-logged.

When you perform one of a few operations in the bulk-logged or simple recovery models (e.g. offline index build/rebuild, or bulk load where all requirements for minimal logging have been met – see this whitepaper) then SQL Server vastly reduces the amount of transaction log that’s generated. It does this by only logging the allocation of pages and extents for that operation, and not the contents of the pages themselves.
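As a sketch, a minimally-logged bulk load might look like the following, for a hypothetical database MyDB and table MyTable (all names and the file path are made up for illustration; this assumes the table meets the requirements for minimal logging, e.g. it's an empty heap or the load takes a table lock):

```sql
-- Switch to a recovery model that permits minimal logging
ALTER DATABASE [MyDB] SET RECOVERY BULK_LOGGED;
GO

-- TABLOCK is one of the requirements for a bulk load to be
-- minimally logged; only page/extent allocations are logged,
-- not the row contents
BULK INSERT [MyDB].[dbo].[MyTable]
FROM N'C:\Loads\MyTable.dat'
WITH (TABLOCK);
GO

-- Switch back so subsequent operations are fully logged again
ALTER DATABASE [MyDB] SET RECOVERY FULL;
```

Note that while the transaction log generated is small, the next log backup after this load will not be, for the reason explained below.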

Now imagine that a log backup after such an operation backed up *only* the transaction log produced. On restoration of that database, including that log backup, the result would be a table or index consisting of empty pages – which is not allowed in SQL Server.

Bottom line: a log backup after a minimally-logged operation has to also back up the data file pages that were populated by the operation, so they can also be restored during the restore of the log backup. This is why a minimally-logged operation does not result in the next log backup being a lot smaller (but it may be a little bit smaller, because of the difference in size between 8KB data pages and log records containing page images).

PS As a corollary question, you may wonder how the log backup gets those data pages, because a log backup does not start with a checkpoint, and so surely those dirty pages are still just in memory and not on disk? No – because for minimally-logged operations, data file pages are written to disk immediately. This mechanism is called ‘eager writing’ and guarantees that the results of a minimally-logged operation are persisted on disk when the operation completes.

The Curious Case of… too few log records for an offline index rebuild


I had an email question today asking why an offline index rebuild was generating far fewer log records than expected. The person was concerned that the index they were rebuilding in the full recovery model wasn't being rebuilt properly, because instead of seeing at least one log record per row, they were seeing far fewer.

There’s a very simple answer: SQL Server will always try to generate as little log volume as possible.

Let's say you have an index with 100,000 rows, and 100 rows fit on each index leaf page. When doing an offline rebuild in the full recovery model, SQL Server will not log an LOP_INSERT_ROWS log record for each index row, as that's too inefficient, given the 100+ bytes of overhead in each log record. Instead it will log about 1,000 LOP_FORMAT_PAGE log records, where each log record contains a full page image with 100 rows in it. That saves the overhead of 99 log records for each 100 rows inserted into the new index – for a total saving of 99,000 x log-record-overhead in log volume (as it still has to log one LOP_FORMAT_PAGE log record for each of the 1,000 pages created).

And if you think about it carefully, that approach gives just the same ability to recover the transaction in the event of a crash, as the net effect of each approach is the same.
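If you want to see this for yourself, a sketch using the undocumented fn_dblog function (unsupported, so only use it on a test system) for a hypothetical table and index – the names are made up for illustration:

```sql
-- Rebuild the index offline in the full recovery model
ALTER INDEX [MyIndex] ON [dbo].[MyTable] REBUILD;
GO

-- Count the relevant log record types in the active log:
-- expect far more LOP_FORMAT_PAGE records (full page images)
-- than LOP_INSERT_ROWS records for the rebuild
SELECT [Operation], COUNT (*) AS [LogRecordCount]
FROM fn_dblog (NULL, NULL)
WHERE [Operation] IN (N'LOP_FORMAT_PAGE', N'LOP_INSERT_ROWS')
GROUP BY [Operation];
```

Run the query soon after the rebuild, before a log backup or log clearing truncates the portion of the log you want to examine.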

SQL Server does as much as it can to limit what gets logged for efficiency, which is a good thing if you think of all the places that log is used (including backups, restores, log shipping, replication, mirroring, AGs).