The Curious Case of… why a minimally-logged operation doesn’t make a log backup smaller

(The Curious Case of… used to be part of our bi-weekly newsletter, but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers something interesting one of us encountered when working with a client, doing some testing, or answering a random question from the community.)

I had a question in email last week on why a minimally-logged operation doesn’t make a log backup smaller.

The clue to the answer is in the description of the operation: minimally-logged.

When you perform one of a few operations in the bulk-logged or simple recovery models (e.g. offline index build/rebuild, or bulk load where all requirements for minimal logging have been met – see this whitepaper) then SQL Server vastly reduces the amount of transaction log that’s generated. It does this by only logging the allocation of pages and extents for that operation, and not the contents of the pages themselves.

Now imagine that a log backup after such an operation backed up *only* the transaction log produced. On restoration of that database, including that log backup, the result would be a table or index comprised of empty pages – which is not allowed in SQL Server.

Bottom line: a log backup after a minimally-logged operation has to also back up the data file pages that were populated by the operation, so they can be restored during the restore of the log backup. This is why a minimally-logged operation does not result in the next log backup being a lot smaller (though it may be a little bit smaller, because of the difference in size between complete 8KB data pages and fully-logged log records containing page contents).
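As a rough back-of-the-envelope illustration of why the totals end up so close, consider a bulk load that populates 1,000 pages. The per-record overhead and the one-allocation-record-per-extent figure below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope comparison: why the log backup isn't much smaller
# after a minimally-logged bulk load. All figures are illustrative assumptions.

PAGE_SIZE = 8192           # SQL Server data page: 8KB
NUM_PAGES = 1_000          # pages populated by the bulk load
PAGES_PER_EXTENT = 8       # SQL Server extent: 8 pages
RECORD_OVERHEAD = 100      # rough per-log-record overhead (assumption)

# Fully logged (simplified): roughly one log record carrying each
# page's contents, plus per-record overhead.
full_logging = NUM_PAGES * (PAGE_SIZE + RECORD_OVERHEAD)

# Minimally logged: only small allocation records are generated
# (modeled here as one per extent)...
allocation_log = (NUM_PAGES // PAGES_PER_EXTENT) * RECORD_OVERHEAD
# ...but the log backup must also include the populated data pages.
log_backup_size = allocation_log + NUM_PAGES * PAGE_SIZE

print(f"fully logged log volume     : {full_logging:,} bytes")
print(f"minimally logged log backup : {log_backup_size:,} bytes")
# The two totals are nearly identical - the backup only saves the
# per-record framing of the fully-logged page images.
```

In this toy model the minimally-logged backup comes out only about 1% smaller, which matches the "a little bit smaller" observation above.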

PS As a corollary question, you may wonder how the log backup gets those data pages, because a log backup does not start with a checkpoint, and so surely those dirty pages are still just in memory and not on disk? No – because for minimally-logged operations, data file pages are written to disk immediately. This mechanism is called ‘eager writing’ and guarantees that the results of a minimally-logged operation are persisted on disk when the operation completes.

The Curious Case of… too few log records for an offline index rebuild


I had an email question today asking why there were so few log records for an offline index rebuild the person was doing. They were concerned that the index, which they were rebuilding in the full recovery model, wasn't being rebuilt properly because, instead of seeing at least one log record per index row, they saw far fewer.

There’s a very simple answer: SQL Server will always try to generate as little log volume as possible.

Let’s say you have an index with 100,000 rows, where 100 rows fit on each index leaf page. When doing an offline rebuild in the full recovery model, SQL Server will not log an LOP_INSERT_ROWS log record for each index row, as that’s too inefficient given the 100+ bytes of overhead in each log record. Instead it will log about 1,000 LOP_FORMAT_PAGE log records, where each log record contains a full page image with 100 rows in it. That saves the overhead of 99 log records for each 100 rows inserted into the new index – for a total saving of 99,000 x log-record-overhead bytes of log volume (as it still has to log one LOP_FORMAT_PAGE log record for each of the 1,000 pages created).
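The arithmetic above can be sketched as follows (the ~100-byte per-record overhead is the rough figure from the text, not an exact value):

```python
# Rough log-record comparison for rebuilding a 100,000-row index where
# 100 rows fit on each leaf page. Figures are illustrative, per the text.

NUM_ROWS = 100_000
ROWS_PER_PAGE = 100
NUM_PAGES = NUM_ROWS // ROWS_PER_PAGE   # 1,000 leaf pages
RECORD_OVERHEAD = 100                   # ~100+ bytes of overhead per record

# Per-row logging: one LOP_INSERT_ROWS record per index row.
per_row_records = NUM_ROWS              # 100,000 log records

# Per-page logging: one LOP_FORMAT_PAGE record per full page image.
per_page_records = NUM_PAGES            # 1,000 log records

records_saved = per_row_records - per_page_records
overhead_saved = records_saved * RECORD_OVERHEAD

print(f"log records saved : {records_saved:,}")        # 99,000
print(f"overhead saved    : {overhead_saved:,} bytes") # 9,900,000
```

Either way the same 100,000 rows' worth of data is logged; the saving is purely the per-record framing that 99,000 extra log records would have carried.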

And if you think about it carefully, that approach gives just the same ability to recover the transaction in the event of a crash, as the net effect of each approach is the same.

SQL Server does as much as it can to limit what gets logged for efficiency, which is a good thing if you think of all the places that log is used (including backups, restores, log shipping, replication, mirroring, AGs).

The Curious Case of… why the transaction log has to be zero-initialized


And it’s an auspicious day to be posting, as SQLskills/SYSolutions turns 24 today on 7/24/19!

I had an email from someone last week asking why the transaction log has to be zero-initialized, and I realized I’ve never written about this before, so here goes. (Note: I do explain and demonstrate this in my Pluralsight course on SQL Server: Logging, Recovery, and the Transaction Log.)

It’s all to do with crash recovery. SQL Server knows where crash recovery has to start for a database, but not where it ends – i.e. SQL Server does not persist the ‘most recent LSN’ for a database anywhere. This means it has to work out where the end of the log is (and by end, I mean the most recent log record persisted on disk, not the physical end of the log file).

Some background before the explanation:

  • The log is split up internally into chunks called virtual log files, or more commonly just VLFs.
  • The first time a VLF is activated and used, all used sections of it are stamped with parity bits 64 (the mechanism for this is not important).
  • Eventually the VLF will be marked inactive, and eventually reused. The second time a VLF is activated, all used sections of it are stamped with parity bits 128.
  • Then 64 again, then 128 again, ad infinitum…

Why 64 and 128 as the alternating parity bits, you may ask? “Why not?” is my response. I can’t think of a reason to use a different bit pattern pair.

Back to the question at hand…

The most common case when crash recovery happens is that the log has wrapped around a few times and so the various VLFs have been activated and deactivated a few times. Crash recovery goes to where it must start: either the most recent checkpoint, or the start of the oldest active transaction at the time the most recent checkpoint happened. It follows the sequence of active VLFs until it comes to a point where a section of a VLF has the wrong parity bits. This means a VLF is active and says all valid sections should have parity bits X, and crash recovery finds an old section of the VLF from its previous use that has parity bits Y. That’s the end of the log. (Yes, when a VLF is reactivated, it is not zeroed out, because the overwriting of the old sections with new sections with new parity bits works instead.)

The much rarer case is when the log *hasn’t* yet wrapped around and not all the VLFs in the log have been used. In that case, crash recovery proceeds from the start until it finds a section of an active VLF that is full of zeroes. And that’s the end of the log in that case.
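To make the two cases concrete, here’s a toy simulation of the end-of-log scan. This is a simplified sketch, not SQL Server’s actual on-disk format – the VLF layout, section granularity, and scan logic are all assumptions for illustration:

```python
# Toy model of finding the end of the log. Each VLF is a list of sections;
# each used section carries the parity bits it was stamped with (64 or 128),
# and sections of a never-used, zero-initialized VLF hold 0.
# Simplified illustration only - not SQL Server's real on-disk format.

def find_end_of_log(vlfs, current_parity):
    """Scan sections in order; the end of the log is the first section
    that doesn't carry the current parity bits - either zeroes (a VLF
    that has never been used) or stale parity bits left over from the
    VLF's previous use. Returns (vlf_index, section_index)."""
    for v, sections in enumerate(vlfs):
        for s, parity in enumerate(sections):
            if parity != current_parity:   # zeroes or stale parity bits
                return (v, s)
    return None  # scanned everything without finding the end

# Common case: the log has wrapped. VLF 1's last two sections still hold
# parity 64 from its previous use; the current pass is stamping 128.
wrapped = [[128, 128, 128], [128, 64, 64]]
print(find_end_of_log(wrapped, current_parity=128))   # (1, 1)

# Rare case: the log hasn't wrapped; the tail of VLF 1 is still zeroes.
fresh = [[64, 64, 64], [64, 0, 0]]
print(find_end_of_log(fresh, current_parity=64))      # (1, 1)
```

If the fresh portion of the file were *not* zeroed, the leftover bytes could accidentally look like sections stamped with the current parity bits, and the scan would run right past the true end of the log – which is exactly the failure mode described below.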

New physical portions of the log file have to be zero-initialized because the previous bytes on disk might just happen to look like a section of a VLF with the ‘correct’ parity bits, causing crash recovery to try to use it and most likely crash SQL Server. It’s highly improbable, but the possibility exists.

Hence the log cannot use instant file initialization.

PS: note that in SQL Server 2016, the ‘zeroing’ pattern changed from 0x00 (hexadecimal zero) to 0xc0, for reasons unrelated to what we’re discussing here.