The Curious Case of… why a minimally-logged operation doesn’t make a log backup smaller

(The Curious Case of… used to be part of our bi-weekly newsletter but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers something interesting one of us encountered when working with a client, doing some testing, or was asked in a random question from the community.)

I had a question in email last week on why a minimally-logged operation doesn’t make a log backup smaller.

The clue to the answer is in the description of the operation: minimally-logged.

When you perform one of a few operations in the bulk-logged or simple recovery models (e.g. an offline index build/rebuild, or a bulk load where all requirements for minimal logging have been met – see this whitepaper), SQL Server vastly reduces the amount of transaction log that’s generated. It does this by logging only the allocation of pages and extents for the operation, and not the contents of the pages themselves.

Now imagine that a log backup after such an operation backed up *only* the transaction log produced. On restoration of that database, including that log backup, the result would be a table or index consisting of empty pages – which is not allowed in SQL Server.

Bottom line: a log backup after a minimally-logged operation has to also back up the data file pages that were populated by the operation, so they can also be restored during the restore of the log backup. This is why a minimally-logged operation does not result in the next log backup being a lot smaller (but it may be a little bit smaller, because of the difference in size of 8K data pages vs. log records containing pages).
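To see why the backup doesn’t shrink much, here’s a back-of-the-envelope model. All the numbers are illustrative assumptions (the ~100-byte log record overhead, 100 rows per page, and so on are round figures, not measured values) – the point is just that the minimally-logged backup still has to carry the populated 8K pages:

```python
# Rough model (illustrative numbers only): compare what the next log backup
# must contain after a fully-logged vs. a minimally-logged bulk load.

PAGE_SIZE = 8192           # SQL Server data page size in bytes
LOG_RECORD_OVERHEAD = 100  # approximate per-log-record header cost (assumption)

rows = 100_000
rows_per_page = 100
pages = rows // rows_per_page           # 1,000 data pages
row_size = PAGE_SIZE // rows_per_page   # ~81 bytes of payload per row

# Fully logged: one log record per row, each carrying the row contents.
full_log_volume = rows * (LOG_RECORD_OVERHEAD + row_size)

# Minimally logged: only allocations are logged (modeled here as roughly
# one record per page), but the log backup must also include every data
# file page the operation populated.
minimal_log_volume = pages * LOG_RECORD_OVERHEAD
minimal_backup_payload = minimal_log_volume + pages * PAGE_SIZE

print(f"fully logged backup payload     ~ {full_log_volume:,} bytes")
print(f"minimally logged backup payload ~ {minimal_backup_payload:,} bytes")
```

The minimally-logged backup is smaller (it avoids the per-row log record overhead), but it’s in the same ballpark, because the bulk of the data still travels in the backup – as 8K pages instead of log records.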

PS As a corollary question, you may wonder how the log backup gets those data pages, because a log backup does not start with a checkpoint, and so surely those dirty pages are still just in memory and not on disk? No – because for minimally-logged operations, data file pages are written to disk immediately. This mechanism is called ‘eager writing’ and guarantees that the results of a minimally-logged operation are persisted on disk when the operation completes.

A month till our next live, online classes!

We have two live, online classes coming up in October and November that round out our training schedule for 2019. These classes will be delivered live via WebEx over three days (roughly 12-13 hours of content including Q&As; about the same as two full workshop days!) and attendees will have lifetime access to the recordings following the end of the class.

Rather than have people try to watch a full day of training at their computer for one or more days, the class will run from 10am to 3pm PST each day, with two 90-minute teaching sessions, each followed by Q&A, and a lunch break. We chose to do this, and to spread the class over a few days, so the times work pretty well for those in the Americas, Africa, and Europe. We also realize that this is complex content, so we want to give attendees time to digest each day’s material, plus extensive Q&A.

The classes we have are:

  • IEVLT: Immersion Event on Very Large Tables: Optimizing Performance and Availability through Partitioning
    • October 29-31, taught by Kimberly
  • IECS: Immersion Event on Columnstore Indexes
    • November 12-14, taught by Jonathan

They’re US$495 each or US$395 for past attendees (email me for how to register).

You can get all the logistical, registration, and curriculum details by drilling down from our main schedule page.

We hope to see you there!

The Curious Case of… too few log records for an offline index rebuild


I had an email question today asking why there were so few log records for an offline index rebuild the person was doing. They were concerned that the index they were rebuilding in the full recovery model wasn’t being rebuilt properly because, instead of seeing at least one log record per row, they were seeing far fewer.

There’s a very simple answer: SQL Server will always try to generate as little log volume as possible.

Let’s say you have an index with 100,000 rows, and 100 rows fit on each index leaf page. When doing an offline rebuild in the full recovery model, SQL Server will not log an LOP_INSERT_ROWS log record for each index row, as that’s too inefficient, given the 100+ bytes of overhead in each log record. Instead it will log about 1,000 LOP_FORMAT_PAGE log records, where each log record contains a full page image with 100 rows in it. That saves the overhead of 99 log records for each 100 rows inserted into the new index – for a total saving of 99,000 x log-record-overhead in log volume (as it would still have to log the LOP_FORMAT_PAGE log records for the page header of each of the 1,000 pages created).
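The arithmetic above can be checked with a quick sketch. The ~100-byte log record overhead is the rough figure from the paragraph, not an exact value:

```python
# Sanity-check the log-record savings for the 100,000-row rebuild example.

rows = 100_000
rows_per_page = 100
pages = rows // rows_per_page   # 1,000 leaf pages in the new index

# Per-row logging would need one LOP_INSERT_ROWS record per row.
per_row_records = rows          # 100,000 records

# Page-image logging needs one LOP_FORMAT_PAGE record per page instead.
per_page_records = pages        # 1,000 records

records_saved = per_row_records - per_page_records  # 99 saved per 100 rows
log_record_overhead = 100                           # bytes, approximate
overhead_saved = records_saved * log_record_overhead

print(f"log records saved: {records_saved:,}")
print(f"overhead saved:   ~{overhead_saved:,} bytes")
```

So the rebuild logs roughly 1,000 records instead of 100,000, saving on the order of 10 MB of pure log-record overhead in this example – before even counting the per-record payload.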

And if you think about it carefully, that approach gives just the same ability to recover the transaction in the event of a crash, as the net effect of each approach is the same.

SQL Server does as much as it can to limit what gets logged for efficiency, which is a good thing if you think of all the places that log is used (including backups, restores, log shipping, replication, mirroring, AGs).