The Curious Case of… too few log records for an offline index rebuild

(The Curious Case of… used to be part of our bi-weekly newsletter but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers something interesting one of us encountered when working with a client, doing some testing, or answering a random question from the community.)

I had an email question today asking why there were too few log records for an offline index rebuild the person was doing. They were concerned that the index they were rebuilding in the full recovery model wasn’t being rebuilt properly, because instead of seeing at least one log record per row, they were seeing far fewer.

There’s a very simple answer: SQL Server will always try to generate as little log volume as possible.

Let’s say you have an index with 100,000 rows, and 100 rows fit on each index leaf page. When doing an offline rebuild in the full recovery model, SQL Server will not log an LOP_INSERT_ROWS log record for each index row, as that’s too inefficient given the 100+ bytes of overhead in each log record. Instead it will log about 1,000 LOP_FORMAT_PAGE log records, where each log record contains a full page image with 100 rows in it. That saves the overhead of 99 log records for each 100 rows inserted into the new index, for a total saving of 99,000 x log-record-overhead in log volume (it still has to log an LOP_FORMAT_PAGE log record for each of the 1,000 pages created anyway, which is why the saving is 99 rather than 100 records per page).

And if you think about it carefully, that approach gives just the same ability to recover the transaction in the event of a crash, as the net effect of each approach is the same.
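If you want to see this for yourself, the (undocumented but widely-used) fn_dblog function shows the log records in the active portion of the log. Here’s a minimal sketch, assuming a test database in the full recovery model and a hypothetical table name:

  -- Run in a test database in the FULL recovery model;
  -- dbo.ExampleTable is a hypothetical name
  ALTER INDEX ALL ON dbo.ExampleTable REBUILD;
  GO

  -- fn_dblog (undocumented) dumps the active portion of the log;
  -- group by operation to see what the rebuild generated
  SELECT [Operation], COUNT (*) AS [LogRecordCount]
  FROM fn_dblog (NULL, NULL)
  GROUP BY [Operation]
  ORDER BY [LogRecordCount] DESC;

You should see roughly one LOP_FORMAT_PAGE log record per leaf page created, rather than one LOP_INSERT_ROWS log record per index row.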

SQL Server does as much as it can to limit what gets logged for efficiency, which is a good thing if you think of all the places the log is used (including backups, restores, log shipping, replication, mirroring, AGs).

The Curious Case of… why the transaction log has to be zero-initialized


And it’s an auspicious day to be posting, as SQLskills/SYSolutions turns 24 today on 7/24/19!

I had an email from someone last week asking why the transaction log has to be zero-initialized, and I realized I’ve never written about this before, so here goes. (Note: I do explain and demonstrate this in my Pluralsight course on SQL Server: Logging, Recovery, and the Transaction Log.)

It’s all to do with crash recovery. SQL Server knows where crash recovery has to start for a database, but not where it ends – i.e. SQL Server does not persist the ‘most recent LSN’ for a database anywhere. This means it has to work out where the end of the log is (and by end, I mean the most recent log record persisted on disk, not the physical end of the log file).

Some background before the explanation:

  • The log is split up internally into chunks called virtual log files, or more commonly just VLFs.
  • The first time a VLF is activated and used, all used sections of it are stamped with parity bits 64 (the mechanism for this is not important).
  • Eventually the VLF will be marked inactive, and eventually reused. The second time a VLF is activated, all used sections of it are stamped with parity bits 128.
  • And then 64 again.
  • And then 128 again.
  • Ad infinitum…

Why 64 and 128 as the alternating parity bits, you may ask? My response is: why not? I can’t think of a reason to use a different bit-pattern pair.
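If you’re curious, you can see the parity values for yourself on SQL Server 2016 SP2 and later using the sys.dm_db_log_info DMV, as in this sketch:

  -- List the VLFs for the current database; vlf_parity flips between
  -- 64 and 128 each time a VLF is reused
  SELECT
      vlf_sequence_number,
      vlf_active,
      vlf_size_mb,
      vlf_parity
  FROM sys.dm_db_log_info (DB_ID ())
  ORDER BY vlf_sequence_number;

On older versions, DBCC LOGINFO shows the same information in its Parity column.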

Back to the question at hand…

The most common case when crash recovery happens is that the log has wrapped around a few times and so the various VLFs have been activated and deactivated a few times. Crash recovery goes to where it must start: either the most recent checkpoint, or the start of the oldest active transaction at the time the most recent checkpoint happened. It follows the sequence of active VLFs until it comes to a point where a section of a VLF has the wrong parity bits. This means a VLF is active and says all valid sections should have parity bits X, and crash recovery finds an old section of the VLF from its previous use that has parity bits Y. That’s the end of the log. (Yes, when a VLF is reactivated, it is not zeroed out, because the overwriting of the old sections with new sections with new parity bits works instead.)

The much rarer case is when the log *hasn’t* yet wrapped around and not all the VLFs in the log have been used. In that case, crash recovery proceeds from the start until it finds a section of an active VLF that is full of zeroes, and that’s the end of the log.

New physical portions of the log file have to be zero-initialized because the previous bits and bytes on disk might just happen to look like a section of a VLF with the ‘correct’ parity bits, causing crash recovery to try to use it and most likely crash SQL Server. It’s highly improbable, but the possibility exists.

Hence the log cannot use instant file initialization.
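You can actually watch the zero-initialization happen: trace flag 3004 reports file zeroing operations, and trace flag 3605 sends that output to the error log. A sketch, with hypothetical database and file names:

  -- TF 3004 reports zeroing operations; TF 3605 routes the output to the error log
  DBCC TRACEON (3004, 3605, -1);
  GO

  -- Grow the log file of a test database (names and size are hypothetical,
  -- and the new size must be larger than the current size)
  ALTER DATABASE TestDB MODIFY FILE (NAME = N'TestDB_log', SIZE = 2GB);
  GO

  -- Look for the 'Zeroing ...' messages in the current error log
  EXEC xp_readerrorlog 0, 1, N'Zeroing';
  GO

  DBCC TRACEOFF (3004, 3605, -1);

You’ll see zeroing messages for log file growths, but not for data file growths if instant file initialization is enabled.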

PS: note that in SQL Server 2016, the ‘zeroing’ pattern changed from 0x00 (hexadecimal zero) to 0xc0, for reasons unrelated to what we’re discussing here.

The Curious Case of… faster disks causing more WRITELOG waits


I was contacted last week by someone asking how faster disks could be causing more WRITELOG waits. They were seeing lots of these waits, with an average wait time of 18ms. The log was stored on a RAID-1 array of locally-attached spinning disks in the server. They figured that by moving the log to a RAID-1 array of SSDs, they’d reduce the WRITELOG wait time and get better workload throughput.

They did so and got better performance, but were very surprised to now see WRITELOG as the most frequent wait type on the server, even though the average wait time was less than 1ms, and asked me to explain.

Let me start by saying this is not unusual, and I’ve seen this sequence of events many times with clients and students.

A WRITELOG wait occurs when a log block is being flushed to disk, and the two most common triggers for a flush are that the log block is full (60KB) or that a transaction committed. If a workload consists of lots of very small transactions, then a reduction in wait time for the log block flush from 18ms down to less than 1ms is very likely to result in higher throughput. This means more transactions happening, hence more log block flushes when those transactions commit, and so *more* WRITELOG waits.
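You can put numbers on this by checking WRITELOG in sys.dm_os_wait_stats before and after such a change; a simple sketch:

  -- Count and average duration of WRITELOG waits since the wait
  -- statistics were last cleared (or the server last restarted)
  SELECT
      waiting_tasks_count,
      wait_time_ms,
      1.0 * wait_time_ms / NULLIF (waiting_tasks_count, 0) AS avg_wait_time_ms
  FROM sys.dm_os_wait_stats
  WHERE wait_type = N'WRITELOG';

  -- Optionally clear the stats first to measure a specific workload window
  -- DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);

After a move to faster log storage, you’d expect waiting_tasks_count to go up (more transactions committing per second) even as avg_wait_time_ms drops.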

Bottom line: What was confusing to them was that performance tuning is usually about reducing the incidence of waits and/or the individual wait times. This is about the only example I can think of where you do something that increases performance and the wait type you were concerned about becomes more prevalent.