The Curious Case of… log generated during a DROP TABLE

(The Curious Case of… used to be part of our bi-weekly newsletter, but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers something interesting one of us encountered when working with a client, doing some testing, or in a random question from the community.)

Twice last week I was in discussions about the amount of log that’s generated during a DROP TABLE of a very large table. In both cases people were concerned about the amount of log being sent to a synchronous Availability Group replica and whether it would slow things down.

Hopefully you all know that it’s a myth that DROP TABLE and TRUNCATE TABLE are non-logged operations. If you didn’t know that, read my blog post on sqlperformance.com that explains the deferred drop mechanism. Both operations are fully logged, and will generate quite a bit of transaction log.

The bulk of the log that’s generated comes from having to log the deallocation of extents and the pages within them. For each extent, a bit must be cleared in the corresponding GAM page and IAM page, and all 8 pages in the extent must be marked as deallocated in the corresponding PFS page (turning off the 0x40 bit in each PFS byte). So that’s three log records per allocated extent.

I whipped up a quick test case so I could show you the log records:

SELECT
[Current LSN],
[Operation],
[Context],
[Log Record Length],
[Description]
FROM fn_dblog (null, null);

Current LSN             Operation       Context  Length  Description
----------------------  --------------  -------- ------- ------------------------------------------------------
.
.
000001eb:00000010:016a  LOP_SET_BITS    LCX_IAM  72                
000001eb:00000010:016b  LOP_MODIFY_ROW  LCX_PFS  88      Deallocated 0001:000026f0;Deallocated 0001:000026f1;Deallocated 0001:000026f2;Deallocated 0001:000026f3;Deallocated 0001:000026f4;Deallocated 0001:000026f5;Deallocated 0001:000026f6;Deallocated 0001:000026f7
000001eb:00000010:016c  LOP_SET_BITS    LCX_GAM  72      Deallocated 1 extent(s) starting at page 0001:000026f0
.
.

And that set of three log records repeats for every deallocated extent.

One of the discussions was around dropping a 20TB table. 20TB = 20 x 1024 x 1024 x 1024 x 1024 = 21,990,232,555,520 bytes. One extent is 65,536 bytes, so 20TB is 21990232555520 / 65536 = 335,544,320 extents. At 72 + 88 + 72 = 232 logged bytes per extent, that means dropping a 20TB table will involve generating at least 335544320 x 232 = 77,846,282,240 bytes = 72.5GB of transaction log, or roughly 0.35% of the size of the table. There are a smattering of other log records generated every so often, so to be safe I’d say 0.35-0.4%.
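You can reproduce the arithmetic above directly in T-SQL if you want to plug in your own table size (a sketch; the 72 + 88 + 72 bytes per extent come from the log records shown earlier):

```sql
-- Sketch: estimate minimum log volume generated by dropping a table,
-- based on three log records (IAM, PFS, GAM) per deallocated extent.
DECLARE @tableBytes        bigint = CAST (20 AS bigint) * 1024 * 1024 * 1024 * 1024; -- 20TB
DECLARE @extentBytes       bigint = 65536;           -- one extent = 8 x 8KB pages
DECLARE @logBytesPerExtent bigint = 72 + 88 + 72;    -- LCX_IAM + LCX_PFS + LCX_GAM

SELECT
    @tableBytes / @extentBytes                                            AS [Extents],
    (@tableBytes / @extentBytes) * @logBytesPerExtent                     AS [MinLogBytes],
    ((@tableBytes / @extentBytes) * @logBytesPerExtent) / 1073741824.0    AS [MinLogGB];
```

For 20TB this returns 335,544,320 extents and roughly 72.5GB of minimum log, matching the numbers above.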

That’s not all generated as one transaction (the whole point of deferred drop is to do the operation in the background in many small transactions) and it’s not generated instantly (as the deferred drop background task is single-threaded), so these log records will be intermingled with everything else going on in the database, and sent over to the synchronous AG replica just like other transactions. So while there is a bunch of additional log being generated, I wouldn’t expect it to cause big problems.

And of course if you have that table partitioned, you can do it in smaller steps by repeatedly switching out partitions and dropping them one at a time.
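The partition-switch pattern looks roughly like this (a sketch only; the table names and partition numbers are hypothetical, and the staging table must match the partitioned table’s schema, indexes, and filegroup for the SWITCH to succeed):

```sql
-- Move one partition's data into a standalone staging table (a metadata-only
-- operation), then drop the staging table to deallocate just those extents.
ALTER TABLE [MyBigTable] SWITCH PARTITION 1 TO [MyBigTable_Staging];
DROP TABLE [MyBigTable_Staging];
-- Repeat for each partition, pacing the drops to spread out the log generation.
```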

Bottom line: DROP or TRUNCATE of a table requires about 0.35-0.4% of the size of the table to be generated in the log.
