Last week I was sent an email question about the cause of LOGMGR_RESERVE_APPEND waits, and in Monday’s Insider newsletter I wrote a short explanation. It’s a very unusual wait to see as the highest wait on a server, and in fact it’s very unusual to see it at all.

It happens when a thread is generating a log record and needs to write it into a log block, but there’s no space in the log to do so. The thread first tries to grow the log, and if it fails, and the database is in the simple recovery mode, then it waits for 1 second to see if log clearing/truncation can happen in the meantime and free up some space. (Note that when I say ‘simple recovery mode’, this also includes a database in full or bulk_logged, but where a full backup has not been taken – i.e. the database is operating in what’s called pseudo-simple.)

It’s the fact that the database needs to be in the simple recovery mode and have no space available that makes this wait type very unusual to see.

Here’s an example call stack on SQL Server 2014 (captured using this mechanism):


Here’s a scenario that shows it happening. First I’ll create the database with a fixed size log, set it to simple recovery mode, and create a table that will generate large log records for inserts:

CREATE DATABASE [Company] ON PRIMARY (
    NAME = N'Company_data',
    FILENAME = N'D:\SQLskills\Company_data.mdf')
LOG ON (
    NAME = N'Company_log',
    FILENAME = N'C:\SQLskills\Company_log.ldf',
    SIZE = 2MB,
    FILEGROWTH = 0MB);
GO

ALTER DATABASE [Company] SET RECOVERY SIMPLE;
GO

USE [Company];
GO

CREATE TABLE [BigRows] ([c1] INT IDENTITY, [c2] CHAR (8000) DEFAULT 'a');
GO

And then in two other windows, run the following code:

USE [Company];
GO

WHILE (1 = 1)
    INSERT INTO [BigRows] DEFAULT VALUES;
GO

And within a few seconds, you’ll see LOGMGR_RESERVE_APPEND waits happening (using my waits script). Here’s an example (with a few columns removed for brevity):

WaitType                       WaitCount Percentage AvgWait_S AvgRes_S AvgSig_S Help/Info URL
------------------------------ --------- ---------- --------- -------- -------- -------------------------------------------------------------------
LOGMGR_RESERVE_APPEND          26        50.78      0.9847    0.9847   0.0000   https://www.sqlskills.com/help/waits/LOGMGR_RESERVE_APPEND
PREEMPTIVE_OS_FLUSHFILEBUFFERS 954       22.14      0.0116    0.0116   0.0000   https://www.sqlskills.com/help/waits/PREEMPTIVE_OS_FLUSHFILEBUFFERS
WRITELOG                       131625    21.63      0.0001    0.0001   0.0000   https://www.sqlskills.com/help/waits/WRITELOG
PAGEIOLATCH_UP                 5841      3.37       0.0003    0.0003   0.0000   https://www.sqlskills.com/help/waits/PAGEIOLATCH_UP

So if you ever see these waits, look for databases using the simple recovery mode where the log is set to have zero or very tiny autogrowth.


Code to show rolled back transactions after a crash

In Monday’s Insider newsletter I discussed an email question I’d been sent about how to identify the transactions that had rolled back because of a crash, and I said I’d blog some code to do it.

First of all you need to know the time of the crash. We can’t get this exactly from SQL Server (unless SQL Server decided to shut itself down for some reason, like tempdb corruption), but we can easily get the time that SQL Server restarted, which is good enough: we just need a time that’s after the transactions started before the crash, and before those transactions finished rolling back after the crash. We can get the startup time from the sqlserver_start_time column in the output from sys.dm_os_sys_info.

Then we can search in the transaction log, using the fn_dblog function, for LOP_BEGIN_XACT log records from before the crash point that have a matching LOP_ABORT_XACT log record after the crash point, and with the same transaction ID. This is easy because for LOP_BEGIN_XACT log records, there’s a Begin Time column, and for LOP_ABORT_XACT log records (and, incidentally, for LOP_COMMIT_XACT log records), there’s an End Time column in the TVF output.
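To make the matching logic concrete (a begin record before the crash point, an abort record after it, with the same transaction ID), here’s a Python sketch over some made-up log records. The field names and values are invented purely for illustration; fn_dblog returns far more columns than this:

```python
from datetime import datetime

# The crash point: any time after the crash and before rollback finished.
boot_time = datetime(2017, 1, 18, 17, 10, 0)

# Fabricated, simplified log records for illustration only.
records = [
    {"op": "LOP_BEGIN_XACT", "xact_id": "0000:00000374",
     "name": "FirstTransaction", "begin": datetime(2017, 1, 18, 17, 9, 36)},
    {"op": "LOP_BEGIN_XACT", "xact_id": "0000:00000375",
     "name": "SecondTransaction", "begin": datetime(2017, 1, 18, 17, 9, 40)},
    {"op": "LOP_ABORT_XACT", "xact_id": "0000:00000374",
     "end": datetime(2017, 1, 18, 17, 10, 5)},
    {"op": "LOP_COMMIT_XACT", "xact_id": "0000:00000375",
     "end": datetime(2017, 1, 18, 17, 9, 45)},
]

# Transactions whose Begin Time is before the crash point...
began_before = {r["xact_id"]: r["name"] for r in records
                if r["op"] == "LOP_BEGIN_XACT" and r["begin"] < boot_time}

# ...that have a matching abort record with an End Time after it.
aborted_after = {r["xact_id"] for r in records
                 if r["op"] == "LOP_ABORT_XACT" and r["end"] > boot_time}

rolled_back = {xid: name for xid, name in began_before.items()
               if xid in aborted_after}
print(rolled_back)  # {'0000:00000374': 'FirstTransaction'}
```

The proc at the end of the post does exactly this, but in T-SQL against the real fn_dblog output.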

And there’s a trick you need to use: to get the fn_dblog function to read log records from before the log clears (by the checkpoints that crash recovery does, in the simple recovery model, or by log backups, in other recovery models), you need to enable trace flag 2537. Now, if do all this too long after crash recovery runs, the log may have overwritten itself and so you won’t be able to get the info you need, but if you’re taking log backups, you could restore a copy of the database to the point just after crash recovery has finished, and then do the investigation.

After that, the tricky part is matching what those transactions were doing back to business operations that your applications were performing. If you don’t name your transactions, that’s going to be pretty hard, as all you’ve got are the generic names that SQL Server gives transactions (like INSERT, DELETE, DROPOBJ). Whatever the reason you might want this information, your applications should be written so they gracefully handle transaction failures and leave the database in a consistent state (as far as your business rules are concerned – of course SQL Server leaves the database in a transactionally-consistent state after a crash).

I’ve written some code and encapsulated it in a proc, sp_SQLskillsAbortedTransactions, which is shown in full at the end of the post. To use it, you go into the context of the database you’re interested in, and just run the proc. It takes care of enabling and disabling the trace flag.

Here’s an example of a crash situation and using the proc.

First I’ll create a table and start a transaction:

USE [master];
GO

IF DATABASEPROPERTYEX (N'Company', N'Version') > 0
    DROP DATABASE [Company];
GO

CREATE DATABASE [Company];
GO

USE [Company];
GO

CREATE TABLE [test] ([c1] INT, [c2] INT, [c3] INT);
INSERT INTO [test] VALUES (0, 0, 0);
GO

BEGIN TRAN FirstTransaction;
INSERT INTO [test] VALUES (1, 1, 1);
GO

Now in a second window, I’ll start another transaction, and force the log to flush to disk (as I haven’t generated enough log to have the current log block automatically flush to disk):

USE [Company];

BEGIN TRAN SecondTransaction;
INSERT INTO [Test] VALUES (2, 2, 2);

EXEC sp_flush_log;

And in a third window, I’ll force a crash by shutting the instance down without a checkpoint:

SHUTDOWN WITH NOWAIT;
After restarting the instance, I can use this code to run my proc:

USE [Company];

EXEC sp_SQLskillsAbortedTransactions;
GO

Begin Time               Transaction Name   Started By       Transaction ID
------------------------ ------------------ ---------------- --------------
2017/01/18 17:09:36:190  FirstTransaction   APPLECROSS\Paul  0000:00000374
2017/01/18 17:09:40:600  SecondTransaction  APPLECROSS\Paul  0000:00000375

Cool eh?

Here’s the code – enjoy!

/*
  File:     sp_SQLskillsAbortedTransactions.sql

  Summary:  This script cracks the transaction log and shows which
            transactions were rolled back after a crash

  SQL Server Versions: 2012 onwards

  Written by Paul S. Randal, SQLskills.com
  (c) 2017, SQLskills.com. All rights reserved.
  For more scripts and sample code, check out 

  You may alter this code for your own *non-commercial* purposes. You may
  republish altered code as long as you include this copyright and give due
  credit, but you must obtain prior permission before blogging this code.
*/
USE [master];
GO

IF OBJECT_ID (N'sp_SQLskillsAbortedTransactions') IS NOT NULL
    DROP PROCEDURE [sp_SQLskillsAbortedTransactions];
GO

CREATE PROCEDURE sp_SQLskillsAbortedTransactions
AS
BEGIN
    SET NOCOUNT ON;

    DBCC TRACEON (2537);

    DECLARE @BootTime   DATETIME;
    DECLARE @XactID     CHAR (13);

    SELECT @BootTime = [sqlserver_start_time] FROM sys.dm_os_sys_info;

    IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
        WHERE [name] = N'##SQLskills_Log_Analysis')
        DROP TABLE [##SQLskills_Log_Analysis];

    -- Get the list of started and rolled back transactions from the log
    SELECT
        [Operation],
        [Begin Time],
        [Transaction Name],
        SUSER_SNAME ([Transaction SID]) AS [Started By],
        [Transaction ID],
        [End Time],
        0 AS [RolledBackAfterCrash]
    INTO ##SQLskills_Log_Analysis
    FROM fn_dblog (NULL, NULL)
    WHERE ([Operation] = 'LOP_BEGIN_XACT' AND [Begin Time] < @BootTime)
        OR ([Operation] = 'LOP_ABORT_XACT' AND [End Time] > @BootTime);

    -- For each transaction that started before the crash, mark it
    -- if it has a matching abort record from after the crash
    DECLARE [LogAnalysis] CURSOR FAST_FORWARD FOR
    SELECT
        [Transaction ID]
    FROM ##SQLskills_Log_Analysis
    WHERE [Operation] = 'LOP_BEGIN_XACT';

    OPEN [LogAnalysis];

    FETCH NEXT FROM [LogAnalysis] INTO @XactID;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF EXISTS (
            SELECT [End Time] FROM ##SQLskills_Log_Analysis
            WHERE [Operation] = 'LOP_ABORT_XACT' AND [Transaction ID] = @XactID)
        UPDATE ##SQLskills_Log_Analysis SET [RolledBackAfterCrash] = 1
            WHERE [Transaction ID] = @XactID
            AND [Operation] = 'LOP_BEGIN_XACT';

        FETCH NEXT FROM [LogAnalysis] INTO @XactID;
    END;

    CLOSE [LogAnalysis];
    DEALLOCATE [LogAnalysis];

    SELECT
        [Begin Time],
        [Transaction Name],
        [Started By],
        [Transaction ID]
    FROM ##SQLskills_Log_Analysis
    WHERE [RolledBackAfterCrash] = 1;

    DBCC TRACEOFF (2537);

    DROP TABLE ##SQLskills_Log_Analysis;
END;
GO

EXEC sys.sp_MS_marksystemobject [sp_SQLskillsAbortedTransactions];
GO

-- USE [Company]; EXEC sp_SQLskillsAbortedTransactions;

Summer 2017 classes in Bellevue open for registration

I’ve just released our second set of classes for 2017 for registration!

Our classes in July/August will be in Bellevue, WA:

  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • July 31-August 4
  • IEPTO2: Immersion Event on Performance Tuning and Optimization – Part 2
    • August 7-11

Note that there is a full set of classes in Chicago in April/May, and that there will be NO classes in Europe in 2017.

You can get all the logistical, registration, and curriculum details by drilling down from our main schedule page.

We hope to see you there!

2016 review: the year by the numbers

The last post of the year! It’s been a really excellent year all round and time for my traditional post counting down some of the numbers that have been my life this year.

  • 125,402: the number of miles I flew on United
  • 34,085: my current tweet total (up 772 from 2015)
  • 13,434: the number of subscribers to our Insider mailing list (up 493 from 2015)
  • 12,168: the number of people who follow my Twitter ramblings (up 1,325 from 2015)
  • 8,798: the number of emails I sent (down 3,025 from 2015 – this is a good thing!)
  • 1,779: the number of books (real ones) that I own (up 176 from 2015)
  • 998: the number of books I own but haven’t read yet (up 130 from 2015)
  • 286: the number of SQL Server wait types I have documented in my Wait Types Library
  • 184: the number of nights away from home (up 36 from 2015, and all with Kimberly, so not *too* bad)
  • 149: the total number of hours of online training we have available on Pluralsight
  • 127: the number of dives I will have completed by 12/31, all in Indonesia, taking my total to 653
  • 91: the number of minutes of my longest dive this year
  • 60: the number of days in Immersion Events and conferences
  • 50.41: the percentage of time we were away from home (which is why we call it our vacation home!)
  • 49: the number of Pluralsight courses we have available
  • 45: the number of books I read (see this post)
  • 42: the number of flights this year (same as last year)
  • 42: the answer to the question of life, the universe, and everything!
  • 30: the number of different places we slept apart from our house and on planes
  • 25: the number of SQLskills blog posts, including this one
  • 22: the number of airports I flew through this year
  • 12: the number of new bird species I saw, taking my total to 511
  • 12: the number of monthly magazines I subscribe to
  • 9: the number of years I’ve been married to Kimberly
  • 8: the number of different octopus species we saw in Indonesia (coconut, algae, reef, blue-ring, starry-night, mimic, wonderpus, long-arm)
  • 7: the number of countries we visited this year
  • 7: the number of full-time SQLskills employees, all of whom are fabulous and indispensable
  • 2: the number of new airports I flew through (Manado and Lubuan Bajo, both in Indonesia), taking my total to 91
  • 2: the number of new countries I visited (Sweden and Finland), taking my total to 38
  • 2: the number of awesome daughters we have
  • 1: the number of new airlines I flew on (Silk Air, based in Singapore), taking my total to 35
  • 1: the person who seems to cram the most into non-work time (farming, scouts, PokemonGo, building, …): Tim Radney
  • 1: the person who is the best at snapping her fingers and ran a marathon this year: Erin Stellato
  • 1: the biggest hardware geek, master beer brewer, and ex-tank commander I know: Glenn Berry
  • 1: the number of Jonathan Kehayias in the world – thankfully :-)
  • 1: the number of indispensable assistants, without whom our lives would be a distressing quagmire – Libby we love you!
  • Finally, the one and only best person in my life: Kimberly, without whom I would be lost…

Thank you to everyone who reads our blogs, follows us on Twitter, sends us questions, watches our videos, comes to our classes, and generally makes being deeply involved in the SQL community a joy.

I sincerely wish you all a happy, healthy, and prosperous New Year!


(The whole team with all their kids after watching/helping Jonathan propose to Kristi at The Bean in Chicago in May)


(With our wonderful girls in Longyearbyen on Svalbard in August)


(At about 80 degrees north, next to the 7th largest ice cap in the world (Austfonna on the island of Nordaustlandet in the Svalbard archipelago) a couple of weeks later, at about 10pm. The ambient temperature is just above freezing, with a gentle katabatic wind coming off the ice, and I’m in shorts and a t-shirt – I don’t get cold!)


2016: the year in books

Back in 2009 I started posting a summary at the end of the year of what I read during the year (see my posts from 2009, 2010, 2011, 2012, 2013, 2014, 2015) and people have been enjoying it, so here I present the 2016 end-of-year post. I set a moderate goal of 50 books this year but I only managed 45 – the first year since 2009 that I’ve missed my goal – as I spent a bunch of time catching up with my magazine backlog. Next year I’m setting myself a goal of reading 50 books again.

For the record, I read ‘real’ books – i.e. not in electronic form – I don’t like reading off a screen. Yes, I’ve seen electronic readers – we both have iPads – and I’m not interested in ever reading electronically. I also don’t ‘speed read’ – I read quickly and make lots of time for reading.

Why do I track metrics? Because I like doing it, and being able to compare against previous years. Some people don’t understand the logic in that – each to their own :-)

I went back-and-forth over the last few days about which book to nominate as my favorite, and I just couldn’t come to a decision, so just like in most years, I give you my favorite 3 books: Code: The Hidden Language of Computer Hardware and Software by Charles Petzold, Gardens of the Moon: A Tale of the Malazan Book of the Fallen by Steven Erikson, and The Departure: The Owner: Book One by Neal Asher. All three are superb books (with the last two being the start of series) and I strongly recommend you give them a try. You can read my reviews of them in the top-10 list below.

Now the details. I enjoy putting this together as it will also serve as a record for me many years from now. I hope you get inspired to try some of these books – push yourself with new authors and very often you’ll be surprisingly pleased. Don’t forget to check out the previous year’s blog posts for more inspiration too.

As usual I leave you with a quote that describes a big part of my psychological make-up:

In omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro!

Analysis of What I Read

I read 19,344 pages, or 53.00 pages a day, and a book every 8.1 days or so. The chart below shows the number of pages (y-axis) in each book I read (x-axis).



The average book length was 429 pages, slightly longer than last year but shorter than previous years. That’s because I again read a lot of series books where each isn’t hugely long.
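For the metrics-minded, the arithmetic above is easy to reproduce (numbers taken straight from the text):

```python
# Reading stats from the post: total pages, days in the year, books read.
pages, days, books = 19_344, 365, 45

pages_per_day = round(pages / days, 2)  # pages read per day
days_per_book = round(days / books, 1)  # days per book, on average
avg_length = pages // books             # average book length in pages

print(pages_per_day, days_per_book, avg_length)  # 53.0 8.1 429
```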

The Top 10

I read a lot of excellent books this year but because I only read 45, I was able to whittle them down to a top-10, unlike previous years. If you don’t read much, at least consider looking at some of these in 2017. It’s impossible to put them into a priority order so I’ve listed them in the order I read them, along with the short Facebook review I wrote at the time.

1 #2; Code: The Hidden Language of Computer Hardware and Software; Charles Petzold; 396pp; Nonfiction; January 14; (This book is really excellent! It’s a very cleverly written introduction and exploration of how computers work at the logic level, and takes the reader from the basics of electricity and binary to the intricacies of memory busses, CPUs, and assembly language. I quickly skimmed the first hundred or so pages until I got to the part about building counters from relays and it started to refresh my memory with things I’d learned back in 1990 when I did my B. Eng (Hons) degree in computer science and electronics in Edinburgh. I read this book as a way to kick start getting back into computer design as I want to build a CPU and computer system out of TTL logic (one of my many, many ‘spare time’ goals). First though, when I get home I’m going to build some logic circuits out of relays – just for the fun of hearing all the little clicks as the relays change state :-)! I highly recommend this book for anyone who wants to know a bit more about how computers work.)

2 #12; The Information: A History, A Theory, A Flood; James Gleick; 544pp; Nonfiction; February 12; (This is an excellent history of the methods of dissemination of information (think printing press, visual telegraph, morse code), and on the creation and development of the various facets of information theory, including quantum computation and genetics. Dense, but very interesting, and highly recommended.)

3 #13; Gardens of the Moon: A Tale of the Malazan Book of the Fallen; Steven Erikson; 752pp; Fantasy Fiction; March 2; (This is the start of a 10-volume, fantasy epic. I picked up the first few to try out and have been enjoying it since I started the first one on the flights home from diving earlier this month. The story is very involved, with magic, immortals, empires, and lots of intrigue and the book throws you right in from page one. All the books are 700-1000+ pages, so a real treat to read. Highly recommended!)

4 #20; SPQR: A History of Ancient Rome; Mary Beard; 606pp; History; May 14; (This is an excellent book for history buffs and anyone interested in ancient Rome. She adroitly covers how Rome developed over its first 1,000 years, clashes between prominent Romans, the rise of the Emperors, the development of political democracy and on into a dictatorial empire, how they treated slaves and women, and much more. I found it hugely readable and not dry in the least, and it’s inspired me to read a bunch of books on Rome and the Romans that I’ve had for a while. Highly recommended!)

5 #22; World Order; Henry Kissinger; 432pp; Nonfiction; June 3; (Kissinger’s excellent disquisition is part history, part survey, and part explanation and I found it immensely interesting. He examines in great detail the Westphalian order (named after the 1648 Treaty of Westphalia that ended the devastating Thirty Years War), where there’s a balance of power between states, and the problems that arise when the balance is disrupted (e.g. both World Wars), along with sections on Europe, Islamism and the Middle East, and Asia and China. Almost 1/3 of the book is dedicated to the history of major U.S. foreign policy since WWI, and all the wars we’ve fought since then to try to maintain order and/or defend a people being wrongly subjugated. He ends with an analysis of how technologies are affecting world order, and a call to action for future policy changes to maintain it. This is the first book of Kissinger’s that I’ve read, and his powerful intellect and clarity of thought are obvious throughout; I have many of his other works and I’m looking forward to reading them too. Hugely recommended!)

6 #23; The End of Alchemy: Money;  Banking;  and the Future of the Global Economy; Mervyn King; 447pp; Nonfiction; June 7; (King was the Governor of the Bank of England from 2003 to 2013, so had a ringside seat of the financial crisis in 2008-9. Rather than being self-aggrandizing or trying to deflect blame, he dispassionately analyzes what he believes led to the crisis: an unwillingness to look beyond liquidity problems to solvency problems, coupled with the alchemy of creating supposed liquid assets (e.g. derivatives based on slices of mortgage loans) from illiquid assets (e.g. sub-prime mortgages). He also explains why the world economy is still in disequilibrium rather than rebounding and presents some interesting ideas for how to change; basically a major reform of the banking sector, including how central banks respond to monetary crises. Unfortunately, it’ll require a multi-country effort to fix the financial problems the world faces – which at present seems unlikely to happen. Along the way King explains a lot about macro- and micro-economic theory and the history of finance over the last 100 years, which in itself makes for a fascinating read. Highly recommended!)

7 #28; Desert Air; George Steinmetz; 380pp; Photography; July 11; (Steinmetz is an aerial photographer who specializes in deserts and this book is a visually stunning collection of photography of major deserts and features in them from across the world. Other books of his I’ve read are African Air and Empty Quarter: A Photographic Journey to the Heart of the Arabian Desert, and I highly recommended all of them.)

8 #31; The Silk Roads: A New History of the World; Peter Frankopan; 645pp; History; August 11; (This is a very interesting book tracing the varied history of the countries along the Silk Road, including empires, explorers, and religions that affected the various routes. A lot of what’s in the book I already knew, but having it all presented in one volume in a chronological sequence was excellent. The last 100 pages or so detailed the quite despicable British and American machinations around the countries in the Middle East for their own (mostly oil-related) gains, to the huge detriment of the native populations, which I felt quite ashamed to read about, being of both nationalities. Highly recommended for history fans!)

9 #36; The Departure: The Owner: Book One; Neal Asher; 412pp; Science Fiction; September 29; (Asher is one of my favorite sci-fi authors and most of his novels are set in his Polity universe. This is the first in a trilogy, set on Earth a hundred years or so in the future, where there’s a single brutal government, and the colony on Mars that’s just been abandoned by Earth. The protagonist wakes up inside a sealed box on a conveyor belt leading to an incinerator and has to figure out his previous life and then start working on revenge. Lots of action, cool machines and robots, futuristic technology and all very fast paced. I can’t wait to read the next two – highly recommended!)

10 #40; Trainspotting; Irvine Welsh; 348pp; Contemporary Fiction; November 21; (This is one of my all-time favorite movies and Kimberly bought me a leather-bound, signed edition of the book for my birthday in July, so I decided to read it again. It jumps straight in to the lives of a handful of degenerate heroin addicts living in Edinburgh (see the prequel book Skagboys for the back-story). It’s a fantastic book, but not for the faint-hearted at all – it’s written in colloquial Scots, littered with four-letter words, and will likely be hard going for most people reading it. However, if you can stomach it, it’s well worth reading for insight into the Edinburgh drug culture of the 1980s and 1990s.)

The Complete List

And the complete list, with links to Amazon so you can explore further. One thing to bear in mind, the dates I finished reading the book don’t mean that I started, for instance, book #2 after finishing book #1. I usually have anywhere from 10-15 books on the go at any one time so I can dip into whatever my mood is for that day. Some books I read start to finish without picking up another one and some books take me over a year. Lots of long airplane flights and boat trips help too!

  1. Path of the Assassin; Brad Thor; 503pp; Contemporary Fiction; January 2
  2. Code: The Hidden Language of Computer Hardware and Software; Charles Petzold; 396pp; Nonfiction; January 14
  3. Sarum: The Novel of England; Edward Rutherfurd; 1344pp; Historical Fiction; January 16
  4. The Orphan Master’s Son; Adam Johnson; 480pp; Contemporary Fiction; January 20
  5. Desolation Island; Patrick O’Brian; 325pp; Historical Fiction; January 21
  6. The Abominable; Dan Simmons; 663pp; Historical Fiction; January 24
  7. Out of the Flames; Lawrence & Nancy Goldstone; 368pp; History; January 27
  8. The Fortune of War; Patrick O’Brian; 329pp; Historical Fiction; January 28
  9. A Short Guide to a Long Life; David Agus; 190pp; Nonfiction; January 29
  10. The Surgeon’s Mate; Patrick O’Brian; 382pp; Historical Fiction; January 30
  11. The Ionian Mission; Patrick O’Brian; 400pp; Historical Fiction; February 1
  12. The Information: A History, A Theory, A Flood; James Gleick; 544pp; Nonfiction; February 12
  13. Gardens of the Moon: A Tale of the Malazan Book of the Fallen; Steven Erikson; 752pp; Fantasy Fiction; March 2
  14. Soviet Ghosts: The Soviet Union Abandoned: A Communist Empire in Decay; Rebecca Litchfield; 192pp; Photography; March 26
  15. A Death in Vienna; Daniel Silva; 400pp; Contemporary Fiction; March 28
  16. Major Pettigrew’s Last Stand; Helen Simonson; 384pp; Contemporary Fiction; April 12
  17. The Swerve – How the World Became Modern; Stephen Greenblatt; 368pp; History; April 13
  18. The Geography of Genius: A Search for the World’s Most Creative Places from Ancient Athens to Silicon Valley; Eric Weiner; 368pp; Nonfiction; April 26
  19. The Empty Throne; Bernard Cornwell; 296pp; Historical Fiction; May 11
  20. SPQR: A History of Ancient Rome; Mary Beard; 606pp; History; May 14
  21. Monsoon: The Indian Ocean and the Future of American Power; Robert D. Kaplan; 400pp; Nonfiction; May 21
  22. World Order; Henry Kissinger; 432pp; Nonfiction; June 3
  23. The End of Alchemy: Money; Banking; and the Future of the Global Economy; Mervyn King; 447pp; Nonfiction; June 7
  24. The Old Ways: A Journey on Foot; Robert Macfarlane; 448pp; Nonfiction; June 24
  25. Rubicon: The Last Years of the Roman Republic; Tom Holland; 464pp; History; June 27
  26. The Vikings: A History; Robert Ferguson; 464pp; History; July 10
  27. Journey Without Maps; Graham Greene; 272pp; Travel; July 11
  28. Desert Air; George Steinmetz; 380pp; Photography; July 11
  29. Travels with Charley: In Search of America; John Steinbeck; 288pp; Travel; July 13
  30. Treason’s Harbour; Patrick O’Brian; 314pp; Historical Fiction; August 6
  31. The Silk Roads: A New History of the World; Peter Frankopan; 645pp; History; August 11
  32. The Far Side of the World; Patrick O’Brian; 355pp; Historical Fiction; August 12
  33. The Reverse of the Medal; Patrick O’Brian; 269pp; Historical Fiction; August 13
  34. The Letter of Marque; Patrick O’Brian; 287pp; Historical Fiction; August 15
  35. Deadhouse Gates: A Tale of The Malazan Book of the Fallen; Steven Erikson; 959pp; Fantasy Fiction; September 27
  36. The Departure: The Owner: Book One; Neal Asher; 412pp; Science Fiction; September 29
  37. Zero Point: The Owner: Book Two; Neal Asher; 406pp; Science Fiction; October 9
  38. A Burglar’s Guide to the City; Geoff Manaugh; 304pp; Nonfiction; October 13
  39. Jupiter War: The Owner: Book Three; Neal Asher; 356pp; Science Fiction; November 3
  40. Trainspotting; Irvine Welsh; 348pp; Contemporary Fiction; November 21
  41. The Atrocity Archives; Charles Stross; 368pp; Science Fiction; December 5
  42. The Jennifer Morgue; Charles Stross; 416pp; Science Fiction; December 9
  43. The Thirteen-Gun Salute; Patrick O’Brian; 324pp; Historical Fiction; December 17
  44. The Nutmeg of Consolation; Patrick O’Brian; 384pp; Historical Fiction; December 20
  45. The Truelove; Patrick O’Brian; 267pp; Historical Fiction; December 2

New course: Installing and Configuring SQL Server 2016

Glenn’s latest Pluralsight course has been published – SQL Server: Installing and Configuring SQL Server 2016 – and is just over two hours long.

The modules are:

  • Introduction
  • Pre-installation Tasks for the Operating System
  • Pre-installation Tasks for SQL Server 2016
  • Installing SQL Server 2016
  • Post-installation Tasks for SQL Server 2016
  • Automating Common Maintenance Tasks

Check it out here.

We now have more than 146 hours of SQLskills online training available (see all our 49 courses here), all for as little as $29/month through Pluralsight (including more than 5,000 other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.


SQLskills holiday gift to you: all 2015 Insider videos

As we all wind down for the 2016 holiday season, we want to give the SQL Server community a holiday gift to say ‘thank you’ for all your support during 2016, and what better gift than more free content?!

As many of you know, I publish a bi-weekly newsletter to more than 13,500 subscribers that contains an editorial on a SQL Server topic, a demo video, and a book review of my most recently completed book. We’re making all the 2015 demo videos available so everyone can watch them – 25 videos in all, mostly in WMV format. I did the same thing the last few years for the 2014 videos, 2013 videos, 2012 videos, and 2011 videos.

Here are the details:

  • January 2015: Measuring sequential I/O performance (video | demo code)
  • January 2015: Incremental statistics (video | demo code)
  • February 2015: Delayed durability (video | demo code)
  • February 2015: Availability Groups using named instances (video | no demo code)
  • March 2015: Using sp_rev_login (video | demo code)
  • March 2015: Adding tempdb data files (video | demo code)
  • March 2015: Using DML triggers with CDC (from Pluralsight) (video | no demo code)
  • April 2015: Compensation log records (from Pluralsight) (video | demo code)
  • April 2015: The effects of AUTO_CLOSE (video | demo code)
  • May 2015: 2014 backup encryption (video | demo code)
  • May 2015: Using CrystalDiskMark for I/O benchmarking (video | no demo code)
  • June 2015: Backing up to a URL (video | no demo code)
  • June 2015: Using data compression (video | demo code)
  • July 2015: More on incremental statistics (video | demo code)
  • August 2015: Tracking page splits (from Pluralsight) (video | demo code)
  • August 2015: 2016 stretch database (video | demo code)
  • August 2015: Using CPU-Z for benchmarking (video | no demo code)
  • September 2015: Using Extended Events histogram targets (video | demo code)
  • September 2015: Converting scalar UDFs to inline TVFs (video | demo code)
  • October 2015: Low-priority lock waits (from Pluralsight) (video | demo code)
  • October 2015: Impact of the cluster key on index structure size (from Pluralsight) (video | demo code)
  • October 2015: Changes to @@VERSION (video | demo code)
  • November 2015: Introducing Query Store (video | demo code)
  • November 2015: Execution plan patterns (from PASS) (video | demo code)
  • December 2015: Predicate pushdown diagnostics (video | demo code)

If you want to see the 2016 videos before next December, get all the newsletter back-issues, and follow along as the newsletters come out, just sign up at https://www.SQLskills.com/Insider. No strings attached, no marketing or advertising, just free content.

Happy Holidays and enjoy the videos!

Getting a history of database snapshot creation

Earlier today someone asked on the #sqlhelp Twitter alias if there is a history of database snapshot creation anywhere, apart from scouring the error logs.

There isn’t, unfortunately, but you can dig around the transaction log of the master database to find some information.

When a database snapshot is created, a bunch of entries are made in the system tables in master and they are all logged, under a transaction named DBMgr::CreateSnapshotDatabase. So that’s where we can begin looking.

Here’s a simple example of a database snapshot:

USE [master];
GO

IF DATABASEPROPERTYEX (N'Company_Snapshot', N'Version') > 0
    DROP DATABASE [Company_Snapshot];
IF DATABASEPROPERTYEX (N'Company', N'Version') > 0
    DROP DATABASE [Company];
GO

-- Create a database
CREATE DATABASE [Company] ON PRIMARY (
    NAME = N'Company',
    FILENAME = N'C:\SQLskills\CompanyData.mdf');
GO

-- Create the snapshot
CREATE DATABASE [Company_Snapshot]
ON (NAME = N'Company', FILENAME = N'C:\SQLskills\CompanyData.mdfss')
AS SNAPSHOT OF [Company];
GO

And I can find the transaction, plus who did it and when, using the following code:

USE [master];
GO

SELECT
    [Transaction ID],
    SUSER_SNAME ([Transaction SID]) AS [User],
    [Begin Time]
FROM fn_dblog (NULL, NULL)
WHERE [Operation] = N'LOP_BEGIN_XACT'
    AND [Transaction Name] = N'DBMgr::CreateSnapshotDatabase';
Transaction ID User             Begin Time
-------------- ---------------- ------------------------
0000:00099511  APPLECROSS\Paul  2016/10/20 13:07:53:143

Now to get some useful information, I can crack open one of the system table inserts, specifically the insert into one of the nonclustered indexes of the sys.sysdbreg table:

SELECT
    [RowLog Contents 0]
FROM fn_dblog (NULL, NULL)
WHERE [Transaction ID] = N'0000:00099511'
    AND [Operation] = N'LOP_INSERT_ROWS'
    AND [AllocUnitName] = N'sys.sysdbreg.nc1';
RowLog Contents 0

Bytes 2 through 5 (considering the first byte as byte 1) are the byte-reversed database ID of the snapshot database, and bytes 10 through the end of the data are the sysname name of the database. Similarly, grabbing the insert log record for the nonclustered index of the sys.syssingleobjrefs table allows us to get the source database ID.
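The byte reversal is just little-endian encoding, so the decoding is easy to sketch outside T-SQL. Here's a minimal Python illustration (the payload bytes below are fabricated for the example, not a real log record):

```python
import struct

def parse_sysdbreg_nc1_row(row_bytes):
    """Decode the snapshot database ID and name from the
    [RowLog Contents 0] payload of a sys.sysdbreg.nc1 insert record.
    Bytes 2 through 5 (1-based) are the database ID, byte-reversed,
    i.e. a little-endian 32-bit integer; bytes 10 onward are the
    sysname name, which SQL Server stores as UTF-16LE."""
    db_id = struct.unpack_from('<i', row_bytes, 1)[0]  # 0-based offset 1 = byte 2
    name = row_bytes[9:].decode('utf-16-le')           # bytes 10 onward
    return db_id, name

# Fabricated payload for illustration only (not a real log record):
# one header byte, the DB ID, four filler bytes, then the name.
payload = (b'\x30' + struct.pack('<i', 35) + b'\x00' * 4
           + 'Company_Snapshot'.encode('utf-16-le'))
print(parse_sysdbreg_nc1_row(payload))  # (35, 'Company_Snapshot')
```

The T-SQL version below does the same thing by concatenating the SUBSTRINGs in reverse order before converting to INT, which turns the little-endian bytes back into the big-endian order CONVERT expects.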

Here’s the finished code:

SELECT
    [User],
    [Begin Time],
    [Snapshot DB ID],
    [Snapshot DB Name],
    [Source DB ID]
FROM (
    SELECT
        SUSER_SNAME ([Transaction SID]) AS [User],
        [Begin Time]
    FROM fn_dblog (NULL, NULL)
    WHERE [Transaction ID] = N'0000:00099511'
        AND [Operation] = N'LOP_BEGIN_XACT'
) AS [who],
(
    SELECT
        CONVERT (INT,
            SUBSTRING ([RowLog Contents 0], 5, 1) +
            SUBSTRING ([RowLog Contents 0], 4, 1) +
            SUBSTRING ([RowLog Contents 0], 3, 1) +
            SUBSTRING ([RowLog Contents 0], 2, 1)) AS [Snapshot DB ID],
        CONVERT (SYSNAME, SUBSTRING ([RowLog Contents 0], 10, 256)) AS [Snapshot DB Name]
    FROM fn_dblog (NULL, NULL)
    WHERE [Transaction ID] = N'0000:00099511'
        AND [Operation] = N'LOP_INSERT_ROWS'
        AND [AllocUnitName] = N'sys.sysdbreg.nc1'
) AS [snap],
(
    SELECT
        CONVERT (INT,
            SUBSTRING ([RowLog Contents 0], 5, 1) +
            SUBSTRING ([RowLog Contents 0], 4, 1) +
            SUBSTRING ([RowLog Contents 0], 3, 1) +
            SUBSTRING ([RowLog Contents 0], 2, 1)) AS [Source DB ID]
    FROM fn_dblog (NULL, NULL)
    WHERE [Transaction ID] = N'0000:00099511'
        AND [Operation] = N'LOP_INSERT_ROWS'
        AND [AllocUnitName] = N'sys.syssingleobjrefs.nc1'
) AS [src];
User             Begin Time               Snapshot DB ID Snapshot DB Name  Source DB ID
---------------- ------------------------ -------------- ----------------- ------------
APPLECROSS\Paul  2016/10/20 13:07:53:143  35             Company_Snapshot  22

I’ll leave it as an exercise for the reader to wrap a cursor around the code to operate on all such transactions, and you can also look in the master log backups using the fn_dump_dblog function (see here for some examples).


Calling all user group leaders! We want to present for you in 2017!

By the end of December, we at SQLskills will have presented remotely (and a few in-person) to 94 user groups and PASS virtual chapters around the world in 2016!

We’d love to present remotely for your user group in 2017, anywhere in the world. It’s not feasible for us to travel to user groups or SQL Saturdays unless we’re already in that particular city, but remote presentations are easy to do and are becoming more and more popular. We haven’t had any bandwidth problems doing remote presentations in 2016 to groups as far away as South Africa, Australia, and New Zealand, plus Norway, Bulgaria, the UK, India, Ukraine, Poland, Ireland, and Canada. This way we can spread the community love to user groups everywhere that we wouldn’t usually get to in person.

Note: we have our own Webex accounts which we generally use, or we can use your GoToMeeting or Webex, but you must use computer audio – we won’t call in by phone as the sound quality is too poor. We also will not use Skype/Lync as we’ve had too many problems with it around user group laptops and sound.

So, calling all user group leaders! If you’d like one of us (me, Kimberly, Jon, Erin, Glenn, Tim) to present remotely for you in 2017 (or maybe even multiple times), send me an email and be sure to include:

  • Details of which user group you represent (and if sending from a shared user group account, your name)
  • The usual day of the month, meeting time, and timezone of the user group
  • Which months you have available, starting in January 2017 (a list of available dates would be ideal)

And I’ll let you know who’s available with what topics so you can pick.

What’s the catch? There is no catch. We’re just continuing our community involvement next year and we all love presenting :-)

And don’t think that because you’re only reading this now (maybe a few weeks or months after the posting date) that we can’t fit you in – send me an email and we’ll see what we can do.

We’re really looking forward to engaging with you all!


PS By all means pass the word on to any SharePoint and .Net user group leaders you know too.

Investigating the proportional fill algorithm

This is something that came up recently on the Microsoft Certified Master DL, and is something I discuss in our IEPTO1 class because of the performance implications of it, so I thought it would make an interesting post.

Allocation Algorithms

The SQL Server Storage Engine (SE) uses two algorithms when allocating extents from files in a filegroup: round robin and proportional fill.

Round robin means that the SE will try to allocate from each file in a filegroup in succession. For instance, for a database with two files in the primary filegroup (with file IDs 1 and 3, as 2 is always the log file), the SE will try to allocate from file 1 then file 3 then file 1 then file 3, and so on.

The twist in this mechanism is that the SE also has to consider how much free space is in each of the files in the filegroup, and allocate more extents from the file(s) with more free space. In other words, the SE will allocate proportionally more frequently from files in a filegroup with more free space. This twist is called proportional fill.

Proportional fill works by assigning a number to each file in the filegroup, called a ‘skip target’. You can think of this as an inverse weighting, where the higher the value is above 1, the more times that file will be skipped when going round the round robin loop. During the round robin, the skip target for a file is examined, and if it’s equal to 1, an allocation takes place. If the skip target is higher than 1, it’s decremented by 1 (to a minimum value of 1), no allocation takes place, and consideration moves to the next file in the filegroup.
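That walk can be modeled in a few lines of Python (a simplified illustration of the description above, not the actual Storage Engine code; the real system also recalculates the targets periodically, as described below):

```python
def round_robin_allocate(skip_targets, n_allocs):
    """Simulate n_allocs extent allocations over a filegroup, given a
    pre-computed skip target per file ID. A file allocates when its
    working counter is 1; otherwise the counter is decremented (to a
    minimum of 1) and the round robin moves to the next file."""
    counters = dict(skip_targets)          # working copies that get decremented
    allocs = {f: 0 for f in skip_targets}
    files = sorted(skip_targets)
    i = done = 0
    while done < n_allocs:
        f = files[i % len(files)]
        i += 1
        if counters[f] == 1:
            allocs[f] += 1                 # this file's turn to allocate
            done += 1
        else:
            counters[f] -= 1               # skipped, but closer to a turn
    return allocs

# With skip targets of 90, 50, and 1, the early allocations all come
# from the file with the most free space (the one with skip target 1):
print(round_robin_allocate({1: 90, 3: 50, 4: 1}, 10))  # {1: 0, 3: 0, 4: 10}
```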

(Note that there’s a further twist to this: when the -E startup parameter is used, each file with a skip target of 1 will be used for 64 consecutive extent allocations before the round robin loop progresses. This is documented in Books Online here and is useful for increasing the contiguity of index leaf levels for very large scans – think data warehouses.)

The skip target for each file is the integer result of (number of free extents in file with most free space) / (number of free extents in this file). The files in the filegroup with the least amount of free space will therefore have the highest skip targets, and there has to be at least one file in the filegroup with a skip target of 1, guaranteeing that each time round the round robin loop, at least one extent allocation takes place.
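Here's that calculation as a small Python sketch (an illustration of the formula, not the actual Storage Engine code; treating a full file's zero free extents as 1 is an assumption, chosen because it reproduces the error-log figures shown later in the post):

```python
def skip_targets(free_extents):
    """Skip target per file: (free extents in the most-free file)
    integer-divided by this file's free extents, floored at 1.
    Assumption: a full file (0 free extents) is treated as having 1."""
    most_free = max(free_extents.values())
    return {file_id: max(most_free // max(free, 1), 1)
            for file_id, free in free_extents.items()}

# Free extent counts from the three-file trace-flag output below:
# file 1 has 44, file 3 has 79, file 4 has 3995.
print(skip_targets({1: 44, 3: 79, 4: 3995}))  # {1: 90, 3: 50, 4: 1}
```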

The skip targets are recalculated whenever a file is added to or removed from a filegroup, or whenever at least 8192 extent allocations have taken place in the filegroup.

Investigating the Skip Targets

There’s an undocumented trace flag, 1165, that lets us see the skip targets whenever they’re recalculated; I believe it was added in SQL Server 2008. Trace flag 3605 also needs to be enabled to allow the debugging info to be output to the error log.

Let’s try it out!

First I’ll turn on the trace flags, cycle the error log, create a small database, and look in the error log for pertinent information:

DBCC TRACEON (1165, 3605);
GO

EXEC sp_cycle_errorlog;
GO

USE [master];
GO

IF DATABASEPROPERTYEX (N'Company', N'Version') > 0
    DROP DATABASE [Company];

CREATE DATABASE [Company] ON PRIMARY (
    NAME = N'Company_data',
    FILENAME = N'D:\SQLskills\Company_data.mdf',
    SIZE = 5MB)
LOG ON (
    NAME = N'Company_log',
    FILENAME = N'D:\SQLskills\Company_log.ldf');
GO

EXEC xp_readerrorlog;
2016-10-04 11:38:33.830 spid56       Proportional Fill Recalculation Starting for DB Company with m_cAllocs -856331000.
2016-10-04 11:38:33.830 spid56       Proportional Fill Recalculation Completed for DB Company new m_cAllocs 8192, most free file is file 1.
2016-10-04 11:38:33.830 spid56       	File [Company_data] (1) has 44 free extents and skip target of 1. 

m_cAllocs is the name of a class member of the C++ class inside the SE that implements filegroup management, and it’s the threshold at which the skip targets will be recalculated. In the first line of output it shows a garbage number, as the database has just been created and the counter hasn’t been initialized yet.

Now I’ll add another file with the same size:

ALTER DATABASE [Company] ADD FILE (
    NAME = N'SecondFile',
    FILENAME = N'D:\SQLskills\SecondFile.ndf',
    SIZE = 5MB);
GO

EXEC xp_readerrorlog;
2016-10-04 11:41:27.880 spid56       Proportional Fill Recalculation Starting for DB Company with m_cAllocs 8192.
2016-10-04 11:41:27.880 spid56       Proportional Fill Recalculation Completed for DB Company new m_cAllocs 8192, most free file is file 3.
2016-10-04 11:41:27.880 spid56       	File [Company_data] (1) has 44 free extents and skip target of 1. 
2016-10-04 11:41:27.880 spid56       	File [SecondFile] (3) has 79 free extents and skip target of 1. 

Note that even though the two files have different numbers of extents, the integer result of 79 / 44 is 1, so the skip targets are both set to 1.

Now I’ll add a much larger file:

ALTER DATABASE [Company] ADD FILE (
    NAME = N'ThirdFile',
    FILENAME = N'D:\SQLskills\ThirdFile.ndf',
    SIZE = 250MB);
GO

EXEC xp_readerrorlog;
2016-10-04 11:44:20.310 spid56       Proportional Fill Recalculation Starting for DB Company with m_cAllocs 8192.
2016-10-04 11:44:20.310 spid56       Proportional Fill Recalculation Completed for DB Company new m_cAllocs 8192, most free file is file 4.
2016-10-04 11:44:20.310 spid56       	File [Company_data] (1) has 44 free extents and skip target of 90. 
2016-10-04 11:44:20.310 spid56       	File [ThirdFile] (4) has 3995 free extents and skip target of 1. 
2016-10-04 11:44:20.310 spid56       	File [SecondFile] (3) has 79 free extents and skip target of 50. 

The file with the most free space is file ID 4, so the skip targets of the other files are set to (file 4’s free extents) / (free extents in the file). For example, the skip target for file 1 becomes the integer result of 3995 / 44 = 90.

Now I’ll create a table that can have only one row per page, and force more than 8192 extent allocations to take place by inserting more than 8192 x 8 = 65,536 rows, forcing that many pages (and so more than 8192 extents) to be allocated. This will also mean the files will have autogrown and will have roughly equal numbers of free extents.

USE [Company];
GO

CREATE TABLE [BigRows] (
    [c1] INT IDENTITY,
    [c2] CHAR (8000) DEFAULT 'a');
GO

INSERT INTO [BigRows] DEFAULT VALUES;
GO 70000

EXEC xp_readerrorlog;
2016-10-04 11:55:28.840 spid56       Proportional Fill Recalculation Starting for DB Company with m_cAllocs 8192.
2016-10-04 11:55:28.840 spid56       Proportional Fill Recalculation Completed for DB Company new m_cAllocs 8192, most free file is file 3.
2016-10-04 11:55:28.840 spid56       	File [Company_data] (1) has 0 free extents and skip target of 74. 
2016-10-04 11:55:28.840 spid56       	File [ThirdFile] (4) has 0 free extents and skip target of 74. 
2016-10-04 11:55:28.840 spid56       	File [SecondFile] (3) has 74 free extents and skip target of 1. 

We can see that all the files have filled up and autogrown, and, as it happens, file ID 3 is now the one with the most free space.

Spinlock Contention

The skip targets for the files in a filegroup are protected by the FGCB_PRP_FILL spinlock, so this spinlock has to be acquired for each extent allocation, to determine which file to allocate from next. There’s an exception to this when all the files in a filegroup have roughly the same amount of free space (so they all have a skip target of 1). In that case, there’s no need to acquire the spinlock to check the skip targets.

This means that if you create a filegroup that has file sizes that are different, the odds are that they will auto grow at different times and the skip targets will not all be 1, meaning the spinlock has to be acquired for each extent allocation. Not a huge deal, but it’s still extra CPU cycles and the possibility of spinlock contention occurring (for a database with a lot of insert activity) that you could avoid by making all the files in the filegroup the same size initially.

If you want, you can watch the FGCB_PRP_FILL spinlock (and others) using the code from this blog post.

Performance Implications

So when do you need to care about proportional fill?

One example is when trying to alleviate tempdb allocation bitmap contention. If you have a single tempdb data file, and huge PAGELATCH_UP contention on the first PFS page in that file (from a workload with many concurrent connections creating and dropping small temp tables), you might decide to add just one more data file to tempdb (which is not the correct solution). If that existing file is very full, and the new file isn’t, the skip target for the old file will be large and the skip target for the new file will be 1. This means that subsequent allocations in tempdb will be from the new file, moving all the PFS contention to the new file and not providing any contention relief at all! I discuss this case in my post on correctly adding data files to tempdb.
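To see why, plug some illustrative numbers into the skip-target formula from earlier (a Python sketch with made-up extent counts, not real tempdb sizes):

```python
def skip_targets(free_extents):
    """(free extents in the most-free file) // (this file's free
    extents), floored at 1; full files are treated as having 1."""
    most_free = max(free_extents.values())
    return {f: max(most_free // max(free, 1), 1)
            for f, free in free_extents.items()}

# A nearly full existing tempdb file (10 free extents) next to a newly
# added empty file (~65,000 free extents, roughly 4 GB): the old file
# is skipped 6,500 times between allocations, so virtually every new
# allocation - and all the PFS contention - lands on the new file.
print(skip_targets({1: 10, 3: 65000}))  # {1: 6500, 3: 1}
```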

The more common example is where a filegroup is full and someone adds another file to create space. In a similar way to the example above, subsequent allocations will come from the new file, meaning that when it’s time for a checkpoint operation, all the write activity will be on the new file (and its location on the I/O subsystem) rather than spread over multiple files (and multiple locations in the I/O subsystem). Depending on the characteristics of the I/O subsystem, this may or may not cause a degradation in performance.


Summary

Proportional fill is an algorithm that’s worth knowing about, so you don’t inadvertently cause a performance issue, and so that you can recognize a performance issue caused by a misconfiguration of file sizes in a filegroup. I don’t expect you to be using trace flag 1165, but if you’re interested, it’s a way to dig into the internals of the allocation system.