2015 review: the year by the numbers

The last post of the year! It’s been a really excellent year all round, and it’s time for my traditional post counting down some of the numbers that have been my life this year.

  • 109318: the number of miles I flew on United
  • 33313: my current tweet total (up 1345 from 2014)
  • 12941: the number of subscribers to our Insider mailing list (up 1320 from 2014)
  • 11823: the number of emails I sent (down 444 from 2014)
  • 10843: the number of people who follow my Twitter ramblings (up 1448 from 2014)
  • 1603: the number of books (real ones) that I own (up 129 from 2014)
  • 868: the number of books I own but haven’t read yet (up 56 from 2014)
  • 148: the number of nights away from home (nearly all with Kimberly, so not *too* bad)
  • 131: the total number of hours of online training we have available on Pluralsight
  • 126: the number of dives I did this year in the Bahamas, Yap, Palau, and the Philippines, taking my total to 526
  • 115: the number of feet down on my deepest dive this year (going through swim-throughs with Jonathan in the Bahamas in January)
  • 91: the number of minutes of my longest dive this year
  • 88: the number of books I read (see this post)
  • 70: the number of days in Immersion Events and conferences
  • 42: the number of flights this year
  • 42: the number of Pluralsight courses we have available
  • 42: the answer to the question of life, the universe, and everything!
  • 40.55: the percentage of time we were away from home (which is why we call it our vacation home!)
  • 39: the number of SQLskills blog posts, including this one
  • 19: the number of different places we slept apart from our house and on planes
  • 18: the number of airports I flew through this year
  • 15: the number of new bird species I saw, taking my total to 499
  • 12: the number of monthly magazines I subscribe to
  • 8: the number of years I’ve been married to Kimberly
  • 8: the number of countries we visited this year
  • 7: the number of SQLskills full-time employees, all of whom are fabulous and indispensable
  • 7: the number of new airports I flew through, taking my total to 89
  • 4: the number of new countries I visited (Bahamas, Federated States of Micronesia, Palau, Philippines), taking my total to 36
  • 2: the number of new airlines I flew on, taking my total to 34
  • 2: the number of awesome daughters we have
  • 1: the number of new U.S. states I visited, taking my total to 23, and my first new one since 2011
  • 1: the number of new SQLskills team members, and accomplished breeder of tilapias: Tim Radney
  • 1: the person who is the best at snapping her fingers (especially when making fun of me – snap snap snap!): Erin Stellato
  • 1: the biggest hardware geek and ex-tank commander I know: Glenn Berry
  • 1: the number of Jonathan Kehayias in the world – thankfully :-)
  • 1: the number of indispensable assistants, without whom our lives would be a distressing quagmire – Libby we love you!
  • Finally, the one and only best person in my life: Kimberly, without whom I would be lost…

Thank you to everyone who reads our blogs, follows us on Twitter, sends us questions, watches our videos, comes to our classes, and generally makes being deeply involved in the SQL community a joy.

I sincerely wish you all a happy, healthy, and prosperous New Year!

Cheers!

(At Kanangra Walls in February a few hundred kilometers from Sydney, with Erin and Jon before teaching IEPTO2)


(On board the Palau Aggressor liveaboard dive boat in July, our eldest behind me)


2015: the year in books

Back in 2009 I started posting a summary at the end of the year of what I read during the year (see my posts from 2009, 2010, 2011, 2012, 2013, 2014) and people have been enjoying it, so here I present the 2015 end-of-year post. I set a moderate goal of 50 books this year and I managed 88! I thought about pushing for 100 like I did in 2009 but I didn’t read enough in October and November to be able to do it. Just like last year, I wanted to get through some of my larger non-fiction books but ended up not reading as many of them as I thought I would (reading more, but shorter, books instead). Next year I’m setting myself a goal of 50 books again.

For the record, I read ‘real’ books – i.e. not in electronic form – I don’t like reading off a screen. Yes, I’ve seen electronic readers – we both have iPads – and I’m not interested in ever reading electronically. I also don’t ‘speed read’ – I read quickly and make lots of time for reading.

Why do I track metrics? Because I like doing it, and being able to compare against previous years. Some people don’t understand the logic in that – each to their own :-)

I vacillated for the last few days about which book to crown as my favorite, and I just couldn’t come to a decision, so just like in 2012, I give you my favorite 3 books: Seveneves by Neal Stephenson, All The Light We Cannot See by Anthony Doerr, and The Bone Clocks by David Mitchell. All three are just superb books and I strongly recommend you give them a try. You can read my review of them in the top-10 (well, 14) list below.

Now the details. I enjoy putting this together as it will also serve as a record for me many years from now. I hope you get inspired to try some of these books – push yourself with new authors and very often you’ll be surprisingly pleased. Don’t forget to check out the previous year’s blog posts for more inspiration too.

Once again I leave you with a quote that describes a big part of my psychological make-up:

In omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro!

Analysis of What I Read

I read 37353 pages, or 102.34 pages a day, and a book every 4.1 days or so. The chart below shows the number of pages (y-axis) in each book I read (x-axis).

(Chart: number of pages in each book read in 2015)

(Chart: genre breakdown of books read in 2015)

The average book length was 423 pages, more than 100 pages shorter than last year. That’s because I read a lot of series books where each isn’t hugely long.

The Top-10 (Well, 14)

I read a lot of truly *superb* books this year, and I just couldn’t whittle it down to a top-10, so here’s my top-14 (well, really more as some of them are the start of series). If you don’t read much, at least consider looking at some of these in 2016. It’s impossible to put them into a priority order so I’ve listed them in the order I read them, along with the short Facebook review I wrote at the time.

1 #2; All The Light We Cannot See; Anthony Doerr; 531pp; Historical Fiction; January 10; (Fabulous book about a blind French girl and an orphaned German boy who both experience WWII in their teenage years in vastly different ways, and come together briefly at the end of it. Wonderfully told, with richly evocative writing – I could visualize everything that was happening. Describes some of the horrors faced by those living through and perpetrating the occupation of France. Heading to Amazon to investigate his earlier works. Very strongly recommended.)

2 #10; Mr. Midshipman Hornblower (and the rest of the series); C.S. Forester; 320pp; Historical Fiction; February 7; (I’m rereading the Hornblower Saga this year after having last (and first) read them in 2000. An excellent start to the series, this book introduces the young, inexperienced Hornblower and sees him transform into an honorable, competent Lieutenant. This book was also the inspiration for the first 4 episodes of the terribly good A&E television series starring Ioan Gruffudd. Looking forward to getting into the second one, and maybe I’ll shoot for 100 books again this year?)

3 #12; Ready Player One; Ernest Cline; 384pp; Science Fiction; February 12; (Really good novel about players competing to ‘win’ a world-encompassing, immersive VR game after the founder dies and leaves a giant fortune to the winner. Quite similar in scope to Snow Crash, but obviously a different story. Quite a page turner, recommended.)

4 #13; The Storied Life of A.J. Fikry; Gabrielle Zevin; 288pp; Contemporary Fiction; February 13; (Started reading this yesterday morning and it became a page turner for me. It’s basically a great chick flick (which I love, but not usually in book form), about a book store and its owner and his life. Lots of little twists in the gentle story and a nice read. Now I’m taking the girls to Elliot Bay Bookstore in Seattle to buy more books. Chain book stores just don’t cut it unfortunately. Recommended!)

5 #40; The Girl Who Played With Fire; Stieg Larsson; 630pp; Contemporary Fiction; May 4; (I read the first book (The Girl With The Dragon Tattoo) back in 2011 and loved the movie last year (the new one, not the older Swedish one). This book’s even better than the first one I think – it turned into a real page turner for me over the last couple of hundred pages. Again it’s hard to talk about the plot without giving things away, but it’s a great thriller and strongly recommended.)

6 #41; Gone Girl; Gillian Flynn; 432pp; Contemporary Fiction; May 8; (Excellent page turner with some great twists. Highly recommended and I can’t wait to see the movie!)

7 #44; Seveneves; Neal Stephenson; 869pp; Science Fiction; June 14; (Really excellent, and long, novel about the destruction of the surface of the Earth (from the break up of the moon and subsequent bombardment with trillions of meteorites) and the human race’s survival in space (over a period of 5,000 years until the Earth’s surface cools down again) and re-colonization of the Earth. Very believable with no sci-fi that requires suspension of disbelief. Hugely recommended and I hope there’s a sequel.)

8 #48; Nexus (and the rest of the series); Ramez Naam; 528pp; Science Fiction; June 30; (Excellent book! Start of a trilogy (I have the other two with me) about a mind-altering drug that expands consciousness and allows minds to talk to each other. The protagonists have extended the concept to run a Linux-like OS in their heads, with all kinds of interesting apps. And of course the US govt. is against it so all kinds of clandestine ops result, with lots of mayhem. A page-turner – highly recommended!)

9 #50; Master and Commander (and the rest of the series); Patrick O’Brian; 403pp; Historical Fiction; July 8; (First of the fantastic Aubrey-Maturin novels by Patrick O’Brian. I listened to all 20 of them in 2000-2002 while driving back-and-forth to work at Microsoft. This book introduces the principals, and deals with Jack Aubrey’s eventful captaincy of the sloop Sophie in the Mediterranean. Highly recommended, the entire series.)

10 #56; Avogadro Corp (and the rest of the series); William Hertling; 240pp; Science Fiction; July 23; (Cool start to the Singularity Series about runaway A.I. technology. In this book Avogadro gives its email program the capability to rewrite and/or send emails for maximum chance of success, based on who the email is being sent to. And then someone adds another directive to maximize the chances of the survival of the project, and the story takes off from there. Clever concept and a quick read. Looking forward to the rest of them. Recommended.)

11 #67; The Bone Clocks; David Mitchell; 624pp; Contemporary Fiction; August 25; (What an excellent book! A very clever story, woven through long chapters/novellas, each set in a different time, introducing and cleverly drawing together the principal characters. The character development is brilliant and I couldn’t put the book down – enormously entertaining and so far the best book I’ve read this year. Highly recommended!)

12 #70; Outlander; Diana Gabaldon; 640pp; Historical Fiction; September 6; (Several people have recommended this to me over the last year, given my Scottish roots, and I finally took the plunge and bought the first four books in the series. I’m glad I did! It’s a really good story about a woman who is transported back 200 years to just before the 1745 rebellion under Bonnie Prince Charlie and has to suddenly find her way in that time. It has plenty of colorful characters and action and I’m really looking forward to continuing with the next books. And of course there’s the T.V. series (which I haven’t watched yet but I’ve heard is really good). Highly recommended!)

13 #72; In Xanadu: A Quest; William Dalrymple; 320pp; Travel; September 17; (Excellent travelogue following Marco Polo’s journey along the Silk Road to Xanadu. They travel through Israel, Syria, Turkey, Iran, Pakistan, and China in the late ’80s, with all kinds of interesting encounters along the way. Highly recommended – love Dalrymple’s writing style!)

14 #76; The Golem and the Jinni; Helene Wecker; 512pp; Historical Fiction; September 30; (Excellent debut novel set in early 1900s New York, following the story of a golem (a creature made from clay and brought to life with Kabbalistic magic) and a jinni (a natural, elemental creature made of fire) that was trapped in a copper flask by a wizard a thousand years ago. It covers their problems integrating into the populace of New York, their eventual meeting, and problems when their true nature starts to be discovered. Very well written and highly engaging – highly recommended!)

The Complete List

And the complete list, with links to Amazon so you can explore further. One thing to bear in mind: the dates I finished each book don’t mean that I started, for instance, book #2 after finishing book #1. I usually have anywhere from 10-15 books on the go at any one time so I can dip into whichever suits my mood that day. Some books I read start to finish without picking up another one, and some take me over a year. Lots of long airplane flights help too!

  1. Mission Mongolia; David Treanor; 351pp; Travel; January 5
  2. All The Light We Cannot See; Anthony Doerr; 531pp; Historical Fiction; January 10
  3. The Pagan Lord; Bernard Cornwell; 300pp; Historical Fiction; January 14
  4. A Man on the Moon: The Voyages of the Apollo Astronauts; Andrew Chaikin; 720pp; History; January 17
  5. Design for Survival; General Thomas Power; 255pp; History; January 19
  6. Turing’s Cathedral: The Origins of the Digital Universe; George Dyson; 464pp; History; January 25
  7. The Soul of a New Machine; Tracy Kidder; 295pp; History; February 1
  8. The Book of Air and Shadows; Michael Gruber; 280pp; Contemporary Fiction; February 3
  9. State of the Art; Stan Augarten; 108pp; Nonfiction; February 6
  10. Mr. Midshipman Hornblower; C.S. Forester; 320pp; Historical Fiction; February 7
  11. African Air; George Steinmetz; 216pp; Photography; February 11
  12. Ready Player One; Ernest Cline; 384pp; Science Fiction; February 12
  13. The Storied Life of A.J. Fikry; Gabrielle Zevin; 288pp; Contemporary Fiction; February 13
  14. Half Way Home; Hugh Howey; 359pp; Science Fiction; February 14
  15. Lieutenant Hornblower; C.S. Forester; 320pp; Historical Fiction; February 16
  16. The Tipping Point: How Little Things Can Make a Big Difference; Malcolm Gladwell; 304pp; Nonfiction; February 17
  17. Daemon; Daniel Suarez; 640pp; Science Fiction; February 18
  18. See No Evil: The True Story of a Ground Soldier in the CIA’s War on Terrorism; Robert Baer; 320pp; Nonfiction; February 28
  19. Inferno; Dan Brown; 620pp; Contemporary Fiction; March 6
  20. Freedom; Daniel Suarez; 496pp; Science Fiction; March 8
  21. The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine; Charles Petzold; 384pp; Nonfiction; March 14
  22. Influx; Daniel Suarez; 528pp; Science Fiction; March 15
  23. Diamond Dogs Turquoise Days; Alastair Reynolds; 304pp; Science Fiction; March 19
  24. Inferno: The Longfellow Translation; Dante; 200pp; Contemporary Fiction; March 19
  25. Wool; Hugh Howey; 528pp; Science Fiction; March 20
  26. Prador Moon; Neal Asher; 256pp; Science Fiction; March 21
  27. Halting State; Charles Stross; 336pp; Science Fiction; March 29
  28. Rule 34; Charles Stross; 352pp; Science Fiction; April 3
  29. Historical Atlas of the Pacific Northwest; Derek Hayes; 208pp; History; April 4
  30. Hornblower and the Hotspur; C.S. Forester; 400pp; Historical Fiction; April 9
  31. Hornblower During the Crisis; C.S. Forester; 176pp; Historical Fiction; April 11
  32. Hornblower and the Atropos; C.S. Forester; 342pp; Historical Fiction; April 16
  33. Maps of North America; Ashley & Miles Baynton-Williams; 189pp; History; April 18
  34. Beat To Quarters; C.S. Forester; 273pp; Historical Fiction; April 19
  35. Ship of the Line; C.S. Forester; 304pp; Historical Fiction; April 24
  36. The New Health Rules; Frank Lipman & Danielle Claro; 224pp; Nonfiction; April 24
  37. Flying Colours; C.S. Forester; 256pp; Historical Fiction; April 25
  38. Commodore Hornblower; C.S. Forester; 343pp; Historical Fiction; April 26
  39. Lord Hornblower; C.S. Forester; 336pp; Historical Fiction; May 2
  40. The Girl Who Played With Fire; Stieg Larsson; 630pp; Contemporary Fiction; May 4
  41. Gone Girl; Gillian Flynn; 432pp; Contemporary Fiction; May 8
  42. A Place Beyond Courage; Elizabeth Chadwick; 504pp; Historical Fiction; May 14
  43. Admiral Hornblower in the West Indies; C.S. Forester; 336pp; Historical Fiction; May 16
  44. Seveneves; Neal Stephenson; 869pp; Science Fiction; June 14
  45. Kill Decision; Daniel Suarez; 513pp; Science Fiction; June 25
  46. Cibola Burn; James S. A. Corey; 610pp; Science Fiction; June 27
  47. Infinite Worlds: The People and Places of Space Exploration; Michael Soluri; 352pp; Photography; June 28
  48. Nexus; Ramez Naam; 528pp; Science Fiction; June 30
  49. Into The Black: Odyssey One; Evan Currie; 580pp; Science Fiction; July 5
  50. Master and Commander; Patrick O’Brian; 403pp; Historical Fiction; July 8
  51. Crux; Ramez Naam; 577pp; Science Fiction; July 11
  52. The Heart of Matter: Odyssey One; Evan Currie; 627pp; Science Fiction; July 14
  53. Homeworld: Odyssey One; Evan Currie; 500pp; Science Fiction; July 16
  54. A Constellation of Vital Phenomena; Anthony Marra; 383pp; Contemporary Fiction; July 18
  55. Apex; Ramez Naam; 602pp; Science Fiction; July 21
  56. Avogadro Corp; William Hertling; 240pp; Science Fiction; July 23
  57. A.I. Apocalypse; William Hertling; 239pp; Science Fiction; July 28
  58. The Last Firewall; William Hertling; 305pp; Science Fiction; July 30
  59. The Turing Exception; William Hertling; 290pp; Science Fiction; July 31
  60. The Kill Artist; Daniel Silva; 490pp; Contemporary Fiction; August 3
  61. Henry I; C. Warren Hollister; 588pp; History; August 9
  62. For The King’s Favor; Elizabeth Chadwick; 530pp; Historical Fiction; August 13
  63. Mapping the World; Michael Swift; 256pp; History; August 15
  64. Out of the Black; Evan Currie; 440pp; Science Fiction; August 16
  65. To Defy a King; Elizabeth Chadwick; 523pp; Historical Fiction; August 21
  66. @War: The Rise of the Military-Internet Complex; Shane Harris; 288pp; Nonfiction; August 22
  67. The Bone Clocks; David Mitchell; 624pp; Contemporary Fiction; August 25
  68. The Lions of Lucerne; Brad Thor; 624pp; Contemporary Fiction; August 28
  69. In An Antique Land: History in the Guise of a Traveller’s Tale; Amitav Ghosh; 400pp; Nonfiction; August 31
  70. Outlander; Diana Gabaldon; 640pp; Historical Fiction; September 6
  71. The Abyss Beyond Dreams; Peter F. Hamilton; 608pp; Science Fiction; September 12
  72. In Xanadu: A Quest; William Dalrymple; 320pp; Travel; September 17
  73. Post Captain; Patrick O’Brian; 467pp; Historical Fiction; September 22
  74. On The Steel Breeze; Alastair Reynolds; 532pp; Science Fiction; September 24
  75. The Age of Kali: Indian Travels and Encounters; William Dalrymple; 356pp; Travel; September 29
  76. The Golem and the Jinni; Helene Wecker; 512pp; Historical Fiction; September 30
  77. Veritas; Monaldi and Sorti; 693pp; Historical Fiction; October 7
  78. The Years of Rice and Salt; Kim Stanley Robinson; 784pp; Science Fiction; October 17
  79. The Moon is a Harsh Mistress; Robert Heinlein; 382pp; Science Fiction; October 18
  80. The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution; Walter Isaacson; 542pp; History; November 1
  81. Hunter Killer: Inside America’s Unmanned Air War; T. Mark McCurley; 368pp; Nonfiction; November 14
  82. Nemesis Games; James S. A. Corey; 544pp; Science Fiction; November 17
  83. The Girl Who Kicked the Hornet’s Nest; Stieg Larsson; 672pp; Contemporary Fiction; November 30
  84. The English Assassin; Daniel Silva; 416pp; Contemporary Fiction; December 10
  85. Afghanistan: A Military History from Alexander the Great to the Taliban Insurgency; Stephen Tanner; 392pp; History; December 23
  86. H.M.S. Surprise; Patrick O’Brian; 416pp; Historical Fiction; December 24
  87. The Confessor; Daniel Silva; 480pp; Contemporary Fiction; December 26
  88. The Mauritius Command; Patrick O’Brian; 348pp; Historical Fiction; December 27

October 2016 Dublin IE1/IEPTO1 class open for registration

Due to popular demand (our IEPTO2 class in Ireland in October 2015 sold out with 40 students!), we’ve juggled our schedule around a bit and found space to fit in another European class in 2016, and it’s open for registration!

Kimberly and I will be teaching our signature IEPTO-1 (formerly IE1) Immersion Event on Performance Tuning and Optimization, in partnership with our great friends Bob and Carmel Duffy of Prodata.

The class will be October 3-7, and there’s an early-bird discount available depending on when you register:

  • Early Bird (before June 30th 2016) €2,395
  • Full Price (after June 30th 2016) €2,795

You can get all the details on the class page here.

We hope to see you there!

SQLskills holiday gift to you: all 2014 Insider videos

As we all wind down for the 2015 holiday season, we want to give the SQL Server community a holiday gift to say ‘thank you’ for all your support during 2015, and what better gift than more free content?!

As many of you know, I publish a bi-weekly newsletter to more than 13,000 subscribers that contains an editorial on a SQL Server topic, a demo video, and a book review of my most recently completed book. We’re making all the 2014 demo videos available so everyone can watch them – 22 videos in all, mostly in WMV format. I did the same thing the last few years for the 2013 videos, 2012 videos, and 2011 videos.

Here are the details:

  • January 2014: Using Plan Explorer to find missing indexes (from Pluralsight) (video | demo code)
  • January 2014: Statistics updates and query plan recompilations (video | demo code)
  • February 2014: Exploring the Lock Pages In Memory setting (video | demo code)
  • February 2014: Getting started with Service Broker (video | demo code)
  • March 2014: Investigating FGCB_ADD_REMOVE latch contention (from Pluralsight) (video | demo code)
  • March 2014: Investigating CPU utilization issues on VMware (video | demo code)
  • March 2014: Creating a simple server monitoring system (video | demo code)
  • April 2014: Investigating sort operators in query plans (video | demo code)
  • May 2014: Investigating page split internals (from Pluralsight) (video | demo code)
  • May 2014: Examining instance configuration options (from Pluralsight) (video | demo code)
  • June 2014: Using Extended Events predicates correctly (video | demo code)
  • June 2014: Investigating the plan cache (from Pluralsight) (video | demo code)
  • July 2014: Investigating INCLUDEd columns (video | demo code)
  • July 2014: Part 2 on investigating INCLUDEd columns (video | demo code)
  • August 2014: Using framing with window functions (video | demo code)
  • August 2014: Investigating join order forcing problems (from Pluralsight, MOV format) (video | demo code)
  • September 2014: Investigating global trace flags (video | demo code)
  • September 2014: Plan invalidation causes (from Pluralsight, MP4 format) (video | no demo code)
  • October 2014: Investigating reverse-order deadlocks (from Pluralsight) (video | demo code)
  • October 2014: Finding hidden plan costs using Extended Events (video | demo code)
  • November 2014: Using OFFSET and FETCH (video | demo code)
  • December 2014: Query plan operators from columnstore indexes (video | demo code)

If you want to see the 2015 videos before next December, get all the newsletter back-issues, and follow along as the newsletters come out, just sign up at http://www.SQLskills.com/Insider. No strings attached, no marketing or advertising, just free content.

Happy Holidays and enjoy the videos!

Survey: tempdb file configuration (code to run)

I’m running this survey to help the SQL Server team at Microsoft, who would like to get a broad view of current tempdb configurations. I’ll editorialize the results as well in a week or two.

Feel free to run the code below any way you want, and also add a single preceding column to the result set (e.g. server name or number) if you want, but PLEASE do not add any *rows* of data apart from what I’ve asked for, otherwise it makes the data processing very time consuming, especially if you send results from hundreds of servers. I know people who do that are trying to be helpful, but I really don’t need any other data apart from what I’ve asked for.

You can send me results in email in a text file or spreadsheet, or leave a comment below. The code will work on SQL Server 2005 onwards.

Thanks!

IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
    WHERE [name] LIKE N'#PSR_tracestatus%')
    DROP TABLE [#PSR_tracestatus];
GO

CREATE TABLE [#PSR_tracestatus] (
    [TraceFlag] INT, [Status] INT, [Global] INT, [Session] INT);

INSERT INTO #PSR_tracestatus EXEC ('DBCC TRACESTATUS (1117) WITH NO_INFOMSGS');
INSERT INTO #PSR_tracestatus EXEC ('DBCC TRACESTATUS (1118) WITH NO_INFOMSGS');

SELECT
	[os].[cores],
	(SELECT [Global] FROM #PSR_tracestatus WHERE [TraceFlag] = 1117) AS [1117],
	(SELECT [Global] FROM #PSR_tracestatus WHERE [TraceFlag] = 1118) AS [1118],
	[file_id], [type_desc], [size], [max_size], [growth], [is_percent_growth]
FROM
	tempdb.sys.database_files AS [df],
	(
		SELECT COUNT (*) AS [cores]
		FROM sys.dm_os_schedulers
		WHERE status = 'VISIBLE ONLINE'
	) AS [os];

DROP TABLE [#PSR_tracestatus];
GO
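To illustrate the single preceding column mentioned above, here’s a minimal sketch using @@SERVERNAME as the illustrative identifier, shown against just the tempdb file columns for brevity (the survey code itself stays exactly as given):

```sql
-- Illustrative only: a single *column* (a server identifier) prefixed to the
-- result set, as the survey instructions allow. No extra rows are added.
SELECT
    @@SERVERNAME AS [server_name],
    [file_id], [type_desc], [size], [max_size], [growth], [is_percent_growth]
FROM [tempdb].[sys].[database_files];
```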

Data recovery: investigating weird SELECT failures around corruption

An interesting corruption problem cropped up on the MCM distribution list yesterday and after I figured it out, I thought it would make a good blog post in case anyone hits a similar problem.

In a nutshell, the problem was corruption such that a simple SELECT * query failed, but a SELECT * query with an ORDER BY clause worked.

Let’s investigate!

Creating the scenario

First I’ll create the specific corruption. I’m going to create a simple table with a clustered index, sizing the rows so there’s only one row per page.

CREATE DATABASE [Company];
GO

USE [Company];
GO

CREATE TABLE [test] (
	[c1] INT IDENTITY,
	[c2] UNIQUEIDENTIFIER DEFAULT NEWID (),
	[c3] CHAR (4100) DEFAULT 'a');
GO
CREATE CLUSTERED INDEX [test_cl] ON [test] ([c1], [c2]);
GO

SET NOCOUNT ON;
GO

INSERT INTO [test] DEFAULT VALUES;
GO 10000

Now I’ll delete one of the rows, creating a page with a single ghost record on it, which I can see using DBCC PAGE on the first PFS page in the database.

DELETE FROM [test] WHERE [c1] = 150;
GO

DBCC TRACEON (3604);
DBCC PAGE ([Company], 1, 1, 3);
GO
<snip output for brevity>
(1:289)      - (1:295)      =     ALLOCATED   0_PCT_FULL                     Mixed Ext
(1:296)      - (1:437)      =     ALLOCATED   0_PCT_FULL                              
(1:438)      -              =     ALLOCATED   0_PCT_FULL Has Ghost                    
(1:439)      - (1:8087)     =     ALLOCATED   0_PCT_FULL                              

So page (1:438) is the one that had the row with key value 150 on it. It’s still allocated and linked into the clustered index structure though, so I’ll force the Access Methods code to ‘see’ it by doing a scan that’ll include it, and that will queue the page up for cleaning by the Ghost Cleanup Task.

SELECT COUNT (*) FROM [test] WHERE [c1] < 200;
GO

And now if I wait 10 seconds and look at the PFS page again, I can see it’s been cleaned and deallocated – it’s no longer part of the clustered index. (You’ll notice that the PFS byte still says that the page has a ghost record; that’s because when a page is deallocated, the only PFS bits that are changed are the allocation status.)
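If you’re following along, the 10-second pause can be scripted rather than timed by hand:

```sql
-- Give the Ghost Cleanup Task time to process the queued page
WAITFOR DELAY '00:00:10';
GO
```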

DBCC PAGE ([Company], 1, 1, 3);
GO
<snip output for brevity>
(1:289)      - (1:295)      =     ALLOCATED   0_PCT_FULL                     Mixed Ext
(1:296)      - (1:437)      =     ALLOCATED   0_PCT_FULL                              
(1:438)      -              = NOT ALLOCATED   0_PCT_FULL Has Ghost                    
(1:439)      - (1:8087)     =     ALLOCATED   0_PCT_FULL                              

Nothing’s corrupt at this point, so let’s cause some problems.

Creating the corruption

First off I’m going to zero out page (1:438) using DBCC WRITEPAGE:

ALTER DATABASE [Company] SET SINGLE_USER;
GO

DECLARE @offset INT;
SELECT @offset = 0;

WHILE (@offset < 8185)
BEGIN
	DBCC WRITEPAGE (N'Company', 1, 438, @offset, 8, 0x0000000000000000, 1);
	SELECT @offset = @offset + 8;
END;
GO

ALTER DATABASE [Company] SET MULTI_USER;
GO

And there’s still no corruption here, because page (1:438) is a deallocated page.

So now I’ll corrupt it by forcing it to be allocated again. For this I need to find the offset of the PFS byte for page (1:438) using a hex dump of the PFS page and looking for a page that has the PFS bits matching the PFS output for page (1:438) above. The page only has the ‘Has Ghost’ bit set, which is 0x08.

DBCC PAGE ([Company], 1, 1, 2);
GO
<snip>
Memory Dump @0x00000000185EA000

00000000185EA000:   010b0000 00000000 00000000 00000000 00000000  ....................
00000000185EA014:   00000100 63000000 0200fc1f 01000000 01000000  ....c.....ü.........
00000000185EA028:   12010000 fd000000 01000000 00000000 00000000  ....ý...............
00000000185EA03C:   7944876a 01000000 00000000 00000000 00000000  yD‡j................
00000000185EA050:   00000000 00000000 00000000 00000000 00009c1f  ..................œ.
00000000185EA064:   44444444 00004444 60647060 74706070 60607060  DDDD..DD`dp`tp`p``p`
00000000185EA078:   60707060 40404040 40404040 61706070 60606070  `pp`@@@@@@@@ap`p```p
00000000185EA08C:   60706060 60706060 60706060 60606070 40404040  `p```p```p`````p@@@@
00000000185EA0A0:   40404040 40404040 40404030 60706060 70607060  @@@@@@@@@@@0`p``p`p`
00000000185EA0B4:   70706070 70606060 70607060 70607060 70607060  pp`pp```p`p`p`p`p`p`
00000000185EA0C8:   70607060 70607060 70606060 60607060 60706070  p`p`p`p`p`````p``p`p
00000000185EA0DC:   60706070 60706070 60707070 60607060 60706060  `p`p`p`p`ppp``p``p``
00000000185EA0F0:   60706060 70606060 60606060 70607060 60706060  `p``p```````p`p``p``
00000000185EA104:   60606060 60606060 40000000 00000000 60606060  ````````@.......````
00000000185EA118:   60606060 60606060 60606060 60606060 60606060  ````````````````````
00000000185EA12C:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA140:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA154:   60606060 64646260 40404040 40404040 40404040  ````ddb`@@@@@@@@@@@@
00000000185EA168:   40404040 40400000 00000000 40400000 00000000  @@@@@@......@@......
00000000185EA17C:   40404040 00000000 70606060 60606060 40404040  @@@@....p```````@@@@
00000000185EA190:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1A4:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1B8:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1CC:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1E0:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1F4:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA208:   40404040 40404040 40404040 40404040 40400840  @@@@@@@@@@@@@@@@@@.@
00000000185EA21C:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA230:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
<snip>

Can you spot the 0x08 byte? It’s at offset 0x21a on the page.

I can force page (1:438) to become allocated again by setting that byte offset in the PFS page to 0x40, again using DBCC WRITEPAGE.

DBCC WRITEPAGE (N'Company', 1, 1, 538, 1, 0x40);
GO
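For reference, here's the arithmetic behind that offset as a quick Python sketch (not part of the original demo): the PFS byte array starts at offset 0x64 into the page, so the byte tracking page p is at offset 0x64 + p, which for page 438 is 0x21A. The flag values are the commonly-documented PFS bit definitions — treat the exact bit meanings as an assumption, not something verified by this demo.

```python
# Sketch of the PFS offset arithmetic and byte flags.
# Bit meanings below are the commonly-documented PFS definitions
# (an assumption here): 0x40 = allocated, 0x20 = mixed extent,
# 0x10 = IAM page, 0x08 = has ghost records; low 3 bits = free space.

PFS_ARRAY_OFFSET = 0x64  # PFS byte array starts here (see the dump above)

def pfs_offset(page_id):
    """Offset within the PFS page of the byte tracking page_id."""
    return PFS_ARRAY_OFFSET + page_id

def decode_pfs_byte(b):
    """Return the status flags set in a PFS byte."""
    flags = []
    if b & 0x40: flags.append("ALLOCATED")
    if b & 0x20: flags.append("MIXED_EXT")
    if b & 0x10: flags.append("IAM_PAGE")
    if b & 0x08: flags.append("HAS_GHOST")
    return flags

print(hex(pfs_offset(438)))    # 0x21a, matching the dump above
print(decode_pfs_byte(0x40))   # ['ALLOCATED']
print(decode_pfs_byte(0x08))   # ['HAS_GHOST'] - i.e. deallocated
```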

And now if I run DBCC CHECKDB, I can see some corruption:

DBCC CHECKDB (N'Company') WITH NO_INFOMSGS;
GO
Msg 8909, Level 16, State 1, Line 68
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:438) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.
Msg 8928, Level 16, State 1, Line 68
Object ID 245575913, index ID 1, partition ID 72057594040614912, alloc unit ID 72057594045857792 (type In-row data): Page (1:438) could not be processed.  See other errors for details.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'test' (object ID 245575913).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'Company'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (Company).

And the final step is to make the database read-only:

ALTER DATABASE [Company] SET READ_ONLY;
GO

Investigating the corruption

In the case described in the DL, there was no backup and so the client wanted to extract as much data as possible.

Running a simple SELECT * didn't work:

SELECT * FROM [test];
GO

The query starts returning results and then fails with:

Msg 824, Level 24, State 2, Line 74
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:438; actual 0:0). It occurred during a read of page (1:438) in database ID 10 at offset 0x0000000036c000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\Company.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

But if I run a SELECT * that has ordering, it works fine:

SELECT * FROM [test] ORDER BY [c1];
GO

What’s going on?

Explanation

The explanation has to do with how the scans for the two SELECT statements work.

The first scan is doing what’s called an allocation order scan. This is where the Access Methods decides not to use the index structure to give back the records. An allocation order scan has three requirements:

  • The query plan must allow for an unordered scan of the index
  • The index must be bigger than 64 pages
  • The data in the index must be guaranteed not to change

The allocation order scan uses the IAM pages to load a scanning object and then zip through the extents in allocation order, using the PFS pages to determine which pages in the extents are allocated and should be read and processed by the scan.

The second scan is doing a normal ordered scan, which navigates down to the left-hand side of the leaf level in the index and then follows the leaf-level page linkages to scan through the index.

So where does the corruption come in?

The page that I corrupted and then forced to be allocated again isn't linked into the leaf level of the index, and so the ordered scan doesn't attempt to read it. However, because it's allocated, the allocation order scan thinks it's a valid part of the extent that contains it and so tries to read it, resulting in the 824 error.

The key to having this scenario is that the database is set to read-only, which satisfies the third requirement for an allocation order scan. If you set the database back to read-write, and then run the first SELECT statement, it will work perfectly because the allocation order scan requirements aren’t being met any longer.
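To make the difference concrete, here's a toy Python model (made-up page links, nothing to do with the real storage engine structures) of an allocated page that isn't linked into the leaf-level chain:

```python
# Toy model: one page is marked allocated in the PFS but is not
# linked into the leaf-level page chain (like page 438 above).

pfs_allocated = {436, 437, 438, 439}          # what the PFS says is allocated
leaf_chain = {436: 437, 437: 439, 439: None}  # leaf-level next-page links
corrupt = {438}                               # the orphaned, corrupt page

def allocation_order_scan():
    # reads every allocated page, in page-number order
    for p in sorted(pfs_allocated):
        if p in corrupt:
            raise IOError(f"Msg 824: incorrect pageid reading (1:{p})")
        yield p

def ordered_scan(first=436):
    # follows the leaf-level linkages; never visits the orphaned page
    p = first
    while p is not None:
        yield p
        p = leaf_chain[p]

print(list(ordered_scan()))       # succeeds: the orphaned page is skipped
try:
    list(allocation_order_scan())
except IOError as e:
    print(e)                      # fails on the orphaned page
```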

You can read more about allocation order scans in this great post from Paul White.

Summary

A lot of the time when dealing with database corruption and trying to effect comprehensive data recovery without backups, you’ll run into weird situations like this. When you do, step back, look at the query plan for what you’re doing, and think about what the Access Methods is doing under the covers to implement the query plan. And then think about how to work around that so you can continue getting more data back.

Hope this helps!

Expanded IEPDS class on Practical Data Science in Chicago, May 2016

After the success of our brand-new class on Practical Data Science this week, we’ve decided to expand it to five days and bring it back to Chicago again next year.

The course is our Immersion Event on Practical Data Science using Azure Machine Learning, SQL Data Mining, and R, presented by our great friend Rafal Lukawiecki.

Rafal’s course will be five days long, with the following modules:

  1. Overview of Practical Data Science for Business
  2. Data
  3. Process
  4. Algorithm Overview
  5. Tools and Getting Started
  6. Segmentation
  7. Classification
  8. Basic Statistics
  9. Model Validation
  10. Classifier Precision
  11. Regressions
  12. Similarity Matching and Recommenders
  13. Other Algorithms (Brief Overview)
  14. Production and Model Maintenance

You can read through the detailed curriculum here.

The class will be in Chicago, at our usual location, from May 9-13, 2016 – all the details are here.

Cool stuff – hope to see you there!

Calling all user group leaders! We want to present for you in 2016!

By the end of December, we at SQLskills will have remotely presented to 87 user groups and PASS virtual chapters around the world in 2015!

We’d love to present remotely for your user group in 2016, anywhere in the world. It’s not feasible for us to travel to user groups or SQL Saturdays unless we’re already in that particular city, but remote presentations are easy to do and are becoming more and more popular (we haven’t had any bandwidth problems doing remote presentations to groups as far away as South Africa, Australia, and New Zealand, plus Poland, Canada, Belgium, Netherlands, Ukraine, Ireland, UK, Israel, Denmark, Austria). This way we can spread the community love around user groups everywhere that we wouldn’t usually get to in person.

We have our own Webex accounts which we generally use, or we can use your GoToMeeting or Webex. We prefer not to use Lync as we’ve had too many problems with it around user group laptops and sound.

So, calling all user group leaders! If you’d like one of us (me, Kimberly, Jon, Erin, Glenn, Tim) to present remotely for you in 2016 (or maybe even multiple times), send me an email including:

  • Details of which user group you represent
  • The usual day of the month, meeting time, and timezone of the user group
  • Which months you have available, starting in January 2016

And I’ll let you know who’s available with what topics so you can pick.

What’s the catch? There is no catch. We’re just continuing our community involvement next year and we all love presenting :-)

And don’t think that because you’re only reading this now we can’t fit you in – send me an email and we’ll see what we can do.

We’re really looking forward to engaging with you all!

Cheers

PS By all means pass the word on to any SharePoint and .Net user group leaders you know too.

On index key size, index depth, and performance

In my Insider newsletter a couple of weeks ago, I discussed how index fragmentation is often considered when designing indexes, but index depth often isn’t. In the newsletter I said I’d do a more comprehensive blog post with some data, so this is it.

Fanout and Index Depth

The index depth is determined by the fanout of the index. From the newsletter:

The fanout of an index measures, for a page at level x in an index, how many pages it references in the level below (nearer the leaf level). The higher the fanout is, the fewer the number of levels in the index.

The index key size impacts the size of the structure needed to reference it. Specifically, the index key is pushed up to all entries (and all levels) in the index as it’s used to allow navigation through the index from the root page down to the leaf level.

The larger the index key size, the fewer index records can be stored in an index page, and so the lower the fanout. The lower the fanout, the more levels are required in the index, depending on the number of pages at the leaf level.

For instance, if the fanout is 10 in an index, that means each index page can hold 10 index records, referencing 10 pages at the level below in the index. If the index has 10,000 pages at the leaf level, there needs to be 1,000 pages in the level above, then 100 pages, then 10 pages, and finally the root page. That’s a total of 5 levels.

For the same data, if the index fanout is changed to 100, and the index has 10,000 pages at the leaf level, the next level needs 100 pages, and then there’s the root page. That’s a total of only three levels.
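The two worked examples above can be sketched as a quick calculation (plain Python, just the arithmetic):

```python
import math

def index_depth(leaf_pages, fanout):
    """Number of levels in a b-tree index with the given leaf page
    count and fanout (index records per non-leaf page)."""
    levels, pages = 1, leaf_pages
    while pages > 1:
        pages = math.ceil(pages / fanout)  # pages needed one level up
        levels += 1
    return levels

print(index_depth(10000, 10))   # 10000 -> 1000 -> 100 -> 10 -> 1: 5 levels
print(index_depth(10000, 100))  # 10000 -> 100 -> 1: 3 levels
```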

I want to measure whether varying an index’s key size, and hence its fanout and depth, makes a noticeable performance difference for single-row select operations. There won’t be any noticeable effect on scans, as a scan only involves a single traversal of the index, to find the starting point of the scan. (Ok, it’s a little more complicated than that for scans if any of the index leaf-level pages change while the scan is positioned on them, but that’s not relevant here.)

Test Description

The test I’m going to use is:

  • Create a table with 2 million records, each record being large enough that only one record can fit on each data page (I was going to do ten million rows, but that was just taking too long)
  • Drop any existing clustered index
  • Create a new clustered index with varying key size from 8 to 900 bytes (creating the index after populating the table guarantees the tightest space usage)
  • Ensure that all the index is in memory (clear wait stats and make sure there are no page reads from disk during the next step)
  • Time how long it takes to do a single row lookup of all 2 million rows (run 5 tests and average the times)

Here’s the code for my test:

USE [master];
GO

IF DATABASEPROPERTYEX (N'IndexDepthTest', N'Version') != 0
BEGIN
    ALTER DATABASE [IndexDepthTest] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [IndexDepthTest];
END
GO

CREATE DATABASE [IndexDepthTest] ON PRIMARY (
    NAME = N'IndexDepthTest_data',
    FILENAME = N'T:\IDT\IndexDepthTest_data.mdf',
    SIZE = 32768MB,
    FILEGROWTH = 256MB)
LOG ON (
    NAME = N'IndexDepthTest_log',
    FILENAME = N'N:\IDT\IndexDepthTest_log.ldf',
    SIZE = 2048MB,
    FILEGROWTH = 256MB);
GO

ALTER DATABASE [IndexDepthTest] SET RECOVERY SIMPLE;
GO

SET NOCOUNT ON;
GO

USE [IndexDepthTest];
GO

CREATE TABLE [DepthTest] (
    [c1] BIGINT IDENTITY,
    [c2] CHAR (8) DEFAULT 'c2',		-- to allow 16-byte key
    [c3] CHAR (92) DEFAULT 'c3',	-- to allow 100-byte key
    [c4] CHAR (300) DEFAULT 'c4',	-- to allow 400-byte key
    [c5] CHAR (500) DEFAULT 'c5',	-- to allow 900-byte key
    [c6] CHAR (4000) DEFAULT 'c6');	-- to force one row per leaf page

INSERT INTO [DepthTest] DEFAULT VALUES;
GO 2000000

-- Run one of the following sets of DROP/CREATE statements

-- No existing clustered index to drop
CREATE CLUSTERED INDEX [8ByteKey] ON [DepthTest] ([c1]);
GO

DROP INDEX [8ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [16ByteKey] ON [DepthTest] ([c1], [c2]);
GO

DROP INDEX [16ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [100ByteKey] ON [DepthTest] ([c1], [c3]);
GO

DROP INDEX [100ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [400ByteKey] ON [DepthTest] ([c1], [c3], [c4]);
GO

DROP INDEX [400ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [900ByteKey] ON [DepthTest] ([c1], [c3], [c4], [c5]);
GO

SELECT
    [index_depth],
    [index_level],
    [page_count],
    [record_count]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'IndexDepthTest'),
    OBJECT_ID (N'DepthTest'),
    1,
    0,
    'DETAILED');
GO

DECLARE @c INT = 0;
WHILE (@c != 5)
BEGIN
    DECLARE @t DATETIME = GETDATE ();
    DECLARE @a BIGINT = 0;
    DECLARE @b BIGINT;

    WHILE (@a != 2000000)
    BEGIN
        SELECT @b = [c1] FROM [DepthTest] WHERE [c1] = @a;
        SELECT @a = @a + 1;
    END;

    SELECT GETDATE () - @t;
    SELECT @c = @c + 1;
END;
GO

My test server is a Dell R720 with 16 physical cores (Intel E5-2670 @ 2.60 GHz), 64GB of memory, a Fusion-io/SanDisk 640GB SSD for storage, and I’m running the test on SQL Server 2012.

The test is designed both to make sure that the index is traversed all the way down to the leaf level (and the leaf record has to be accessed to check the existence of the value being selected and to retrieve it), and to make sure that all pages in the index are in memory.

I’ll walk through the steps for the 8-byte cluster key and then present the data for all the tests.

It took a few minutes to do the 2 million inserts, and then create the first clustered index. The results of the DMV call were:

index_depth index_level page_count           record_count
----------- ----------- -------------------- --------------------
4           0           2000000              2000000
4           1           4214                 2000000
4           2           16                   4214
4           3           1                    16

So with an index key size of 8 bytes, the index needs 4214 pages at level 1 in the index structure to hold references to all 2 million leaf-level pages. This means the fanout value is 2000000 / 4214, which is approximately 474.

The times for the 2 million selects for the 8-byte cluster key were 21.983s, 21.94s, 21.973s, 21.967s, 21.963s, with an average of 21.9652s, and a per-select average of 10.98 microseconds.
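Reproducing that arithmetic (plain Python, using the numbers above):

```python
# Arithmetic for the 8-byte key test: fanout from the DMV output,
# and the average/per-select times from the five test runs.
times = [21.983, 21.94, 21.973, 21.967, 21.963]

fanout = 2_000_000 / 4214            # leaf pages / level-1 pages
avg = sum(times) / len(times)        # seconds for 2 million selects
per_select_us = avg / 2_000_000 * 1e6

print(int(fanout), round(avg, 4), round(per_select_us, 4))
# 474 21.9652 10.9826
```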

Test Results

Running the test for each of my test key sizes produced the following results:

Key Size Index Depth Total Page Count Fanout Average Time for selects Rough time per select
-------- ----------- ---------------- ------ ------------------------ ---------------------
8        4           2004231          474    21.9652 secs             10.9826 microsecs
16       4           2006980          288    21.8122 secs             10.9061 microsecs
100      5           2028182          72     22.9522 secs             11.4976 microsecs
400      6           2111124          19     23.7482 secs             11.8741 microsecs
900      8           2285728          8      25.5732 secs             12.7866 microsecs

The results clearly show that there’s a performance penalty for index seeks when the index has more levels. At each level of the index during a seek, a binary search takes place, to find the right index record to use to navigate down to the next level lower in the index, and this binary search takes CPU time.

For each additional level in the index, my results show that it takes roughly 0.4 to 0.5 microseconds of extra time, and that’s pure CPU time as there were no page reads during the tests.

You might wonder why the per-select time for the 16-byte key index is less than for the 8-byte key index, even though they have the same depth of 4 in my test. That’s to do with the binary search algorithm. On average, the number of comparisons needed for a binary search over x elements is log2(x). For the 8-byte index, the fanout (i.e. the number of records per page for the binary search) is 474, giving an average of 8.9 comparisons. For the 16-byte index, the fanout is 288, giving an average of 8.2 comparisons. This slight drop accounts for the slight drop we see in the test time – it’s a tiny bit more efficient to have a lower fanout with the same index depth. I’m not going to say that this means you’re better off with a GUID cluster key than a bigint – that’s a whole other discussion with much more to consider than just single-row select performance :-)
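A quick check of those comparison counts:

```python
import math

# Average comparisons for a binary search over n entries is roughly log2(n).
for key_size, fanout in [(8, 474), (16, 288)]:
    print(f"{key_size}-byte key: ~{math.log2(fanout):.1f} comparisons per level")
# 8-byte key: ~8.9 comparisons per level
# 16-byte key: ~8.2 comparisons per level
```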

Summary

My results show that index depth does matter.

Index depth is determined by the number of rows in the index and the index key size. You can’t control the number of rows, but you can control the index key size. Where possible, the smaller you can keep the index key size, the smaller the index depth will be for the same number of records, and the faster an index traversal from root page to leaf level will be.

Even though we’re only talking about fractions of a microsecond, for workloads with huge numbers of single-row select operations, that all adds up, and especially so on older, slower processors where the difference will be more pronounced than in my tests. And these results also counter the argument that says “index depth doesn’t matter because it’s all in memory anyway”.

Bottom line – this is one more reason to keep your index keys as narrow as possible.

Btw, Kimberly goes into all this in much more detail in her excellent 4-hour Pluralsight course on SQL Server: Why Physical Database Design Matters.

Low priority locking wait types

[Edit 2016: Check out my new resource – a comprehensive library of all wait types and latch classes – see here.]

SQL Server 2014 (and Azure SQL Database V12) added some cool new functionality for online index operations to allow you to prevent long-term blocking because of the two blocking locks that online index operations require.

At the start of any online index operation, it acquires an S (share) table lock. This lock will be blocked until all transactions that are changing the table have committed, and while the lock is pending, it will block any transactions wanting to change the table in any way. The S lock is only held for a short amount of time, then dropped to an IS (Intent-Share) lock for the long duration of the operation. At the end of any online index operation, it acquires a SCH-M (schema modification) table lock, which you can think of as a super-exclusive lock. This lock will be blocked by any transaction accessing or changing the table, and while the lock is pending, it will block any transactions wanting to read or change the table in any way.

The new syntax allows you to specify how long the online index operation will wait for each of these locks, and what to do when the timeout expires (nothing: NONE, kill the online index operation: SELF, or kill the blockers of the online index operation: BLOCKERS – see Books Online for more info). While the online index operation is blocked, it shows a different lock wait type than we’re used to seeing, and any lock requests are allowed to essentially jump over the online index operation in the lock pending queues – i.e. the online index operation waits with lower priority than everything else on the system.
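As a simplified illustration (a toy Python model, not how the lock manager is actually implemented), the “jumping over” behavior amounts to a two-tier wait queue, where normal requests are always granted ahead of the low-priority waiter:

```python
from collections import deque

# Toy model of a lock wait queue with a low-priority waiter:
# normal requests always jump ahead of low-priority ones.

class LockQueue:
    def __init__(self):
        self.normal = deque()
        self.low_priority = deque()

    def enqueue(self, name, low=False):
        (self.low_priority if low else self.normal).append(name)

    def grant_next(self):
        # the low-priority waiter is only granted when nothing else waits
        if self.normal:
            return self.normal.popleft()
        if self.low_priority:
            return self.low_priority.popleft()
        return None

q = LockQueue()
q.enqueue("online index rebuild (SCH-M)", low=True)  # arrives first...
q.enqueue("user update 1")
q.enqueue("user select 2")
print(q.grant_next())   # user update 1
print(q.grant_next())   # user select 2
print(q.grant_next())   # online index rebuild (SCH-M) - granted last
```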

To demonstrate this, I’ve got a table called NonSparseDocRepository, with a clustered index called NonSparse_CL, and 100,000 rows in the table.

First, I’ll kick off an online index rebuild of the clustered index, specifying a 1 minute wait, and to kill itself if the wait times out:

ALTER INDEX [NonSparse_CL] ON [NonSparseDocRepository] REBUILD
WITH (FILLFACTOR = 70, ONLINE = ON (
	WAIT_AT_LOW_PRIORITY (
		MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)
	)
);
GO

I let it run for ten seconds or so, to make sure it got past the initial table S lock required. Now, in another connection, I’ll start a transaction that takes an IX table lock, which will block the final SCH-M lock the online index operation requires:

BEGIN TRAN;
GO

UPDATE [NonSparseDocRepository]
SET [c4] = '1'
WHERE [DocID] = 1;
GO

And then I’ll wait until the drive light on my laptop goes off, which lets me know that the online index rebuild is stalled. If I look in sys.dm_os_waiting_tasks (using the script in this post), I’ll see the rebuild is blocked (script output heavily edited for clarity and brevity):

session_id exec_context_id scheduler_id wait_duration_ms wait_type                blocking_session_id resource_description
57         0               4            7786             LCK_M_SCH_M_LOW_PRIORITY 58                  objectlock

Look at the wait type: LCK_M_SCH_M_LOW_PRIORITY. The _LOW_PRIORITY suffix indicates that this is a special lock wait attributable to the online index operation being blocked.

This also neatly proves that the wait-at-low-priority feature applies to both the blocking locks that online index operations require, even if the first one isn’t blocked.

And eventually the online index operation fails, as follows:

Msg 1222, Level 16, State 56, Line 1
Lock request time out period exceeded.

If I leave that open transaction in the other connection (holding its IX table lock), and try the index rebuild again, with the exact same syntax, it’s immediately blocked and the sys.dm_os_waiting_tasks script shows:

session_id exec_context_id scheduler_id wait_duration_ms wait_type                blocking_session_id resource_description
57         0               4            8026             LCK_M_S_LOW_PRIORITY     58                  objectlock

This shows that the initial blocking lock is blocked, and is waiting at low priority.

So if either of these wait types show up during your regular wait statistics analysis, now you know what’s causing them.