Survey: tempdb file configuration (code to run)

I’m running this survey to help the SQL Server team at Microsoft, who would like to get a broad view of current tempdb configurations. I’ll editorialize the results as well in a week or two.

Feel free to run the code below any way you want, and feel free to add a single preceding column to the result set (e.g. server name or number). But PLEASE do not add any *rows* of data apart from what I’ve asked for, otherwise it makes the data processing very time consuming, especially if you send results from hundreds of servers. I know people who do that are trying to be helpful, but I really don’t need any other data.

You can send me results by email in a text file or spreadsheet, or leave a comment below. The code works on SQL Server 2005 onwards.

Thanks!

IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
    WHERE [name] LIKE N'#PSR_tracestatus%')
    DROP TABLE [#PSR_tracestatus];
GO

CREATE TABLE [#PSR_tracestatus] (
    [TraceFlag] INT, [Status] INT, [Global] INT, [Session] INT);

-- Capture the status of trace flags 1117 (grow all files in a filegroup equally)
-- and 1118 (allocate full extents instead of mixed extents)
INSERT INTO #PSR_tracestatus EXEC ('DBCC TRACESTATUS (1117) WITH NO_INFOMSGS');
INSERT INTO #PSR_tracestatus EXEC ('DBCC TRACESTATUS (1118) WITH NO_INFOMSGS');

SELECT
	[os].[cores],
	(SELECT [Global] FROM #PSR_tracestatus WHERE [TraceFlag] = 1117) AS [1117],
	(SELECT [Global] FROM #PSR_tracestatus WHERE [TraceFlag] = 1118) AS [1118],
	[file_id], [type_desc], [size], [max_size], [growth], [is_percent_growth]
FROM
	tempdb.sys.database_files AS [df],
	(
		SELECT COUNT (*) AS [cores]
		FROM sys.dm_os_schedulers
		WHERE status = 'VISIBLE ONLINE'
	) AS [os];

DROP TABLE [#PSR_tracestatus];
GO

Data recovery: investigating weird SELECT failures around corruption

An interesting corruption problem cropped up on the MCM distribution list yesterday and after I figured it out, I thought it would make a good blog post in case anyone hits a similar problem.

In a nutshell, the problem was corruption such that a simple SELECT * query failed, but a SELECT * query with an ORDER BY clause worked.

Let’s investigate!

Creating the scenario

First I’ll create the specific corruption. I’m going to create a simple table with a clustered index, sizing the rows so that only one fits per page (each row is over 4,100 bytes, so two can’t fit on an 8KB page).

CREATE DATABASE [Company];
GO

USE [Company];
GO

CREATE TABLE [test] (
	[c1] INT IDENTITY,
	[c2] UNIQUEIDENTIFIER DEFAULT NEWID (),
	[c3] CHAR (4100) DEFAULT 'a');
GO
CREATE CLUSTERED INDEX [test_cl] ON [test] ([c1], [c2]);
GO

SET NOCOUNT ON;
GO

INSERT INTO [test] DEFAULT VALUES;
GO 10000

Now I’ll delete one of the rows, creating a page with a single ghost record on it, which I can see using DBCC PAGE on the first PFS page in the database.

DELETE FROM [test] WHERE [c1] = 150;
GO

DBCC TRACEON (3604);
DBCC PAGE ([Company], 1, 1, 3);
GO
<snip output for brevity>
(1:289)      - (1:295)      =     ALLOCATED   0_PCT_FULL                     Mixed Ext
(1:296)      - (1:437)      =     ALLOCATED   0_PCT_FULL                              
(1:438)      -              =     ALLOCATED   0_PCT_FULL Has Ghost                    
(1:439)      - (1:8087)     =     ALLOCATED   0_PCT_FULL                              
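
As an aside, you can also see the ghost record without DBCC PAGE, using the ghost_record_count column of sys.dm_db_index_physical_stats – a quick sketch (note that DETAILED mode reads all the pages, which may itself queue the ghost for cleanup, so do this before the steps below):

SELECT [index_level], [ghost_record_count]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'Company'), OBJECT_ID (N'test'), 1, 0, 'DETAILED');
GO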

So page (1:438) is the one that had the row with key value 150 on it. It’s still allocated and linked into the clustered index structure though, so I’ll force the Access Methods code to ‘see’ it by doing a scan that’ll include it, and that will queue the page up for cleaning by the Ghost Cleanup Task.

SELECT COUNT (*) FROM [test] WHERE [c1] < 200;
GO

And now if I wait 10 seconds and look at the PFS page again, I can see it’s been cleaned and deallocated – it’s no longer part of the clustered index. (You’ll notice that the PFS byte still says that the page has a ghost record; that’s because when a page is deallocated, the only PFS bits that are changed are the allocation status.)

DBCC PAGE ([Company], 1, 1, 3);
GO
<snip output for brevity>
(1:289)      - (1:295)      =     ALLOCATED   0_PCT_FULL                     Mixed Ext
(1:296)      - (1:437)      =     ALLOCATED   0_PCT_FULL                              
(1:438)      -              = NOT ALLOCATED   0_PCT_FULL Has Ghost                    
(1:439)      - (1:8087)     =     ALLOCATED   0_PCT_FULL                              

Nothing’s corrupt at this point, so let’s cause some problems.

Creating the corruption

First off I’m going to zero out page (1:438) using DBCC WRITEPAGE:

ALTER DATABASE [Company] SET SINGLE_USER;
GO

DECLARE @offset INT;
SELECT @offset = 0;

-- Zero out the entire 8,192-byte page, 8 bytes at a time
WHILE (@offset < 8185)
BEGIN
	DBCC WRITEPAGE (N'Company', 1, 438, @offset, 8, 0x0000000000000000, 1);
	SELECT @offset = @offset + 8;
END;
GO

ALTER DATABASE [Company] SET MULTI_USER;
GO

And there’s still no corruption here, because page (1:438) is a deallocated page.

So now I’ll corrupt it by forcing it to be allocated again. For this I need to find the offset of the PFS byte for page (1:438), by taking a hex dump of the PFS page and looking for a byte whose PFS bits match the PFS output for page (1:438) above. The page only has the ‘Has Ghost’ bit set, which is 0x08.

DBCC PAGE ([Company], 1, 1, 2);
GO
<snip>
Memory Dump @0x00000000185EA000

00000000185EA000:   010b0000 00000000 00000000 00000000 00000000  ....................
00000000185EA014:   00000100 63000000 0200fc1f 01000000 01000000  ....c.....ü.........
00000000185EA028:   12010000 fd000000 01000000 00000000 00000000  ....ý...............
00000000185EA03C:   7944876a 01000000 00000000 00000000 00000000  yD‡j................
00000000185EA050:   00000000 00000000 00000000 00000000 00009c1f  ..................œ.
00000000185EA064:   44444444 00004444 60647060 74706070 60607060  DDDD..DD`dp`tp`p``p`
00000000185EA078:   60707060 40404040 40404040 61706070 60606070  `pp`@@@@@@@@ap`p```p
00000000185EA08C:   60706060 60706060 60706060 60606070 40404040  `p```p```p`````p@@@@
00000000185EA0A0:   40404040 40404040 40404030 60706060 70607060  @@@@@@@@@@@0`p``p`p`
00000000185EA0B4:   70706070 70606060 70607060 70607060 70607060  pp`pp```p`p`p`p`p`p`
00000000185EA0C8:   70607060 70607060 70606060 60607060 60706070  p`p`p`p`p`````p``p`p
00000000185EA0DC:   60706070 60706070 60707070 60607060 60706060  `p`p`p`p`ppp``p``p``
00000000185EA0F0:   60706060 70606060 60606060 70607060 60706060  `p``p```````p`p``p``
00000000185EA104:   60606060 60606060 40000000 00000000 60606060  ````````@.......````
00000000185EA118:   60606060 60606060 60606060 60606060 60606060  ````````````````````
00000000185EA12C:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA140:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA154:   60606060 64646260 40404040 40404040 40404040  ````ddb`@@@@@@@@@@@@
00000000185EA168:   40404040 40400000 00000000 40400000 00000000  @@@@@@......@@......
00000000185EA17C:   40404040 00000000 70606060 60606060 40404040  @@@@....p```````@@@@
00000000185EA190:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1A4:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1B8:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1CC:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1E0:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA1F4:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA208:   40404040 40404040 40404040 40404040 40400840  @@@@@@@@@@@@@@@@@@.@
00000000185EA21C:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
00000000185EA230:   40404040 40404040 40404040 40404040 40404040  @@@@@@@@@@@@@@@@@@@@
<snip>

Can you spot the 0x08 byte? It’s at offset 0x21a on the page. (The PFS byte array starts at offset 0x64 – the 96-byte page header plus the 4-byte record header of the single PFS record – so the byte for page 438 is at 0x64 + 438 = 538 = 0x21a.)

I can force page (1:438) to become allocated again by setting the byte at that offset in the PFS page to 0x40, again using DBCC WRITEPAGE.

DBCC WRITEPAGE (N'Company', 1, 1, 538, 1, 0x40);
GO

And now if I run DBCC CHECKDB, I can see some corruption:

DBCC CHECKDB (N'Company') WITH NO_INFOMSGS;
GO
Msg 8909, Level 16, State 1, Line 68
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:438) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.
Msg 8928, Level 16, State 1, Line 68
Object ID 245575913, index ID 1, partition ID 72057594040614912, alloc unit ID 72057594045857792 (type In-row data): Page (1:438) could not be processed.  See other errors for details.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'test' (object ID 245575913).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'Company'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (Company).

And the final step is to make the database read-only:

ALTER DATABASE [Company] SET READ_ONLY;
GO

Investigating the corruption

In the case described on the distribution list, there was no backup, and so the client wanted to extract as much data as possible.

Running a simple SELECT * didn’t work:

SELECT * FROM [test];
GO

The query starts to return results and then fails with:

Msg 824, Level 24, State 2, Line 74
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:438; actual 0:0). It occurred during a read of page (1:438) in database ID 10 at offset 0x0000000036c000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\Company.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

But if I run a SELECT * that has ordering, it works fine:

SELECT * FROM [test] ORDER BY [c1];
GO

What’s going on?

Explanation

The explanation has to do with how the scans for the two SELECT statements work.

The first scan is doing what’s called an allocation order scan. This is where the Access Methods decides not to use the index structure to give back the records. An allocation order scan has three requirements:

  • The query plan must allow for an unordered scan of the index
  • The index must be bigger than 64 pages
  • The data in the index must be guaranteed not to change

The allocation order scan uses the IAM pages to load a scanning object and then zip through the extents in allocation order, using the PFS pages to determine which pages in the extents are allocated and should be read and processed by the scan.

The second scan is doing a normal ordered scan, which navigates down to the left-hand side of the leaf level in the index and then follows the leaf-level page linkages to scan through the index.
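
If you want to confirm which kind of scan a query will get, one way (a sketch, not part of the original investigation) is to look at the Ordered property of the scan operator in the query plan – Ordered="false" means the Access Methods is free to pick an allocation order scan if the other requirements are met:

SET SHOWPLAN_XML ON;
GO
SELECT * FROM [test];
GO
SET SHOWPLAN_XML OFF;
GO

SHOWPLAN_XML returns the plan without executing the query, so it’s safe to use even when the SELECT itself would fail with the 824 error.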

So where does the corruption come in?

The page that I corrupted and then forced to be allocated again isn’t linked into the leaf level of the index, and so the ordered scan doesn’t attempt to read it. However, because it’s allocated, the allocation order scan thinks it’s a valid part of the extent that contains it and so tries to read it, resulting in the 824 error.

The key to this scenario is that the database is set to read-only, which satisfies the third requirement for an allocation order scan. If you set the database back to read-write and then run the first SELECT statement, it works perfectly, because the allocation order scan requirements are no longer met.
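
Here’s a quick sketch to prove that to yourself with the demo database from above:

ALTER DATABASE [Company] SET READ_WRITE;
GO

-- Now succeeds: the ordered scan never touches the corrupt page, as it's not linked into the leaf level
SELECT * FROM [test];
GO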

You can read more about allocation order scans in this great post from Paul White.

Summary

A lot of the time when dealing with database corruption and trying to effect comprehensive data recovery without backups, you’ll run into weird situations like this. When you do, step back, look at the query plan for what you’re doing, and think about what the Access Methods is doing under the covers to implement the query plan. And then think about how to work around that so you can continue getting more data back.

Hope this helps!

Expanded IEPDS class on Practical Data Science in Chicago, May 2016

After the success of our brand-new class on Practical Data Science this week, we’ve decided to expand it to five days and bring it back to Chicago again next year.

The course is our Immersion Event on Practical Data Science using Azure Machine Learning, SQL Data Mining, and R, presented by our great friend Rafal Lukawiecki.

Rafal’s course will be five days long, with the following modules:

  1. Overview of Practical Data Science for Business
  2. Data
  3. Process
  4. Algorithm Overview
  5. Tools and Getting Started
  6. Segmentation
  7. Classification
  8. Basic Statistics
  9. Model Validation
  10. Classifier Precision
  11. Regressions
  12. Similarity Matching and Recommenders
  13. Other Algorithms (Brief Overview)
  14. Production and Model Maintenance

You can read through the detailed curriculum here.

The class will be in Chicago, at our usual location, from May 9-13, 2016 – all the details are here.

Cool stuff – hope to see you there!

Calling all user group leaders! We want to present for you in 2016!

By the end of December, we at SQLskills will have remotely presented to 87 user groups and PASS virtual chapters around the world in 2015!

We’d love to present remotely for your user group in 2016, anywhere in the world. It’s not feasible for us to travel to user groups or SQL Saturdays unless we’re already in that particular city, but remote presentations are easy to do and are becoming more and more popular (we haven’t had any bandwidth problems doing remote presentations to groups as far away as South Africa, Australia, and New Zealand, plus Poland, Canada, Belgium, Netherlands, Ukraine, Ireland, UK, Israel, Denmark, Austria). This way we can spread the community love around user groups everywhere that we wouldn’t usually get to in person.

We have our own Webex accounts which we generally use, or we can use your GoToMeeting or Webex. We prefer not to use Lync as we’ve had too many problems with it around user group laptops and sound.

So, calling all user group leaders! If you’d like one of us (me, Kimberly, Jon, Erin, Glenn, Tim) to present remotely for you in 2016 (or maybe even multiple times), send me an email including:

  • Details of which user group you represent
  • The usual day of the month, meeting time, and timezone of the user group
  • Which months you have available, starting in January 2016

And I’ll let you know who’s available with what topics so you can pick.

What’s the catch? There is no catch. We’re just continuing our community involvement next year and we all love presenting :-)

And don’t think that because you’re only reading this now we can’t fit you in – send me an email and we’ll see what we can do.

We’re really looking forward to engaging with you all!

Cheers

PS By all means pass the word on to any SharePoint and .Net user group leaders you know too.

On index key size, index depth, and performance

In my Insider newsletter a couple of weeks ago, I discussed how index fragmentation is often considered when designing indexes, but index depth often isn’t. In the newsletter I said I’d do a more comprehensive blog post with some data, so this is it.

Fanout and Index Depth

The index depth is determined by the fanout of the index. From the newsletter:

The fanout of an index measures, for a page at level x in an index, how many pages it references in the level below (nearer the leaf level). The higher the fanout is, the fewer the number of levels in the index.

The index key size affects the size of the index records in the levels above the leaf. Specifically, the index key is pushed up into all entries (at all levels) of the index, as it’s used to navigate through the index from the root page down to the leaf level.

The larger the index key size, the fewer index records can be stored in an index page, and so the lower the fanout. And the lower the fanout, the more levels are required in the index, depending on the number of pages at the leaf level.

For instance, if the fanout is 10 in an index, that means each index page can hold 10 index records, referencing 10 pages at the level below in the index. If the index has 10,000 pages at the leaf level, there needs to be 1,000 pages in the level above, then 100 pages, then 10 pages, and finally the root page. That’s a total of 5 levels.

For the same data, if the index fanout is changed to 100, and the index has 10,000 pages at the leaf level, the next level needs 100 pages, and then there’s the root page. That’s a total of only three levels.

I want to measure whether there’s a noticeable performance difference for single-row select operations based on the fanout, and hence the depth, of an index, by varying its key size. There won’t be any noticeable effect on scans, as a scan only involves a single traversal of the index, to find the starting point of the scan. (Ok, it’s a little more complicated than that for scans if any of the index leaf-level pages change while the scan is positioned on them, but that’s not relevant here.)

Test Description

The test I’m going to use is:

  • Create a table with 2 million records, each record being large enough that only one record can fit on each data page (I was going to do ten million rows, but that was just taking too long)
  • Drop any existing clustered index
  • Create a new clustered index with varying key size from 8 to 900 bytes (creating the index after populating the table guarantees the tightest space usage)
  • Ensure that the whole index is in memory (clear wait stats and make sure there are no page reads from disk during the next step – see the sketch after this list)
  • Time how long it takes to do a single row lookup of all 2 million rows (run 5 tests and average the times)
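
Here’s a minimal sketch of that memory-residency check (note that clearing wait stats is server-wide, so only do this on a dedicated test box):

-- Reset the cumulative wait statistics before the timing run
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);
GO

-- After the timing run, any PAGEIOLATCH waits mean pages had to be read from disk
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms]
FROM sys.dm_os_wait_stats
WHERE [wait_type] LIKE N'PAGEIOLATCH%'
    AND [waiting_tasks_count] > 0;
GO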

Here’s the code for my test:

USE [master];
GO

IF DATABASEPROPERTYEX (N'IndexDepthTest', N'Version') != 0
BEGIN
    ALTER DATABASE [IndexDepthTest] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [IndexDepthTest];
END
GO

CREATE DATABASE [IndexDepthTest] ON PRIMARY (
    NAME = N'IndexDepthTest_data',
    FILENAME = N'T:\IDT\IndexDepthTest_data.mdf',
    SIZE = 32768MB,
    FILEGROWTH = 256MB)
LOG ON (
    NAME = N'IndexDepthTest_log',
    FILENAME = N'N:\IDT\IndexDepthTest_log.ldf',
    SIZE = 2048MB,
    FILEGROWTH = 256MB);
GO

ALTER DATABASE [IndexDepthTest] SET RECOVERY SIMPLE;
GO

SET NOCOUNT ON;
GO

USE [IndexDepthTest];
GO

CREATE TABLE [DepthTest] (
    [c1] BIGINT IDENTITY,
    [c2] CHAR (8) DEFAULT 'c2',		-- to allow 16-byte key
    [c3] CHAR (92) DEFAULT 'c3',	-- to allow 100-byte key
    [c4] CHAR (300) DEFAULT 'c4',	-- to allow 400-byte key
    [c5] CHAR (500) DEFAULT 'c5',	-- to allow 900-byte key
    [c6] CHAR (4000) DEFAULT 'c6');	-- to force one row per leaf page

INSERT INTO [DepthTest] DEFAULT VALUES;
GO 2000000

-- Run one of the following sets of DROP/CREATE statements

-- No existing clustered index to drop
CREATE CLUSTERED INDEX [8ByteKey] ON [DepthTest] ([c1]);
GO

DROP INDEX [8ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [16ByteKey] ON [DepthTest] ([c1], [c2]);
GO

DROP INDEX [16ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [100ByteKey] ON [DepthTest] ([c1], [c3]);
GO

DROP INDEX [100ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [400ByteKey] ON [DepthTest] ([c1], [c3], [c4]);
GO

DROP INDEX [400ByteKey] ON [DepthTest];
GO
CREATE CLUSTERED INDEX [900ByteKey] ON [DepthTest] ([c1], [c3], [c4], [c5]);
GO

SELECT
    [index_depth],
    [index_level],
    [page_count],
    [record_count]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'IndexDepthTest'),
    OBJECT_ID (N'DepthTest'),
    1,
    0,
    'DETAILED');
GO

DECLARE @c INT = 0;
WHILE (@c != 5)
BEGIN
    DECLARE @t DATETIME = GETDATE ();
    DECLARE @a BIGINT = 1;	-- [c1] is an IDENTITY starting at 1
    DECLARE @b BIGINT;

    WHILE (@a != 2000001)	-- look up all 2 million rows
    BEGIN
        SELECT @b = [c1] FROM [DepthTest] WHERE [c1] = @a;
        SELECT @a = @a + 1;
    END;

    SELECT GETDATE () - @t;
    SELECT @c = @c + 1;
END;
GO

My test server is a Dell R720 with 16 physical cores (Intel E5-2670 @ 2.60 GHz), 64GB of memory, a Fusion-io/SanDisk 640GB SSD for storage, and I’m running the test on SQL Server 2012.

The test is designed both to make sure that the index is traversed all the way down to the leaf level (and the leaf record has to be accessed to check the existence of the value being selected and to retrieve it), and to make sure that all pages in the index are in memory.

I’ll walk through the steps for the 8-byte cluster key and then present the data for all the tests.

It took a few minutes to do the 2 million inserts and then create the first clustered index. The results of the DMV call were:

index_depth index_level page_count           record_count
----------- ----------- -------------------- --------------------
4           0           2000000              2000000
4           1           4214                 2000000
4           2           16                   4214
4           3           1                    16

So with an index key size of 8 bytes, the index needs 4214 pages at level 1 in the index structure to hold references to all 2 million leaf-level pages. This means the fanout value is 2000000 / 4214, which is approximately 474.
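
As a sanity check, a quick back-of-the-envelope calculation (a sketch, assuming a uniform fanout, which a freshly built index approximates well) predicts the depth the DMV reported:

-- Levels above the leaf = CEILING (log-base-474 of 2,000,000 leaf pages), plus the leaf level itself
SELECT CEILING (LOG (2000000.0) / LOG (474.0)) + 1 AS [predicted_index_depth];	-- gives 4
GO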

The times for the 2 million selects for the 8-byte cluster key were 21.983s, 21.94s, 21.973s, 21.967s, 21.963s, with an average of 21.9652s, and a per-select average of 10.98 microseconds.

Test Results

Running the test for each of my test key sizes produced the following results:

Key Size Index Depth Total Page Count Fanout Average Time for selects Rough time per select
-------- ----------- ---------------- ------ ------------------------ ---------------------
8        4           2004231          474    21.9652 secs             10.9826 microsecs
16       4           2006980          288    21.8122 secs             10.9061 microsecs
100      5           2028182          72     22.9522 secs             11.4976 microsecs
400      6           2111124          19     23.7482 secs             11.8741 microsecs
900      8           2285728          8      25.5732 secs             12.7866 microsecs

The results clearly show that there’s a performance penalty for index seeks when the index has more levels. At each level of the index during a seek, a binary search takes place, to find the right index record to use to navigate down to the next level lower in the index, and this binary search takes CPU time.

For each additional level in the index, my results show that it takes roughly 0.4 to 0.5 microseconds of extra time, and that’s pure CPU time as there were no page reads during the tests.

You might wonder why the per-select time for the 16-byte key index is less than for the 8-byte key index, even though they have the same depth of 4 in my test. That’s to do with the binary search algorithm. On average, the number of comparisons needed for a binary search of x elements is log2(x). For the 8-byte index, the fanout (i.e. the number of records per page to binary search through) is 474, giving an average of 8.9 comparisons per page. For the 16-byte index, the fanout is 288, giving an average of 8.2 comparisons per page. This slight drop accounts for the slight drop we see in the test time – a lower fanout is a tiny bit more efficient at the same index depth. I’m not going to say that this means you’re better off with a GUID cluster key than a bigint – that’s a whole other discussion with much more to consider than just single-row select performance :-)
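
If you want to check that arithmetic yourself, here’s a one-liner (just a sketch):

SELECT
    LOG (474.0) / LOG (2.0) AS [avg_comparisons_fanout_474],	-- ~8.9
    LOG (288.0) / LOG (2.0) AS [avg_comparisons_fanout_288];	-- ~8.2
GO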

Summary

My results show that index depth does matter.

Index depth is determined by the number of rows in the index and the index key size. You can’t control the number of rows, but you can control the index key size. Where possible, the smaller you can keep the index key size, the smaller the index depth will be for the same number of records, and the faster an index traversal from root page to leaf level will be.

Even though we’re only talking about fractions of a microsecond, for workloads with huge numbers of single-row select operations, that all adds up, and especially so on older, slower processors where the difference will be more pronounced than in my tests. And these results also counter the argument that says “index depth doesn’t matter because it’s all in memory anyway”.

Bottom line – this is one more reason to keep your index keys as narrow as possible.

Btw, Kimberly goes into all this in much more detail in her excellent 4-hour Pluralsight course on SQL Server: Why Physical Database Design Matters.

Low priority locking wait types

SQL Server 2014 (and Azure SQL Database V12) added some cool new functionality for online index operations that allows you to prevent the long-term blocking caused by the two blocking table locks that online index operations require.

At the start of any online index operation, it acquires a S (share) table lock. This lock will be blocked until all transactions that are changing the table have committed, and while the lock is pending, it will block any transactions wanting to change the table in any way. The S lock is only held for a short amount of time, then dropped to an IS (Intent-Share) lock for the long duration of the operation. At the end of any online index operation, it acquires a SCH-M (schema modification) table lock, which you can think of as a super-exclusive lock. This lock will be blocked by any transaction accessing or changing the table, and while the lock is pending, it will block any transactions wanting to read or change the table in any way.
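
You can watch this lock progression for yourself while a rebuild is running – a simple sketch to run in another connection:

-- Expect the table S lock briefly at the start, IS for the long duration,
-- and a final SCH-M request at the end of the operation
SELECT [request_session_id], [resource_type], [request_mode], [request_status]
FROM sys.dm_tran_locks
WHERE [resource_type] = N'OBJECT';
GO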

The new syntax allows you to specify how long the online index operation will wait for each of these locks, and what to do when the timeout expires (nothing: NONE, kill the online index operation: SELF, or kill the blockers of the online index operation: BLOCKERS – see Books Online for more info). While the online index operation is blocked, it shows a different lock wait type than we’re used to seeing, and any other lock requests are allowed to essentially jump over the online index operation in the lock pending queues – i.e. the online index operation waits with lower priority than everything else on the system.
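
For instance, here’s the general shape of the syntax with the BLOCKERS option (a sketch with hypothetical index and table names – the demo below uses SELF instead):

ALTER INDEX [MyIndex] ON [MyTable] REBUILD
WITH (ONLINE = ON (
	WAIT_AT_LOW_PRIORITY (
		MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)
	)
);
GO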

To demonstrate this, I’ve got a table called NonSparseDocRepository, with a clustered index called NonSparse_CL, and 100,000 rows in the table.

First, I’ll kick off an online index rebuild of the clustered index, specifying a one-minute wait and that the rebuild should kill itself if the wait times out:

ALTER INDEX [NonSparse_CL] ON [NonSparseDocRepository] REBUILD
WITH (FILLFACTOR = 70, ONLINE = ON (
	WAIT_AT_LOW_PRIORITY (
		MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)
	)
);
GO

I let it run for ten seconds or so, to make sure it got past the initial table S lock it requires. Now, in another connection, I’ll start a transaction that takes an IX table lock, which will block the final SCH-M lock the online index operation needs:

BEGIN TRAN;
GO

UPDATE [NonSparseDocRepository]
SET [c4] = '1'
WHERE [DocID] = 1;
GO

And then I’ll wait until the drive light on my laptop goes off, which lets me know that the online index rebuild is stalled. If I look in sys.dm_os_waiting_tasks (using the script in this post), I’ll see the rebuild is blocked (script output heavily edited for clarity and brevity):

session_id exec_context_id scheduler_id wait_duration_ms wait_type                blocking_session_id resource_description
57         0               4            7786             LCK_M_SCH_M_LOW_PRIORITY 58                  objectlock

Look at the wait type: LCK_M_SCH_M_LOW_PRIORITY. The _LOW_PRIORITY suffix indicates that this is a special lock wait attributable to the online index operation being blocked.

This also neatly proves that the wait-at-low-priority feature applies to both the blocking locks that online index operations require, even if the first one isn’t blocked.

And eventually the online index operation fails, as follows:

Msg 1222, Level 16, State 56, Line 1
Lock request time out period exceeded.

If I leave that open transaction in the other connection (holding its IX table lock), and try the index rebuild again, with the exact same syntax, it’s immediately blocked and the sys.dm_os_waiting_tasks script shows:

session_id exec_context_id scheduler_id wait_duration_ms wait_type                blocking_session_id resource_description
57         0               4            8026             LCK_M_S_LOW_PRIORITY     58                  objectlock

This shows that the initial blocking lock is blocked, and is waiting at low priority.
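
If you want a quick way to spot these waits, here’s a minimal sketch (a simplified version of the waiting-tasks script referenced above):

SELECT [session_id], [wait_duration_ms], [wait_type], [blocking_session_id]
FROM sys.dm_os_waiting_tasks
WHERE [wait_type] LIKE N'LCK[_]M[_]%[_]LOW[_]PRIORITY';
GO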

So if either of these wait types shows up during your regular wait statistics analysis, now you know what’s causing it.

Cool workshops at SQLintersection in October

This year’s Fall SQLintersection conference in October is fast approaching and we’ve got some excellent full-day workshops lined up for you:

  • Sunday 25th: Kimberly with Queries Gone Wrong: Statistics, Cardinality, Solutions
  • Monday 26th: Jon and Tim with Managing SQL Server for the Non-DBA
  • Monday 26th: Glenn with Analyzing and Improving I/O Subsystem Performance
  • Friday 30th: Me with Performance Troubleshooting with Waits and Latches
  • Friday 30th: Brent with Advanced SQL Server High Availability and Disaster Recovery

And then of course there are another 30+ workshops on all kinds of Developer and I.T. topics (even a hands-on Internet-of-Things workshop where you program micro-controllers against Azure IoT Services!).

You can get more details about this year’s Fall SQLintersection conference in Las Vegas, October 25-30 at this website. And don’t forget to use the discount code SQLSKILLS when you register to save $50.

We hope to see you there!

June 2016 London classes open for registration

I’ve just released our London classes for 2016 for registration!

We’ll be teaching the following classes:

  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • June 13-17 (US$200 discount for registering in 2015)
  • IEPTO2: Immersion Event on Performance Tuning and Optimization – Part 2
    • June 20-24 (US$200 discount for registering in 2015)

As you can see, we’re offering discounts off the early-bird price for all our 2016 classes if you register before the end of this year. The regular early-bird prices will apply from January 1st, 2016. If you’ve previously attended an Immersion Event, there’s a larger discount that’s always available to you whenever you register.

You can get all the logistical, registration, and curriculum details by drilling down from our main schedule page.

We hope to see you there!

New SSIS Immersion Events with Andy Leonard and Tim Mitchell

We’re very excited to announce that we’re partnering with our friends at Linchpin People to offer two new Immersion Events on SSIS, both debuting in 2016 alongside our existing classes.

IESSIS1: Immersion Event on Learning SQL Server Integration Services

This is a 5-day course taught by Andy Leonard, with the following modules:

  1. Introduction
  2. Creating Your First SSIS package
  3. Introduction to the Data Flow Task
  4. Data Flow Task 201
  5. Data Flow Task 202
  6. The Control Flow
  7. Loop Containers
  8. Data Flow Task 301
  9. Data Flow Task 302
  10. Data Flow Task 303
  11. Event Handlers, Logging, and Configurations
  12. Security, Deployment, and Execution
  13. ETL Design Patterns
  14. Enterprise Execution Patterns

You can read through the detailed curriculum here.

The first offering of this new class will be in Chicago, at our usual location, from May 2-6, 2016 – all the details are here.

IESSIS2: Immersion Event on Advanced SQL Server Integration Services

This is a 4-day course taught by Andy Leonard and Tim Mitchell, with the following modules:

  1. SSIS Catalog
  2. SSIS Security
  3. Load Performance
  4. Data Flow Internals
  5. Testing Patterns
  6. SSIS Scripting
  7. Data Quality and SSIS
  8. Hacking the SSIS Catalog
  9. Advanced Logging
  10. Advanced SSIS Load Patterns
  11. Error Handling
  12. Data Edge Cases
  13. Business Intelligence Markup Language (Biml)

You can read through the detailed curriculum here.

The first offering of this new class will be in Bellevue, at our usual location, from September 19-22, 2016 – all the details are here.

Cool stuff – hope to see you there!

2016 Chicago and Bellevue classes open for registration

I’ve just released our first sets of US classes for 2016 for registration!

Our classes in April/May will be in Chicago, IL:

  • IE0: Immersion Event for Junior/Accidental DBAs
    • April 25-27 (US$120 discount for registering in 2015)
  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • April 25-29 (US$200 discount for registering in 2015)
  • IEBI: Immersion Event on Business Intelligence
    • April 25-29 (US$200 discount for registering in 2015)
  • IEPTO2: Immersion Event on Performance Tuning and Optimization – Part 2
    • May 2-6 (US$200 discount for registering in 2015)
  • **NEW** IESSIS1: Immersion Event on Learning SQL Server Integration Services
    • May 2-6 (US$200 discount for registering in 2015)
  • IEHADR: Immersion Event on High Availability and Disaster Recovery
    • May 9-13 (US$200 discount for registering in 2015)

Our classes in September will be in Bellevue, WA:

  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • September 12-16 (US$200 discount for registering in 2015)
  • IEPTO2: Immersion Event on Performance Tuning and Optimization – Part 2
    • September 19-23 (US$200 discount for registering in 2015)
  • **NEW** IESSIS2: Immersion Event on Advanced SQL Server Integration Services
    • September 19-22 (US$160 discount for registering in 2015)

We’ll likely have some US classes in the second half of 2016 (details in Spring 2016), and there will be classes in London in June that I’ll announce in the next few days.

As you can see, we’re offering discounts off the early-bird price for all our 2016 classes if you register before the end of this year. The regular early-bird prices will apply from January 1st, 2016. If you’ve previously attended an Immersion Event, there’s a larger discount that’s always available to you whenever you register.

You can get all the logistical, registration, and curriculum details by drilling down from our main schedule page.

We hope to see you there!