Announcing the comprehensive SQL Server Wait Types and Latch Classes Library

It’s finally ready!

For the last two years, I’ve been working on-and-off on a new community resource. It was postponed during 2015 while I mentored 50+ people, but this year I’ve had a bunch of time to work on it.

I present to the community a comprehensive library of all wait types and latch classes that have existed since SQL Server 2005 (yes, it includes 2016 waits and latches).

The idea is that over time, this website will have the following information about all wait types and latch classes:

  • What they mean
  • When they were added
  • How they map into Extended Events (complete for all entries already)
  • Troubleshooting information
  • Example call stacks of where they occur inside SQL Server
  • Email link for feedback and questions

As of today, I have complete information for more than 225 common wait types (897 entries in the library) and 26 common latch classes (185 entries in the library), and I’m adding more information constantly.

If there’s one that I haven’t done yet, and you’re really interested in it, click the email link on the page to let me know. Also, if you’ve seen something be a bottleneck that I haven’t covered, let me know so I can update the relevant page too.

I’m doing this because there is so little information about waits and latches available online and I know that people are constantly searching for information about them. This is very much a labor of love for me as wait statistics are now my favorite area to teach.

Waiting until it’s complete isn’t feasible, so it’s open and available for use now!

Check it out at https://www.SQLskills.com/help/waits and https://www.SQLskills.com/help/latches, or jump to some examples: PAGELATCH_EX, SOS_SCHEDULER_YIELD, WRITELOG, CXPACKET.
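
As a quick example of how you might use the library from a server, here’s a sketch (my own; it uses the same URL pattern as my waiting tasks script) that lists a server’s top waits alongside a link to the relevant library page:

SELECT TOP (10)
    [wait_type],
    [wait_time_ms],
    N'https://www.sqlskills.com/help/waits/' + [wait_type] AS [Library URL]
FROM sys.dm_os_wait_stats
ORDER BY [wait_time_ms] DESC;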

Enjoy!

Reconciling set-based operations with row-by-row iterative processing

Yesterday in class we had a discussion around the conceptual problem of reconciling the fact that SQL Server does set-based operations, but that it does them in query plans that pass single rows around between operators. In other words, it uses iterative processing to implement set-based operators.

The crux of the discussion is: if SQL Server is passing single rows around, how is that a set-based operation?

I explained it in two different ways…

SQL Server Example

This explanation compares two ways of doing the following logical operation using SQL Server: update all the rows in the Products table where ProductType = 1 and set the Price field to be 10% higher.

The cursor-based way (row-by-agonizing-row, or RBAR) would be something like the following:

DECLARE @ProductID   INT;
DECLARE @Price       FLOAT;

-- Declare a read-only, forward-only cursor over the qualifying rows
DECLARE [MyUpdate] CURSOR FAST_FORWARD FOR
SELECT [ProductID], [Price]
FROM [Products]
WHERE [ProductType] = 1;

OPEN [MyUpdate];

FETCH NEXT FROM [MyUpdate] INTO @ProductID, @Price;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Run a separate UPDATE for each row the cursor returns
    UPDATE [Products]
    SET [Price] = @Price * 1.1
    WHERE [ProductID] = @ProductID;

    FETCH NEXT FROM [MyUpdate] INTO @ProductID, @Price;
END

CLOSE [MyUpdate];
DEALLOCATE [MyUpdate];

This method has to set up a scan over the Products table based on the ProductType, and then run a separate UPDATE transaction for each row returned from the scan. Each iteration incurs all the overhead of setting up the UPDATE query, starting the transaction, seeking to the correct row based on the ProductID, updating it, and then tearing down the transaction and query framework again.

The set-based way of doing it would be:

UPDATE [Products]
SET [Price] = [Price] * 1.1
WHERE [ProductType] = 1;

This has a single scan based on the ProductType, updating each row that matches, but the query, transaction, and scan are set up only once, and then all the rows are processed, one at a time, inside SQL Server.

The difference is that in the set-based way, all the iteration is done inside SQL Server, in the most efficient way it can, rather than manually iterating outside of SQL Server using the cursor.

Non-Technical Example

This explanation involves a similar problem but not involving SQL Server. Imagine you need to acquire twelve 4′ x 8′ plywood sheets from your local home improvement store.

You could drive to and from the store twelve times; each time you’d go into the store, purchase a sheet, wait for a staff member to become available to load it into your pickup truck, then drive home and unload it.

Or you could drive to the store once and purchase all twelve sheets in one go, with maybe four staff members making three trips each out to your pickup, carrying one sheet each time. Or even just one staff member making twelve trips out to your pickup.

Which method is more efficient? Multiple trips to the store or one trip to the store, no matter how many staff members are available to carry the sheets out?

Summary

No one in their right mind is going to make twelve trips to the home improvement store when one will suffice, just as no developer should write cursor/RBAR code for an operation that SQL Server can perform in a set-based manner (when that’s possible).

Set-based operations don’t mean that SQL Server processes the whole set at once – that’s clearly not possible as most sets have more rows than your server has processors (so all the rows in the set simply *can’t* be processed at the same time, even if all processors were running the same code at the same time) – but that it can process the set very, very efficiently by only constructing the processing framework (i.e. query plan with operators, scans, etc.) for the operation once and then iterating over the set of rows inside this framework.

PS: Check out the technical comment from Conor Cunningham below (Architect on the SQL Server team, and my counterpart on the Query Optimizer when I was a Dev Lead in the Storage Engine for SQL Server 2005).

Updated sys.dm_os_waiting_tasks script to add query DOP

[Edit 2016: Check out my new resource – a comprehensive library of all wait types and latch classes – see here.]

A question came up in class today about easily seeing the degree of parallelism for parallel query plans, so I’ve updated my waiting tasks script to pull in the dop field from sys.dm_exec_query_memory_grants. I’ve also added a URL field that points into the new waits library, and shortened some of the column names.

Here it is for your use.

Enjoy!

(Note that ‘text’ on one line does not have delimiters because that messes up the code formatting plugin):

/*============================================================================
  File:     WaitingTasks.sql

  Summary:  Snapshot of waiting tasks

  SQL Server Versions: 2005 onwards
------------------------------------------------------------------------------
  Written by Paul S. Randal, SQLskills.com

  (c) 2016, SQLskills.com. All rights reserved.

  For more scripts and sample code, check out 
    http://www.SQLskills.com

  You may alter this code for your own *non-commercial* purposes. You may
  republish altered code as long as you include this copyright and give due
  credit, but you must obtain prior permission before blogging this code.
  
  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF 
  ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED 
  TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
  PARTICULAR PURPOSE.
============================================================================*/
SELECT
    [owt].[session_id] AS [SPID],
    [owt].[exec_context_id] AS [Thread],
    [ot].[scheduler_id] AS [Scheduler],
    [owt].[wait_duration_ms] AS [wait_ms],
    [owt].[wait_type],
    [owt].[blocking_session_id] AS [Blocking SPID],
    [owt].[resource_description],
    CASE [owt].[wait_type]
        WHEN N'CXPACKET' THEN
            RIGHT ([owt].[resource_description],
                CHARINDEX (N'=', REVERSE ([owt].[resource_description])) - 1)
        ELSE NULL
    END AS [Node ID],
    [eqmg].[dop] AS [DOP],
    [er].[database_id] AS [DBID],
    CAST ('https://www.sqlskills.com/help/waits/' + [owt].[wait_type] AS XML) AS [Help/Info URL],
    [eqp].[query_plan],
    [est].text
FROM sys.dm_os_waiting_tasks [owt]
INNER JOIN sys.dm_os_tasks [ot] ON
    [owt].[waiting_task_address] = [ot].[task_address]
INNER JOIN sys.dm_exec_sessions [es] ON
    [owt].[session_id] = [es].[session_id]
INNER JOIN sys.dm_exec_requests [er] ON
    [es].[session_id] = [er].[session_id]
FULL JOIN sys.dm_exec_query_memory_grants [eqmg] ON
    [owt].[session_id] = [eqmg].[session_id]
OUTER APPLY sys.dm_exec_sql_text ([er].[sql_handle]) [est]
OUTER APPLY sys.dm_exec_query_plan ([er].[plan_handle]) [eqp]
WHERE
    [es].[is_user_process] = 1
ORDER BY
    [owt].[session_id],
    [owt].[exec_context_id];
GO
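
If you want to see the script in action, an easy way is to create a quick blocking scenario. Here’s a hypothetical sketch ([SomeTable] stands in for any table you can update, and it assumes read committed snapshot isn’t enabled):

-- Session 1: start a transaction and hold an exclusive row lock
BEGIN TRAN;
UPDATE [SomeTable] SET [SomeColumn] = [SomeColumn] WHERE [SomeID] = 1;

-- Session 2: this will block, waiting for a shared lock (LCK_M_S)
SELECT [SomeColumn] FROM [SomeTable] WHERE [SomeID] = 1;

-- Session 3: run the script above. Session 2 shows up with an LCK_M_S wait,
-- the [Blocking SPID] column points at session 1, and the [Help/Info URL]
-- column links straight into the waits library. Don't forget to ROLLBACK
-- in session 1 when you're done.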

Code to analyze the transaction hierarchy in the log

Over the weekend there was a discussion on the MVP distribution list about the sys.dm_tran_database_transactions DMV and how one cannot use it to accurately determine how much log an operation has generated because it doesn’t provide a roll-up of the sub-transaction metrics to the outer transaction. This makes the output somewhat non-intuitive.

The discussion prompted me to write some code I’ve been meaning to write since 2012, when SQL Server 2012 introduced a field in LOP_BEGIN_XACT log records that tracks the transaction ID of the parent transaction, allowing the hierarchy of transactions to be investigated.

The actual code is at the bottom of the article, and is available in a zip file here.

It provides two stored procs, sp_SQLskillsAnalyzeLog and sp_SQLskillsAnalyzeLogInner, with the former making use of the latter, and the latter calling itself recursively.

The sp_SQLskillsAnalyzeLog proc will dump the hierarchy of transactions in the transaction log. By default it will only show the top-level transactions (with no parent transaction), and it has the following parameters:

  • @DBName (with a default of master)
  • @Detailed (default 0, when 1 it will show the transaction begin time and Windows login, for top-level transactions only)
  • @Deep (default 0, when 1 it will show the sub-transaction hierarchy)
  • @PrintOption (default 0 for a resultset, 1 for textual output)

I’ve created the procs in master and marked them as system objects using sp_MS_marksystemobject. You can change them to be stored wherever you want.

The pseudo-code is as follows:

  • Get the info from the log into temp table 1
  • Create temp table 2 with a clustered index on an identity column
  • For each top-level transaction
    • If @Detailed, add the user name and start time
    • Get the last transaction added to temp table 2
    • If it’s the same as the one we’re about to add, increment the counter for the last one added, else add the new one
    • If @Deep, then, with recursion depth = 1,
      • **RP** for each sub-transaction of current next-level up transaction
        • Prefix ‘…’ x the recursion depth to the transaction name
        • Get the last transaction added to temp table 2
        • If it’s the same as the one we’re about to add, increment the counter for the last one added, else add the new one
        • Recurse to **RP**, increasing recursion depth
    • (doing it this way vastly reduces the amount of data to be stored in temp table 2)
  • select the result set or print it, depending on @PrintOption
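
For reference, here’s a standalone sketch of the query that the first pseudo-code step builds dynamically (the column names come straight from the proc below; [SomeDatabase] is a placeholder, and the [Parent Transaction ID] field is only populated on SQL Server 2012 onward):

USE [SomeDatabase];
GO

SELECT
    [Transaction ID],
    [Transaction Name],
    [Parent Transaction ID],
    [Begin Time],
    SUSER_SNAME ([Transaction SID]) AS [Who]
FROM fn_dblog (NULL, NULL)
WHERE [Operation] = N'LOP_BEGIN_XACT';
GO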

Let’s look at an example, using the SalesDB database that you can restore from a zip file on our resources page:

-- Restore the database
USE [master];
GO
ALTER DATABASE [SalesDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
RESTORE DATABASE [SalesDB]
	FROM DISK = N'D:\SQLskills\DemoBackups\SalesDB2014.bak'
WITH STATS = 10, REPLACE;
GO

ALTER DATABASE [SalesDB] SET RECOVERY SIMPLE;
GO

-- Create a smaller copy of the Sales table
USE [SalesDB];
GO

SELECT *
INTO [SalesCopy]
FROM [Sales]
WHERE [SalesID] < 100000;
GO

CREATE CLUSTERED INDEX [SalesCopy_CL] ON [SalesCopy] ([SalesID]);
GO

-- Empty the log
CHECKPOINT;
GO

-- Online rebuild the clustered index
ALTER INDEX [SalesCopy_CL] ON [SalesCopy] REBUILD WITH (ONLINE = ON);
GO

-- Analyze the log
EXEC sp_SQLskillsAnalyzeLog salesdb, @Detailed = 1, @Deep = 1, @PrintOption = 1;
GO

Here’s the output:
ALTER INDEX by APPLECROSS\Paul @ 2016/05/01 11:26:48:113
...ONLINE_INDEX_DDL 2 times
OnlineIndexInsertTxn by APPLECROSS\Paul @ 2016/05/01 11:26:48:113
...BTree Split/Shrink
...BulkExtentAlloc
...SplitPage
...BulkExtentAlloc
...SplitPage
...BTree Split/Shrink
...BulkExtentAlloc
...SplitPage
...BulkExtentAlloc
...SplitPage 85 times
Allocate Root by APPLECROSS\Paul @ 2016/05/01 11:26:48:113
...AllocFirstPage
Allocate Root by APPLECROSS\Paul @ 2016/05/01 11:26:48:113
...AllocFirstPage
OnlineIndexInsertTxn by APPLECROSS\Paul @ 2016/05/01 11:26:48:150
...SplitPage
...BulkExtentAlloc
...SplitPage
...BulkExtentAlloc
...SplitPage 86 times
...BulkExtentAlloc
...SplitPage 89 times
...BulkExtentAlloc
...SplitPage 57 times
...BulkExtentAlloc
...SplitPage 31 times
...BulkExtentAlloc
...SplitPage 88 times
...BulkExtentAlloc
...SplitPage 52 times
SetFileSize @ 2016/05/01 11:26:48:303

Pretty cool, eh? You can see that the online rebuild uses a bunch of top-level transactions, which makes it difficult to determine exactly how much transaction log it generated, as there isn’t one transaction that drives everything else. But using this script, now you can see what an operation does.

There are other uses of this too:

  • Searching through the log to see who’s doing what
  • Analysis of your stored proc transactions and what they cause to happen under the covers on the system (e.g. page splits)

I hope you find this useful! Let me know if there are any other features you’d like to see and I’ll figure out whether they’re feasible. I can think of at least:

  • Making it work on log backups
  • Providing a roll-up of log space used for transactions and their sub-transactions (would be pretty slow, but do-able)

Enjoy!

Here’s the code, and it’s in the zip file here. I’m sure there are ways to make this code more efficient; I’m not an expert T-SQL programmer :-)

/*============================================================================
  File:     sp_SQLskillsAnalyzeLog.sql

  Summary:  This script cracks the transaction log and prints a hierarchy of
			transactions

  SQL Server Versions: 2012 onwards
------------------------------------------------------------------------------
  Written by Paul S. Randal, SQLskills.com

  (c) 2016, SQLskills.com. All rights reserved.

  For more scripts and sample code, check out 
    http://www.SQLskills.com

  You may alter this code for your own *non-commercial* purposes. You may
  republish altered code as long as you include this copyright and give due
  credit, but you must obtain prior permission before blogging this code.
  
  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF 
  ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED 
  TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
  PARTICULAR PURPOSE.
============================================================================*/

USE [master];
GO

IF OBJECT_ID (N'sp_SQLskillsAnalyzeLog') IS NOT NULL
	DROP PROCEDURE [sp_SQLskillsAnalyzeLog];
GO

IF OBJECT_ID (N'sp_SQLskillsAnalyzeLogInner') IS NOT NULL
	DROP PROCEDURE [sp_SQLskillsAnalyzeLogInner];
GO

CREATE PROCEDURE sp_SQLskillsAnalyzeLogInner (
	@XactID AS CHAR (13),
	@Depth AS INT)
AS
BEGIN
	DECLARE @String VARCHAR (8000);
	DECLARE @InsertString VARCHAR (8000);
	DECLARE @Name VARCHAR (256);
	DECLARE @ID INT;

	DECLARE @SubXactID CHAR (13);
	DECLARE @SubDepth INT = @Depth + 3;

	DECLARE [LogAnalysisX] CURSOR FAST_FORWARD LOCAL FOR
	SELECT [Transaction ID], [Transaction Name]
	FROM ##SQLskills_Log_Analysis
	WHERE [Parent Transaction ID] = @XactID;

	OPEN [LogAnalysisX];

	FETCH NEXT FROM [LogAnalysisX] INTO @SubXactID, @Name;

	WHILE @@FETCH_STATUS = 0
	BEGIN
		SELECT @InsertString = REPLICATE ('.', @Depth) + @Name;

		-- Select the last transaction name inserted into the table
		SELECT TOP 1
			@ID = [ID],
			@String = [XactName]
		FROM
			##SQLskills_Log_Analysis2
		ORDER BY [ID] DESC;

		IF @String = @InsertString
			UPDATE
				##SQLskills_Log_Analysis2
			SET
				[Times] = [Times] + 1
			WHERE
				[ID] = @ID;
		ELSE
			INSERT INTO ##SQLskills_Log_Analysis2
			VALUES (@InsertString, 1);

		-- Recurse...
		EXEC sp_SQLskillsAnalyzeLogInner @SubXactID, @SubDepth;

		FETCH NEXT FROM [LogAnalysisX] INTO @SubXactID, @Name;
	END;

	CLOSE [LogAnalysisX];
	DEALLOCATE [LogAnalysisX];
END
GO

CREATE PROCEDURE sp_SQLskillsAnalyzeLog (
	-- The name of a database, default of master
	@DBName AS sysname = N'master',

	-- Detailed = 0 means just the transaction name
	-- Detailed = 1 means time and user
	@Detailed AS INT = 0,

	-- Deep = 0 means only the top-level transactions
	-- Deep = 1 means sub-transaction hierarchy (slow!)
	@Deep AS INT = 0,

	-- PrintOption = 0 means SELECT as a resultset
	-- PrintOption = 1 means PRINT as text
	@PrintOption AS INT = 0)
AS
BEGIN
	SET NOCOUNT ON;

	IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
		WHERE [name] = N'##SQLskills_Log_Analysis')
		DROP TABLE [##SQLskills_Log_Analysis];

	IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
		WHERE [name] = N'##SQLskills_Log_Analysis2')
		DROP TABLE [##SQLskills_Log_Analysis2];

	-- Only get the detailed info if we need it
	IF @Detailed = 1
		EXEC ('USE ' + @DBName + ';' +
			'SELECT [Transaction ID], [Transaction Name], [Parent Transaction ID],' +
			'[Begin Time], SUSER_SNAME ([Transaction SID]) AS [Who] ' +
			'INTO ##SQLskills_Log_Analysis FROM fn_dblog (null,null) ' +
			'WHERE [Operation] = ''LOP_BEGIN_XACT'';');
	ELSE
		EXEC ('USE ' + @DBName + ';' +
			'SELECT [Transaction ID], [Transaction Name], [Parent Transaction ID],' +
			'NULL AS [Begin Time], NULL AS [Who] ' +
			'INTO ##SQLskills_Log_Analysis FROM fn_dblog (null,null) ' +
			'WHERE [Operation] = ''LOP_BEGIN_XACT'';');
	

	CREATE TABLE ##SQLskills_Log_Analysis2 (
		[ID]		INT IDENTITY,
		[XactName]	VARCHAR (8000),
		[Times]		INT);

	CREATE CLUSTERED INDEX [ID_CL]
	ON ##SQLskills_Log_Analysis2 ([ID]);

	-- Insert a dummy row to make the loop logic simpler
	INSERT INTO ##SQLskills_Log_Analysis2
	VALUES ('PSRDummy', 1);

	-- Calculate the transaction hierarchy
	DECLARE @XactID		CHAR (13);
	DECLARE @Name		VARCHAR (256);
	DECLARE @Begin		VARCHAR (100);
	DECLARE @Who		VARCHAR (100);
	DECLARE @String		VARCHAR (8000);
	DECLARE @ID			INT;
	DECLARE @Counter	INT;

	DECLARE [LogAnalysis] CURSOR FAST_FORWARD FOR
	SELECT
		[Transaction ID], [Transaction Name], [Begin Time], [Who]
	FROM
		##SQLskills_Log_Analysis
	WHERE
		[Parent Transaction ID] IS NULL;

	OPEN [LogAnalysis];

	FETCH NEXT FROM [LogAnalysis] INTO @XactID, @Name, @Begin, @Who;

	WHILE @@FETCH_STATUS = 0
	BEGIN
		-- Select the last transaction name inserted into the table
		SELECT TOP 1
			@ID = [ID],
			@String = [XactName]
		FROM
			##SQLskills_Log_Analysis2
		ORDER BY ID DESC;

		-- If it's the same as we're about to insert, update the counter,
		-- otherwise insert the new transaction name
		IF @String = @Name
			UPDATE
				##SQLskills_Log_Analysis2
			SET
				[Times] = [Times] + 1
			WHERE
				[ID] = @ID;
		ELSE
		BEGIN
			SELECT @String = @Name;

			-- Add detail if necessary
			IF @Detailed = 1
			BEGIN
				-- Do this separately in case CONCAT_NULL_YIELDS_NULL is set
				IF @Who IS NOT NULL
					SELECT @String = @String + ' by ' + @Who;

				SELECT @String = @String + ' @ ' + @Begin;
			END

			INSERT INTO ##SQLskills_Log_Analysis2 VALUES (@String, 1);
		END

		-- Look for subtransactions of this one
		IF @Deep = 1
			EXEC sp_SQLskillsAnalyzeLogInner @XactID, 3;

		FETCH NEXT FROM [LogAnalysis] INTO @XactID, @Name, @Begin, @Who;
	END;

	CLOSE [LogAnalysis];
	DEALLOCATE [LogAnalysis];

	-- Discard the dummy row
	DELETE
	FROM
		##SQLskills_Log_Analysis2
	WHERE
		[ID] = 1;

	-- Print the hierarchy
	DECLARE [LogAnalysis2] CURSOR FOR
	SELECT
		[ID],
		[XactName],
		[Times]
	FROM
		##SQLskills_Log_Analysis2;

	OPEN [LogAnalysis2];

	-- Fetch the first transaction name, if any
	FETCH NEXT FROM [LogAnalysis2] INTO @ID, @String, @Counter;

	WHILE @@FETCH_STATUS = 0
	BEGIN
		IF @Counter > 1
		BEGIN
			SELECT @String = @String + ' ' +
				CONVERT (VARCHAR, @Counter) + ' times';
		END
		
		-- If we're going to SELECT the output, update the row
		IF @PrintOption = 0
			UPDATE
				##SQLskills_Log_Analysis2
			SET
				[XactName] = @String
			WHERE
				[ID] = @ID;
		ELSE
			PRINT @String;

		FETCH NEXT FROM [LogAnalysis2] INTO @ID, @String, @Counter;
	END;

	CLOSE [LogAnalysis2];
	DEALLOCATE [LogAnalysis2];

	IF @PrintOption = 0
	BEGIN
		SELECT
			[XactName]
		FROM
			##SQLskills_Log_Analysis2;
	END

	DROP TABLE ##SQLskills_Log_Analysis;
	DROP TABLE ##SQLskills_Log_Analysis2;
END
GO

EXEC sys.sp_MS_marksystemobject [sp_SQLskillsAnalyzeLog];
EXEC sys.sp_MS_marksystemobject [sp_SQLskillsAnalyzeLogInner];
GO

-- EXEC sp_SQLskillsAnalyzeLog salesdb, 1, 1, 1;

SQL Server health check by SQLskills: Spring discount offer

Through May and June, we’re offering a complete single-instance health-check for a flat fee of US$2,500… 

One of the most popular services we provide to new clients is a SQL Server Health Check, whether the new client is a Fortune-25 company or a small company with only a handful of employees. There’s a lot of healthy competition in the SQL Server world to provide this service, but we at SQLskills believe we provide the best value for money, because…

  • We automate the data collection process using a minimal-impact diagnostic tool that’s already installed with SQL Server, saving the client time and allowing data collection to be scheduled and handled by the client based on their schedule, rather than requiring many hours of data collection through an interactive online session.
  • We have custom data processing tools that we’ve developed that help us to analyze the data, saving the client money.
  • We document all our findings, and the report includes advice and justification (and links to deeper explanations) on remediation of each problem found, allowing the client to make the necessary changes on their own or investigate the problem further and plan/implement an appropriate fix. We also have a summary call with the client, allowing them to ask whatever questions they have on the information in the report. And of course, sometimes the client may choose to have us assist with or perform all the remediation work/further investigations – whatever works for them.
  • We only charge for the time we use (typically 12 hours or less per instance) rather than locking the client in to a costly, fixed-price engagement, saving the client money.
  • Our health check process has been developed and refined over the last 7 years, based on the accumulated knowledge and experience of our small team (all Data Platform MVPs with a combined 80+ years of deep SQL Server experience) and on past client engagements and problems, and it is constantly updated based on current trends and issues we see with SQL Server installations. Each of our consultants uses an internally developed checklist of over 130 items during our health audits, to ensure consistency and accuracy across the team, and we routinely review our processes to keep the team current with new checks and findings for each release of SQL Server. Our process is constantly evolving to be more efficient and thorough based on our customer interactions.
  • For large environments, we encourage the client to pick representative instances to check, and then extrapolate the results to instances with common configurations, saving the client money.

We do all of this to take the least amount of our client’s time, and provide the best return on their investment, whether for small environments or large corporate data centers. Also, as our audit is engineered to be as efficient as possible, it allows small companies with small I.T. budgets to make use of our services.

Some of the clients we’ve been working with for many years started with a single-instance health check and come back to us a few times per year for help when they need it (and we don’t charge any retainer fee). It’s really fun to get to know our clients, watch how their environments grow and improve, and meet them in our classes and at conferences like SQLintersection.

Back to the point of my blog post… It’s Spring, so it’s time for some Spring Cleaning! Throughout the months of May and June, we’re offering new or existing clients a complete single-instance health-check for a flat fee of US$2,500 – that’s more than 1/3 off the usual price for a 12-hour health check. The discount price covers us performing the health check, documenting the results, and a wrap-up conference call or Webex to go over the results.

So no matter what your I.T. budget, you CAN afford to have SQLskills on your team. And if you’re from a large corporation, for US$2,500, you really CAN’T afford to pass up this opportunity!

If you’re interested in working with us in May or June, send us an email and we’ll get in touch with you right away.

We look forward to getting to know you and your data team – we promise you won’t be disappointed!

New course: Scaling SQL Server 2012 and 2014: Part 2

Glenn’s new course is called SQL Server: Scaling SQL Server 2012 and 2014: Part 2 and is just under 3 hours long. It covers a plethora of configuration and hardware issues that can prevent a workload from scaling, plus methods for scaling up and out and new features in 2014 that can help. Part 1 of the course (here) covers application and code design issues that can prevent workload scaling.

The modules are:

  • Introduction
  • Database Configuration Settings
  • Instance Configuration Settings
  • Storage Subsystem Issues
  • Hardware Issues
  • Scaling Up SQL Server
  • Scaling Out SQL Server
  • New Scalability Features in SQL Server 2014

Check it out here.

We now have more than 135 hours of SQLskills online training available (see all our courses here), all for as little as $29/month through Pluralsight (including more than 4,500 other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!

New course: Understanding and Using DBCC Commands

Erin’s new course is called SQL Server: Understanding and Using DBCC Commands and is just over 2.25 hours long. It covers all the documented DBCC commands and a few of the undocumented ones, and Erin goes into detail about how to use DMVs, when available, to provide more detailed information.
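
As an example of the kind of DBCC-to-DMV pairing she covers (this particular pairing is my illustration, not necessarily one of Erin’s):

-- The classic DBCC way: log size and space used for every database
DBCC SQLPERF (LOGSPACE);

-- The DMV alternative (SQL Server 2012 onward): the same information for
-- the current database, as a resultset you can query and join
SELECT
    [total_log_size_in_bytes] / 1048576.0 AS [Log Size MB],
    [used_log_space_in_percent] AS [Log Space Used %]
FROM sys.dm_db_log_space_usage;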

The modules are:

  • Introduction
  • Basic Commands
  • Informational Commands
  • Maintenance Commands
  • Validation Commands
  • Undocumented Commands

Check it out here.

We now have more than 135 hours of SQLskills online training available (see all our courses here), all for as little as $29/month through Pluralsight (including more than 4,500 other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!

Developer Edition and Pluralsight subscription for free, courtesy of Microsoft

No, this isn’t an April Fools’ joke. Microsoft announced today at the Build conference that they’ve made SQL Server 2014 (and 2016 when it ships) Developer Edition completely free to download for anyone who’s a member of the free Visual Studio Dev Essentials community. The idea is to make it easier for developers to work with SQL Server in their applications.

Now that in itself is cool, as it saves $59.95 per Developer Edition license (today’s price on the Microsoft Store), but they’re also throwing in a six-month, completely unlimited subscription to Pluralsight (where we have 140 hours of SQL Server training). That’s worth 6 x $29 = $174 at today’s prices. When they run out of six-month subscriptions, they’ll be giving out three-month subscriptions.

With more than 4,500 courses online to learn from, how can you beat free?

Check out the Visual Studio page to sign up – first come, first served on the six-month subscriptions!

New course: Building Simple Asynchronous Applications

My first post of the year is about our first Pluralsight course of the year!

Jonathan’s new course is called SQL Server: Building Simple Asynchronous Applications and is just over 1.5 hours long. It’s the first in a series of courses that Jonathan’s doing this year about using Service Broker, based on the extensive work he’s done with some of our clients building asynchronous processes. I’m really excited about this course being published as I think Service Broker is hugely underutilized in the SQL Server world.
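
For a flavor of what the simplest possible Service Broker exchange looks like, here’s a minimal sketch of my own (all the object names are made up, it assumes a database with Service Broker enabled, and it’s not taken from Jonathan’s course):

-- Define the message type, contract, queues, and services
CREATE MESSAGE TYPE [HelloMessage] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [HelloContract] ([HelloMessage] SENT BY INITIATOR);
CREATE QUEUE [InitiatorQueue];
CREATE QUEUE [TargetQueue];
CREATE SERVICE [InitiatorService] ON QUEUE [InitiatorQueue] ([HelloContract]);
CREATE SERVICE [TargetService] ON QUEUE [TargetQueue] ([HelloContract]);
GO

-- Send a message from the initiator to the target, asynchronously
DECLARE @Handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @Handle
    FROM SERVICE [InitiatorService]
    TO SERVICE N'TargetService'
    ON CONTRACT [HelloContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @Handle
    MESSAGE TYPE [HelloMessage] (N'<hello>world</hello>');
GO

-- The target picks the message up from its queue whenever it's ready
RECEIVE TOP (1)
    CONVERT (XML, [message_body]) AS [Message]
FROM [TargetQueue];
GO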

The modules are:

  • Introduction
  • “Hello World” with Service Broker
  • Basic Architecture and Components
  • Building a Full Application
  • Basic Troubleshooting

Check it out here.

We now have 135 hours of SQLskills online training available (see all our courses here), all for as little as $29/month through Pluralsight (including more than four thousand other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!

2015 review: the year by the numbers

The last post of the year! It’s been a really excellent year all round, and it’s time for my traditional post counting down some of the numbers that have been my life this year.

  • 109318: the number of miles I flew on United
  • 33313: my current tweet total (up 1345 from 2014)
  • 12941: the number of subscribers to our Insider mailing list (up 1320 from 2014)
  • 11823: the number of emails I sent (down 444 from 2014)
  • 10843: the number of people who follow my Twitter ramblings (up 1448 from 2014)
  • 1603: the number of books (real ones) that I own (up 129 from 2014)
  • 868: the number of books I own but haven’t read yet (up 56 from 2014)
  • 148: the number of nights away from home (nearly all with Kimberly, so not *too* bad)
  • 131: the total number of hours of online training we have available on Pluralsight
  • 126: the number of dives I did this year in the Bahamas, Yap, Palau, and the Philippines, taking my total to 526
  • 115: the number of feet down on my deepest dive this year (going through swim-throughs with Jonathan in the Bahamas in January)
  • 91: the number of minutes of my longest dive this year
  • 88: the number of books I read (see this post)
  • 70: the number of days in Immersion Events and conferences
  • 42: the number of flights this year
  • 42: the number of Pluralsight courses we have available
  • 42: the answer to the question of life, the universe, and everything!
  • 40.55: the percentage of time we were away from home (which is why we call it our vacation home!)
  • 39: the number of SQLskills blog posts, including this one
  • 19: the number of different places we slept apart from our house and on planes
  • 18: the number of airports I flew through this year
  • 15: the number of new bird species I saw, taking my total to 499
  • 12: the number of monthly magazines I subscribe to
  • 8: the number of years I’ve been married to Kimberly
  • 8: the number of countries we visited this year
  • 7: the number of SQLskills full-time employees, all of whom are fabulous and indispensable
  • 7: the number of new airports I flew through, taking my total to 89
  • 4: the number of new countries I visited (Bahamas, Federated States of Micronesia, Palau, Philippines), taking my total to 36
  • 2: the number of new airlines I flew on, taking my total to 34
  • 2: the number of awesome daughters we have
  • 1: the number of new U.S. states I visited, taking my total to 23, and my first new one since 2011
  • 1: the number of new SQLskills team members, and accomplished breeder of tilapias: Tim Radney
  • 1: the person who is the best at snapping her fingers (especially when making fun of me – snap snap snap!): Erin Stellato
  • 1: the biggest hardware geek and ex-tank commander I know: Glenn Berry
  • 1: the number of Jonathan Kehayias in the world – thankfully :-)
  • 1: the number of indispensable assistants, without whom our lives would be a distressing quagmire – Libby we love you!
  • Finally, the one and only best person in my life: Kimberly, without whom I would be lost…

Thank you to everyone who reads our blogs, follows us on Twitter, sends us questions, watches our videos, comes to our classes, and generally makes being deeply involved in the SQL community a joy.

I sincerely wish you all a happy, healthy, and prosperous New Year!

Cheers!

(At Kanangra Walls in February a few hundred kilometers from Sydney, with Erin and Jon before teaching IEPTO2)


(On board the Palau Aggressor liveaboard dive boat in July, our eldest behind me)
