Where Can I Download a Specific SQL Server Cumulative Update?

Microsoft has made a lot of changes to its servicing model for SQL Server over the years, specifically moving more towards Cumulative Updates and away from Service Packs as the means to deliver updates and fixes. Today I needed to replace a server in a Failover Clustered Instance (FCI) of SQL Server 2014 that was on Service Pack 1 + CU7. So I went to the KB article for CU7 (https://support.microsoft.com/en-us/kb/3162659) and clicked what I thought was the download link to CU7 for SP1:

Download Latest CU Link

However, to my dismay that link took me to the download page for SQL Server 2014 SP1 CU8 and not to CU7 as I was expecting. So I tried other SP1 CU links and they all redirected to the CU8 download page. Slightly irritated, I decided to send a Skype IM to Glenn (@GlennAlanBerry|Blog) to ask what was going on, because he keeps up with every update Microsoft publishes for SQL Server. Glenn told me that Microsoft’s recommendation and preference is for people to install the latest CU, so the KB articles now link to the latest CU download page. That doesn’t really help me with adding a failed FCI node back into an FCI that is on CU7, so Glenn offered to share the CU7 file with me via Dropbox.

It turns out that I didn’t need Glenn to share the file with me; I just needed to read the information on the KB article more closely and pay attention.

Download CU7 Link

The note at the bottom of the information specifically says that after future CUs are released, this specific CU can still be downloaded from the Microsoft Windows Update Catalog. So if you need a specific CU for SQL Server, whether for testing purposes or to fix a failed FCI node, head over to the Microsoft Windows Update Catalog and you can easily find it there.

One additional note:

You have to use Internet Explorer to access the Microsoft Windows Update Catalog; the site doesn’t appear to work with Google Chrome or other non-IE browsers.

SQL Sentry Plan Explorer Completely Free!

Just in case you missed the announcement by Greg Gonzalez at SQL Sentry this morning, Plan Explorer Pro and what would have become Plan Explorer Ultimate are now completely free.

plan-explorer-blog-post

Since SQL Sentry first released Plan Explorer in late 2010, I have been a huge fan, and it is one of the tools I use most heavily in my consulting work. I recommend it to clients almost every day and routinely demonstrate the different editions during our Immersion Events and at SQL Saturdays. Where the previous Free edition simplified analysis of execution plans, the features added from the previous Pro edition and what would have become the Ultimate edition make this hands down the best tool for performance tuning and execution plan analysis. The new Index Analysis tab, with the statistics histogram graphed out, will make analyzing plan skew due to parameter sensitivity much easier. This is a feature that I’ve been waiting to see live for months now and can’t wait to make more use of in client engagements.

index-analysis

Add to that the ability to analyze deadlock graphs, collect execution wait statistics with Extended Events, keep query execution plan history with notes, and use the advanced plan layout and filtering options, and now it is all COMPLETELY FREE! As if that’s not good enough, you can also attend a free query tuning seminar with Aaron Bertrand on Friday, September 9th, 11:00AM-12:00PM EST, where he will demonstrate how to use the new tool more effectively.

Free Seminar on Query Tuning

SSMS Availability Group Sync Addin Update

The SSMS Availability Group addin I wrote a few years back for SQL Server 2012 has been updated and released with SSMS 2014 support. This has been highly requested, and after a couple of rounds of private beta testing I’ve put together a new installer that installs the 2012 version and the 2014 version side by side. You can get the new installer from the download page for the addin in our Free Tools section. If you have the older version of the addin, you might need to uninstall it first for the newer updates to the 2012 version to be installed. There is no current support for SSMS 2016 due to the change of the VS shell platform to VS2015, which requires a complete overhaul of the addin into a VS Extension (VSIX) package.

Parallel Maintenance Tasks with Service Broker

I’ve been doing a bit of work on Service Broker while building a series of new Pluralsight courses, and one place I recently found a good use for it was on a client system with an 8TB VLDB, implementing parallel maintenance tasks using multiple threads. The idea for this implementation came out of the fact that their maintenance window keeps getting shorter as their workload grows, while the size of their database continues to get larger and larger. One of the things I noticed while doing a monthly health check of their server was that during the maintenance window their hardware is nowhere close to being fully utilized; there are plenty of resources available because their maintenance tasks execute serially. This specific client uses Ola Hallengren’s maintenance procedures, which we recommend to all of our clients, so I started to think about ways to make things run in parallel to get better utilization of the hardware during the maintenance window and hopefully reduce the time required to perform the day-to-day maintenance in their database.

I wish I could say the first thing I thought of was Service Broker, but it wasn’t. I toyed around with the idea of a custom C# app that would run Ola’s scripts, log the output commands to the CommandLog table in master, and then use multiple threads to run each of the individual tasks; something similar using PowerShell instead of a compiled app; and even multiple Agent jobs to break things up using different parameters for IndexOptimize. It turns out that these are all over-engineering a very basic problem that Service Broker is highly suited to solving, and it only took a little bit of trial and error to create a very robust solution that runs index and statistics maintenance tasks in parallel in SQL Server.

Setting up Service Broker Components

I chose to implement the Service Broker components for this configuration in the msdb database for a few reasons. First, it is already enabled for Service Broker, so I don’t have to worry about that; second, it already has TRUSTWORTHY set to ON, which is required by Agent; and third, it is owned by sa, so an activation stored procedure marked with EXECUTE AS OWNER runs as a full sysadmin on the server, allowing it to run any maintenance command generated by Ola’s scripts. We need to configure some basic objects for Service Broker in the database: a message type, a contract, a queue, and a service.

USE msdb
GO
-- Create the message type
CREATE MESSAGE TYPE
[OlaHallengrenMaintenanceTaskMessage]
VALIDATION = WELL_FORMED_XML;
GO

-- Create the contract
CREATE CONTRACT [OlaHallengrenMaintenanceTaskContract]
([OlaHallengrenMaintenanceTaskMessage]
SENT BY INITIATOR);
GO

-- Create the target queue and service
CREATE QUEUE OlaHallengrenMaintenanceTaskQueue;
GO

CREATE SERVICE
[OlaHallengrenMaintenanceTaskService]
ON QUEUE OlaHallengrenMaintenanceTaskQueue
([OlaHallengrenMaintenanceTaskContract]);
GO

Building an Activation Procedure to Automate Processing

With that set up, we now need to build an activation stored procedure to process the messages from the queue and perform the operations generated by Ola’s scripts. At first I thought this would be pretty straightforward, but it actually required a fair amount of additional logic to prevent poison messages and queue deactivation from occurring when multiple online index rebuilds were attempted on the same table. When this occurs the engine raises Error 1912:

Could not proceed with index DDL operation on %S_MSG ‘%.*ls’ because it conflicts with another concurrent operation that is already in progress on the object. The concurrent operation could be an online index operation on the same object or another concurrent operation that moves index pages like DBCC SHRINKFILE.

which causes the XACT_STATE() of the operation to become -1, which only allows the transaction processing the command to be rolled back, leading to a poison message within Service Broker. To mitigate that, I had to add a check against sys.dm_tran_locks for any ALTER_INDEX command to identify whether a conflicting lock is currently being held; if it is, we simply requeue the message and increment the retry count held within the message, using a WAITFOR to incrementally back off each time the command encounters a conflicting lock. Since Ola’s scripts log the command type, database name, object name, and command to be executed to the CommandLog table in the master database, all we need to queue in our message is the ID of the command to be processed by Service Broker; we can get the rest of the pertinent information from the CommandLog table directly. As a part of each task’s execution, the activation stored procedure will also update the StartTime and EndTime of the operation for historical trending of execution durations.

IF OBJECT_ID('OlaHallengrenMaintenanceTaskQueue_ActivationProcedure') IS NOT NULL
BEGIN
	DROP PROCEDURE OlaHallengrenMaintenanceTaskQueue_ActivationProcedure;
END
GO

CREATE PROCEDURE OlaHallengrenMaintenanceTaskQueue_ActivationProcedure
AS

  DECLARE @conversation_handle UNIQUEIDENTIFIER;
  DECLARE @message_body XML;
  DECLARE @message_type_name SYSNAME;
  DECLARE @Command NVARCHAR(MAX);
  DECLARE @ID INT
  DECLARE @DBName SYSNAME;
  DECLARE @ObjectName SYSNAME;
  DECLARE @CommandType NVARCHAR(60);
  DECLARE @Retry INT;
  DECLARE @FQN NVARCHAR(400);

  WHILE (1=1)
  BEGIN

    BEGIN TRANSACTION;

    WAITFOR
    ( RECEIVE TOP(1)
        @conversation_handle = conversation_handle,
        @message_body = message_body,
        @message_type_name = message_type_name
      FROM OlaHallengrenMaintenanceTaskQueue
    ), TIMEOUT 5000;

    IF (@@ROWCOUNT = 0)
    BEGIN
      ROLLBACK TRANSACTION;
      BREAK;
    END

    IF @message_type_name = N'OlaHallengrenMaintenanceTaskMessage'
    BEGIN
		SELECT @ID = @message_body.value('(CommandLogID)[1]', 'int'),
				@Retry = ISNULL(@message_body.value('(CommandLogID/@retry)[1]', 'int'), 0);

	   
		SELECT @Command = Command, 
				@ObjectName = ObjectName, 
				@DBName = DatabaseName,
				@FQN = QUOTENAME(DatabaseName) + '.' + QUOTENAME(SchemaName) + '.' + QUOTENAME(ObjectName),
				@CommandType = CommandType
		FROM master.dbo.CommandLog
		WHERE ID = @ID;

		-- 	Check for Index rebuilds if one is already running and requeue the request after waiting.
		IF @CommandType = 'ALTER_INDEX'
		BEGIN

			-- Check if we have an incompatible lock that would lead to a failed execution 
			IF EXISTS (SELECT 1
						FROM sys.dm_tran_locks AS tl
						WHERE (request_mode = 'SCH-M' OR 
							-- Concurrent maintenance task doing UpdateStats?
							-- (sys.dm_tran_locks reports schema stability locks as 'Sch-S', not as the LCK_M_SCH_S wait type)
							(request_mode = 'SCH-S' AND EXISTS (SELECT 1 FROM sys.dm_exec_sessions AS s WHERE is_user_process = 1 AND tl.request_session_id = s.session_id)))
						  AND resource_associated_entity_id = OBJECT_ID(@FQN)
						  AND resource_database_id = DB_ID(@DBName) )
			BEGIN
				-- Wait for 5 seconds times the number of retries to do an incremental backoff
				-- This will eventually cause the extra queue readers to time out, falling back to serial execution of the remaining tasks
				DECLARE @Delay NVARCHAR(8) = CONVERT(NVARCHAR(8), DATEADD(SECOND, @Retry * 5, CAST('00:00:00' AS TIME)), 108);
				WAITFOR DELAY @Delay;

				-- Increment retry count in the message
				SELECT @message_body = N'<CommandLogID retry="'+CAST(@Retry+1 AS NVARCHAR)+'">'+CAST(@id AS NVARCHAR)+N'</CommandLogID>';


				-- Send the message back to the queue for later processing
				;SEND ON CONVERSATION @conversation_handle
						MESSAGE TYPE [OlaHallengrenMaintenanceTaskMessage]
						(@message_body);

				GOTO SkipThisRun
			END
		END

		UPDATE master.dbo.CommandLog
		SET StartTime = GETDATE()
		WHERE ID = @ID;

		BEGIN TRY 
			EXECUTE(@Command);

			UPDATE master.dbo.CommandLog
			SET EndTime = GETDATE()
			WHERE ID = @ID;

		END TRY
		BEGIN CATCH

			UPDATE master.dbo.CommandLog
			SET EndTime = GETDATE(),
				ErrorMessage = ERROR_MESSAGE(),
				ErrorNumber = ERROR_NUMBER()
			WHERE ID = @ID;

		END CATCH

       END CONVERSATION @conversation_handle;
    END

	-- If end dialog message, end the dialog
    ELSE IF @message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    BEGIN
       END CONVERSATION @conversation_handle;
    END

	-- If error message, log and end conversation
    ELSE IF @message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
    BEGIN
		DECLARE @error INT;
		DECLARE @description NVARCHAR(4000);

		-- Pull the error code and description from the doc
		WITH XMLNAMESPACES ('http://schemas.microsoft.com/SQL/ServiceBroker/Error' AS ssb)
		SELECT
			@error = @message_body.value('(//ssb:Error/ssb:Code)[1]', 'INT'),
			@description = @message_body.value('(//ssb:Error/ssb:Description)[1]', 'NVARCHAR(4000)');
		
		RAISERROR(N'Received error Code:%i Description:"%s"', 16, 1, @error, @description) WITH LOG;

		-- Now that we handled the error logging cleanup
		END CONVERSATION @conversation_handle;
	END
 
 SkipThisRun:   
    COMMIT TRANSACTION;

  END
GO

Enabling Parallel Queue Activation

The last thing we need to do is bind our stored procedure to the queue for automated activation and to set the MAX_QUEUE_READERS to the number of concurrent parallel tasks we want to have executing on the system at any given point in time.  I spent some time testing different values for MAX_QUEUE_READERS and found that 5 was generally sufficient to provide significant reductions in runtimes without creating contention between the tasks being executed.

-- Alter the target queue to specify internal activation
ALTER QUEUE OlaHallengrenMaintenanceTaskQueue
WITH ACTIVATION
( STATUS = ON,
PROCEDURE_NAME = OlaHallengrenMaintenanceTaskQueue_ActivationProcedure,
MAX_QUEUE_READERS = 5,
EXECUTE AS SELF
);
GO

Now all we have to do is set up a job to feed the tasks into the queue as messages, and Service Broker will automatically begin processing the tasks in parallel up to the configured MAX_QUEUE_READERS value. To do this, I just used Ola’s scripts to log the commands to the CommandLog table without executing them, and then used a cursor with a ROW_NUMBER() definition to spread the work across different tables, instead of queuing the commands in the order generated, sending each ID as a message to Service Broker to process.

DECLARE @MaxID INT
SELECT @MaxID = MAX(ID)
FROM master.dbo.CommandLog;

SELECT @MaxID = ISNULL(@MaxID, 0)

-- Load new tasks into the Command Log
EXECUTE master.dbo.IndexOptimize
@Databases = 'AdventureWorks2014',
@FragmentationLow = 'INDEX_REBUILD_OFFLINE',
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@UpdateStatistics = 'ALL',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@LogToTable = 'Y',
@Execute = 'N';

DECLARE @NewMaxID INT
SELECT @NewMaxID = MAX(ID)
FROM master.dbo.CommandLog;

USE    msdb;

DECLARE @id int;

-- Don't submit commands  in exact command order or parallel processing
-- of indexes/stats on same object will occur and could block
DECLARE command_cursor CURSOR FOR
SELECT ID
FROM (
SELECT ROW_NUMBER() OVER
(PARTITION BY ObjectName
ORDER BY COALESCE(IndexName, StatisticsName), CommandType) AS Ranking,
ID
FROM master.dbo.CommandLog
WHERE ID > @MaxID AND ID <= @NewMaxID)  AS t
ORDER BY Ranking

OPEN command_cursor

FETCH NEXT FROM command_cursor
INTO @id

WHILE @@FETCH_STATUS = 0
BEGIN

-- Begin a conversation and send a request message
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_body XML;

BEGIN TRANSACTION;

BEGIN DIALOG @conversation_handle
FROM SERVICE [OlaHallengrenMaintenanceTaskService]
TO SERVICE N'OlaHallengrenMaintenanceTaskService'
ON CONTRACT [OlaHallengrenMaintenanceTaskContract]
WITH ENCRYPTION = OFF;

SELECT @message_body = N'<CommandLogID>'+CAST(@id AS NVARCHAR)+N'</CommandLogID>';

SEND ON CONVERSATION @conversation_handle
MESSAGE TYPE [OlaHallengrenMaintenanceTaskMessage]
(@message_body);

COMMIT TRANSACTION;

-- Get the next command to run
FETCH NEXT FROM command_cursor
INTO @id
END
CLOSE command_cursor;
DEALLOCATE command_cursor;
GO

I originally started out processing the records in the order generated, but I found that I would hit blocking between statistics updates on the same table until Trace Flag 7471, introduced in SQL Server 2014 SP1+CU6 to change the locking behavior of UPDATE STATISTICS, was enabled for the instance. However, if you don’t have access to the trace flag, randomizing the order in which objects are queued means that only tables with a large number of indexes or statistics will have problems running in parallel, and only at the end of the maintenance tasks, since the threads will be working through different tables based on the queuing order. In testing so far this has resulted in 30-45% reductions in maintenance times, depending on the database schema and the types of maintenance that have to be performed.
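If you are on a build that supports the trace flag, enabling it globally is a one-liner; the sketch below uses the standard DBCC approach (add -T7471 as a startup parameter if you want it to persist across instance restarts):

-- Enable Trace Flag 7471 globally (SQL Server 2014 SP1 CU6 and later) so that
-- concurrent UPDATE STATISTICS operations on the same table use separate metadata locks.
DBCC TRACEON (7471, -1);

-- Verify the trace flag is enabled globally
DBCC TRACESTATUS (7471, -1);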

A warning about Diagnostic Manager’s Query Monitor Feature in v9+

During a recent client engagement to look at the performance problems of a production SQL Server, I ran into something I had hoped never to see in real life. For a long time I’ve been a proponent of Extended Events in SQL Server, and I’ve really looked forward to seeing them used more and more inside of SQL Server for diagnostics data collection, especially by third-party software vendors. I’ve done a lot of performance tests comparing Extended Events to SQL Trace, and Extended Events generally has a lower impact on SQL Server performance. However, I have also found and demonstrated how specific events, like collecting the actual execution plan with Extended Events, can severely impact a server’s performance. Any time Erin or I talk about Extended Events, whether it is at a conference or in our IEPT02 – Performance Tuning class, one of the key things we both repeat to people is that if you see a completely unexplainable drop in server performance and throughput, check for Extended Events sessions and Traces running on the server and look at what events are being collected. For Extended Events we always demo the impact of the query_post_execution_showplan event live to demonstrate why this event should never be used in production environments. Yesterday I was incredibly surprised to not only find the event in an event session on the affected server, but also to find that it came from Idera Diagnostic Manager’s Query Monitor feature.

TL;DR

If you have Diagnostic Manager 9.0 or higher, DO NOT enable the Query Monitor feature for your servers without first unchecking the Collect Query Plans (SQL Server 2008 and up only) box. If you already have the Query Monitor feature enabled, check the properties for all SQL Server 2008+ servers, disable Collect Query Plans (SQL Server 2008 and up only), and then do a Refresh Alerts for the server (right-click the server in the console and click Refresh Server) to force it to modify the event session and remove the query_post_execution_showplan event. I have already contacted Idera regarding the performance impact of this event being used in production environments, and have provided images of CPU usage with the default configuration and after removing execution plan collection, showing reductions of up to 20-25%. I have also been proactively working with clients I know have Diagnostic Manager installed to get their configurations changed to remove the performance impact from any server monitored by Query Monitor in their environments.

How Do I Disable this Event/Feature?

The idea behind the Query Monitor feature is to collect information about poor-performing queries based on configurable values for duration, logical_reads, CPU, or writes. This is not a bad thing and is something that most servers will want to have configured for monitoring. The problem is not with the functionality of the Query Monitor feature itself, but with just one extra event that gets added to the Extended Events session that gets created by default. To enable/disable the Query Monitor feature and change the behavior of it for an individual server, open the server’s Properties dialog from the right-click context menu in the console and click on the Query Monitor page as shown below.

clip_image001

If you simply accept the defaults for the Query Monitor feature, leaving the checkbox highlighted in red above checked, and turn it on for SQL Server 2012 or higher, an Extended Events session called SERVERNAMESQLdmQMEXevents will be created and started on your server that captures the query_post_execution_showplan event. You can examine the entire session for yourself using the Script Session As… CREATE To… New Query Editor Window option in SSMS (or the equivalent DMVs in versions earlier than 2012).
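If you want to check whether any session on a server already includes this event, without scripting each session in the UI, a quick query against the Extended Events catalog views does the job. This is just a sketch, not part of the Diagnostic Manager tooling:

-- List any server-scoped Extended Events sessions that include the costly
-- query_post_execution_showplan event (catalog views available in SQL Server 2008+).
SELECT s.name AS session_name,
       e.package AS package_name,
       e.name AS event_name
FROM sys.server_event_sessions AS s
INNER JOIN sys.server_event_session_events AS e
    ON s.event_session_id = e.event_session_id
WHERE e.name = N'query_post_execution_showplan';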

You can also use the Session Properties to see everything it’s doing (just as you can for all sessions created on your server, by any third-party tools):

clip_image002

Notice the highlighted event in the screenshot of the event session properties in Management Studio. This is bad! Even though the query_post_execution_showplan event has a duration filter of greater than five seconds (5000000 microseconds), the performance impact of the event occurs for every single statement that gets executed inside of SQL Server, because the event has to collect ALL of the default data elements, which include the showplan XML, before the predicate is evaluated. There is NO WAY to eliminate the performance overhead of this event because it collects the plan before filtering is applied. This event is one of a handful of events in Extended Events that have a warning about their performance impact in the event description:

Occurs after a SQL statement is executed. This event returns an XML representation of the actual query plan. Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.

There is no warning about the performance impact anywhere in the Diagnostic Manager UI, and based on the information in Idera’s online forums, Query Monitor gets enabled by default for all new instances unless you remove a checkbox in the Add Server wizard in the UI. For existing servers that were not previously using the Query Monitor feature, it remains disabled after an upgrade to Diagnostic Manager. However, when you go to the Query Details page for that server, a warning message is shown that Query Monitor is disabled, prompting you to click on it to configure the Query Monitor for the server.

clip_image003

Why am I writing this blog post?

The purpose of this post is to warn users of the performance impact that this feature has when the defaults are used to enable the Query Monitor against a server, and at the same time to show users how to fix the issue so that Query Monitor can be used to collect important diagnostic information without severely affecting the performance of the servers. I’ve already raised this issue with Idera separately, and I hope to see this event removed from a future update to their product entirely. The performance cost of this event outweighs the benefit of having it enabled on production servers, which is why I warn anyone I talk to about Extended Events about this specific event’s usage on production servers. I think the usage of Extended Events by third-party vendors instead of SQL Trace for this type of diagnostic data collection is a good thing overall, but I don’t believe a performance monitoring tool should ever use an event that can severely degrade the performance of a production server.

If you use Diagnostic Manager to monitor your SQL Servers, check the configuration of the Query Monitor feature on ALL SQL Server 2012+ servers and disable the execution plan collection, or you WILL have performance degradation from the query_post_execution_showplan event being included in the event session monitoring your server!

Configuring a Multi-Subnet SQL Server Test Environment

Have you ever wanted to play with a multi-subnet SQL Server configuration for Availability Groups or Failover Clustering with SQL Server 2012 or higher? Getting an environment set up to support multiple subnets is not trivial, and if you look online for information about how to configure routing for multiple subnets, you will often find a recommendation to use a separate Linux-based VM as a router/appliance to handle the routing. In this blog post I’m going to show you how to set up a virtual environment with a single Windows Server 2012 R2 server that performs the roles of Active Directory Domain Controller, DNS server, DHCP host, and router for multiple subnets, as the basis for a multi-subnet virtual environment for testing and playing with multi-subnet configurations for SQL Server 2012 and higher. I am building this environment in VMware Workstation 10, but the same configuration can be performed under any hypervisor that supports multiple virtual networks, once the host configuration of the virtual networks has been set up appropriately. VMware Workstation makes this very easy through the Virtual Network Editor that is included as a part of the product.

Initial VM Configuration

For the initial VM setup, I created a VM with 2 vCPUs and 2GB RAM and then edited the settings to add two new network adapters to the VM. The default setup includes one virtual network adapter for NAT, so I am going to add one network adapter on VMNet2 and another on VMNet3, as shown below.

imageimage

The final hardware configuration for the VM should then look like this:

image

Now we can install Windows Server 2012 R2 and set up our basic server configuration. In this case I’ve changed the name of the server to DC and set static IPs on each of the network adapters: 192.168.52.1 for VMNet2 and 192.168.53.1 for VMNet3.

imageimage

To properly identify each adapter, you can edit the VM settings and disconnect them one at a time so that the network connection shows Network cable unplugged. I usually leave the NAT adapter disconnected unless there is a need to have the VM connected to the internet through the host internet connection for some reason. Once this has been done, we can proceed to set up the server as our Active Directory Domain Controller, DNS server, DHCP host for the networks, and router between the two subnets.

Configuring Active Directory Domain Services

To set up Active Directory Domain Services, open the Add Roles and Features Wizard and check Active Directory Domain Services as a role for the server. It will open the popup window shown below to add the additional features required by Active Directory Domain Services. Click the Add Features button, then click Next twice, followed by the Install button.

image

When the installation finishes, don’t click the Close button.  Instead click on the link that says Promote this server to a domain controller to launch the Active Directory Domain Services Configuration Wizard.

image

Click Close

image

Click the Add a new forest radio button, then add the Domain Name and click Next

image

Set the domain functional level, then specify the recovery password and click Next

image

Click Next

image

Click Next

image

Click Next and wait for the prerequisites checks to complete

image

Click Install and it will finalize the configuration of Active Directory Domain Services on the server, making it a new Domain Controller in the environment. When it finishes, Windows will prompt to sign you out as shown below, and then the VM will reboot.

image

Configuring DHCP

Configuring DHCP is not required, but it simplifies creating all of the remaining VMs by automatically configuring their network settings based on the VMNet their virtual network adapters have been assigned to. Open the Add Roles and Features Wizard and add the DHCP Server role to the machine.

image

A popup will open with the additional features required and you can click Add Features.

image

Then click Next, followed by Install, and when the installation finishes you can click the Complete DHCP Configuration link. If you mistakenly click Close, you can still complete the configuration by opening Server Manager and then clicking on Manageability under the DHCP box.

image

This will open up the Manageability Detail View shown below.

image

Click Go To DHCP.

image

Click the More link at the top in the yellow bar to the far right

image

Click the Complete DHCP configuration link.

image

Click Next

image

Click Commit

image

This authorizes the server to be a DHCP host, so now we need to customize the configuration of DHCP. Click the Start button then type in DHCP and open the DHCP console.

image

Right click on the IPv4 and choose New Scope then click Next.

image

image

I name the scopes for the subnet they apply to, in this case 192.168.52.x. Once we complete this scope, a second one will have to be made for the 192.168.53.x subnet by simply repeating these same steps and using that subnet for the gateway and DNS server addresses.

image

Set the starting and ending IP address range for the scope and click Next. Then click Next on the Add Exclusions and Delay and Lease Duration windows.

image

On the Configure DHCP Options page, make sure it says Yes, I want to configure these options now and click Next.

image

Add the local IP address for this server as the Router address, in this case 192.168.52.1 and click Add, then click Next. (For the second scope you will use the other IP address, in this case the 192.168.53.1 address)

image

If the IP Address for the DNS Server is not the local address for the subnet, remove any existing IP addresses, and add the local IP address for the subnet, in this case 192.168.52.1 as the DNS server and then click Next. (Again, for the second scope you will use the other IP address, in this case the 192.168.53.1 address)

image

Remove any WINS server IP Addresses and click Next.

image

Then click Next to activate the scope.  Now repeat these same steps for the 192.168.53.x subnet specifying the 192.168.53.1 address for the router and DNS server and activate that scope.

Configuring Routing Between Subnets

To set up routing between the subnets, open the Add Roles and Features Wizard, and then check Remote Access as a role for the server.

image

Click Next three times and then check the Routing box.  It will open a popup window shown below to add additional features required for Routing.  Click the Add Features button.

image

This will also cause the DirectAccess and VPN (RAS) option to become checked as shown below.

image

Click Next twice and it will show the additional IIS role services window below with the required features checked. Nothing needs to be changed here; just click Next again, followed by Install, to install the Remote Access role on the server.

image

When the installation completes, click Close.  Now click on Start and type in Routing and Remote Access and open the Routing and Remote Access management console.

image

Right click on the server and select the Configure and Enable Routing and Remote Access option.

image

Click Next on the first window then select the Custom Configuration option and click Next.

image

Check the box for LAN routing and click Next.  Then click Finish.

image

Now click the Start service button to start the Routing and Remote Access services.

image

 

EDIT: After initially publishing this blog post, Thomas Stringer (Blog | Twitter) pointed out that static routes don’t have to be configured and that RIP can be used to simplify the routing configuration. So I am updating this post with the RIP configuration as an additional section. Either option will work, but I have to admit that RIP is easier to configure. You only need to configure RIP or the static routes; don’t configure both options.

Routing with Routing Information Protocol (RIP)

  1. Expand the IPv4 folder and right click on General and choose New Routing Protocol.
  2. Select the RIP Version 2 for Internet Protocol option and click OK.
  3. Now right click on RIP under IPv4 and choose New Interface.
  4. Select the 192.168.52.1 interface and click OK.
  5. Then click OK again.
  6. Now right click on RIP under IPv4 and choose New Interface.
  7. Select the 192.168.53.1 interface and click OK.
  8. Then click OK again.
  9. Now you are finished.

Routing with Static Routes

Expand the IPv4 folder and right click on Static Routes and choose the New Static Route option.

image

Select the 192.168.52.1 interface and then add a Destination of 192.168.53.0, subnet mask of 255.255.255.0 and gateway of 192.168.53.1 to create a static route between the subnets.

image

Now create a New Static route again.

image

This time, change to the 192.168.53.1 interface and set the Destination to 192.168.52.0, network mask to 255.255.255.0, and the Gateway to 192.168.52.1 and click OK to add the static route between the subnets.

image

Restart the Routing and Remote Access Services and you’re all set.

Testing

To test the configuration, I created two new VMs named SQL1 and SQL2 with a single network adapter each. SQL1 was configured on VMNet2, SQL2 was configured on VMNet3, and then both servers were joined to the SQLskillsDomain.com domain. The Windows Firewall needs to be configured to allow ICMP echo packets in the Inbound Rules on each of the servers by enabling the File and Printer Sharing (Echo Request – ICMPv4-In) rule that already exists in the Inbound Rules of the firewall. Once this is enabled on each of the servers, we can test with ping between the VMs as shown below:

SQL1 to SQL2

image

SQL2 to SQL1

image

These same steps can be performed in Windows Server 2008 R2 and Windows Server 2012, with only minor differences in the configuration of the roles and features. The Routing and Remote Access configuration that provides the routing between the subnets is nearly identical on all three versions of Windows. Once the server is configured, the environment can be used to create a multi-subnet Windows Server Failover Cluster for a multi-subnet Availability Group, or even a multi-subnet Failover Clustered Instance.

Availability Groups and Columnstore Indexes

This week Paul, Kimberly, Erin, and Tim are at SQLIntersection in Arizona presenting, and one of the best things about presenting at conferences is the questions that get asked about SQL Server features. Kimberly was asked whether columnstore indexes could be used in a database that participates in an Availability Group, and there is conflicting information available online. Since I have multiple 2012 and 2014 Availability Groups set up on my laptop for teaching our Immersion Events, I spent a little time testing and agreed to blog about the findings.

Columnstore indexes can be created in any database participating in an Availability Group without a problem. If you create a nonclustered columnstore index, it can also be used on a readable secondary for queries. However, a clustered columnstore index can only be read on the primary database. If you try to query a clustered columnstore index on a readable secondary, you will get error 35371:

Msg 35371, Level 16, State 1, Line 4
SNAPSHOT isolation level is not supported on a table which has a clustered columnstore index.

So as long as the columnstore index is nonclustered, it can still be read on the secondary, but keep in mind that the nonclustered columnstore index is not updateable through DML operations; you have to either leverage partition switching or rebuild the index to modify the data in the table. A good read on the differences can be found in Books Online.
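For reference, here is a minimal sketch of the nonclustered variety; the table and column names are hypothetical and only illustrate the syntax:

-- Hypothetical fact table; a nonclustered columnstore index like this can be
-- read on a readable secondary, while a clustered columnstore index could not
-- (prior to SQL Server 2016).
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, OrderQuantity, SalesAmount);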

Update: SQL Server 2016 will support clustered columnstore indexes on readable secondaries as well.

New Performance Tuning using Wait Statistics Whitepaper

Last week a new whitepaper titled SQL Server Performance Tuning Using Wait Statistics: A Beginner's Guide was made available through SimpleTalk. This paper is a combination of information from Chapter 1 of my Troubleshooting SQL Server: A Guide for the Accidental DBA book (Amazon|eBook download), also published by SimpleTalk, and a series of articles created by Erin on baseline data collection techniques.

Understanding the time a session spends waiting inside of SQL Server is an incredibly important part of performance tuning and diagnosing problems. As consultants we use the information contained in the wait statistics during health checks of servers for bottleneck identification, and as a part of root cause analysis while troubleshooting problems. This whitepaper will provide an introduction to the world of performance tuning using wait statistics in SQL Server and will explain the common wait types and what they do and do not mean in the context of performance tuning and troubleshooting. You will also find code in the paper for collecting and aggregating the information available in the SQL Server DMVs about the waits that have occurred in the past and what sessions are currently waiting for, to simplify getting started with understanding the activity on your SQL Servers.
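To give a flavor of the kind of script the paper covers (the whitepaper's versions are more complete), here is a minimal sketch that aggregates the historical waits from sys.dm_os_wait_stats, filtering out a few of the common benign wait types:

-- Top waits by total wait time since the last restart (or since the wait stats
-- were cleared), excluding a handful of benign background wait types.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms / 1000.0 AS wait_time_sec,
    wait_time_ms * 100.0 / SUM(wait_time_ms) OVER () AS pct_of_filtered_waits
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'CHECKPOINT_QUEUE', N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
                        N'XE_TIMER_EVENT', N'XE_DISPATCHER_WAIT', N'BROKER_TASK_STOP',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;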

Table of Contents

  1. Introduction
  2. The SQLOS scheduler and thread scheduling
  3. Using wait statistics for performance tuning
  4. Investigating active-but-blocked requests using sys.dm_os_waiting_tasks
  5. Analyzing historical wait statistics using sys.dm_os_wait_stats
  6. Common wait types
  7. Wait Statistics baselines
  8. Summary
  9. Further reading
  10. About the authors
  11. About the technical editor

A big thanks goes to Tony Davis at Redgate for editing all of the content and merging the two sets of information into a single concise introduction to the world of Performance Tuning SQL Server using Wait Statistics. The whitepaper can be downloaded from the SimpleTalk website, or from the help section of our site through the link below.

http://www.sqlskills.com/help/sql-server-performance-tuning-using-wait-statistics/

Why I hate the ring_buffer target in Extended Events

I’ve had so many questions about the same problem with the ring_buffer target in Extended Events lately that I figured I would write a blog post explaining everything I teach about the ring_buffer target and the problems associated with it. Since the release of SQL Server 2012, and the new UI for Extended Events, I have not used the ring_buffer target once; in fact, as the title of this post says, I really hate the ring_buffer target. In this post I’ll go over the reasons I hate it and hopefully convince you why you should use the file_target instead.

Missing Events

This is by far the most common problem I have to explain about the ring_buffer target by email.  Generally the question is phrased along the lines of:

“Hey Jonathan,

I got the code below from one of your articles on SQL Server central and it is not working. The problem I have is that when I run the code, it doesn’t show any deadlock graphs even though I know one just occurred in the application. It seems like I only see older deadlocks in the system_health session, but never the most recent one.  I’ve turned on Trace 1222 and get the information that way, so why doesn’t this work.”

The reality of the situation is that the events are actually there; you just can’t see them because of a limitation of the sys.dm_xe_session_targets DMV. The target_data column of this DMV can only output roughly 4MB of XML data. The 4MB formatted XML limitation of the DMV was explained by Bob Ward in the CSS SQL Server Engineers blog post You may not see the data you expect in Extended Event Ring Buffer Targets… back in 2009. To demonstrate the effect of this limitation, let’s look at the number of events contained in the ring_buffer target for the system_health event session on a SQL Server 2012 SP1+CU7 server that I have permission to share the information from, using the following query.

SELECT 
    ring_buffer_event_count, 
    event_node_count, 
    ring_buffer_event_count - event_node_count AS events_not_in_xml
FROM
(   SELECT target_data.value('(RingBufferTarget/@eventCount)[1]', 'int') AS ring_buffer_event_count,
        target_data.value('count(RingBufferTarget/event)', 'int') AS event_node_count
    FROM
    (   SELECT CAST(target_data AS XML) AS target_data
        FROM sys.dm_xe_sessions AS s
        INNER JOIN sys.dm_xe_session_targets AS st
            ON s.address = st.event_session_address
        WHERE s.name = N'system_health'
            AND st.target_name = N'ring_buffer'
    ) AS n
) AS t;

image

The ring_buffer_event_count comes from the eventCount attribute maintained in the root RingBufferTarget element of the XML document returned, and the event_node_count is just a count of the RingBufferTarget/event nodes contained in the XML document returned from the DMV. Here you can see that there are a total of 5000 events contained in the ring_buffer target, based on the new 2012 MAX_EVENTS_LIMIT option, which is set at 5000 for the system_health session. Out of those, only 3574 are actually being output in the XML document from the DMV. That leaves us with 1426 events that are not available, but still exist in memory. The XML document outputs the events oldest first, so the events that we can’t see are actually the most recently collected events, which are the ones we’d probably most want access to.

We can look at the memory usage for the target using the query below to compare the binary data size maintained within the target, versus the size of the formatted XML being output by the DMV.

SELECT 
    target_data.value('(RingBufferTarget/@memoryUsed)[1]', 'int') AS buffer_memory_used_bytes,
    ROUND(target_data.value('(RingBufferTarget/@memoryUsed)[1]', 'int')/1024., 1) AS buffer_memory_used_kb,
    ROUND(target_data.value('(RingBufferTarget/@memoryUsed)[1]', 'int')/1024/1024., 1) AS buffer_memory_used_MB,
    DATALENGTH(target_data) AS xml_length_bytes,
    ROUND(DATALENGTH(target_data)/1024., 1) AS xml_length_kb,
    ROUND(DATALENGTH(target_data)/1024./1024,1) AS xml_length_MB
FROM (
SELECT CAST(target_data AS XML) AS target_data  
FROM sys.dm_xe_sessions as s
INNER JOIN sys.dm_xe_session_targets AS st 
    ON s.address = st.event_session_address
WHERE s.name = N'system_health'
 AND st.target_name = N'ring_buffer') as tab(target_data)

image

Here we can see that the memory usage is roughly 1.7MB of binary data, but once serialized to formatted XML, the length of the document is roughly 4.7MB, which is slightly larger than the roughly 4MB the DMV is able to output. This is the problem with the nature of the data that can be generated by Extended Events: it can be very compact in binary format, but serializing it to formatted XML adds a lot of overhead for the events that have been collected. The system_health event session is specifically prone to having the maximum number of events, 5000, collected by the ring_buffer, while only a smaller subset of those events can actually be output by the DMV, due to the sizes of the events that get collected by that event session. The events most likely to have a fairly large size are the sp_server_diagnostics_component_output and xml_deadlock_report events, since the size of the XML returned by both of those events depends on the specific conditions under which they fired. I’ve seen sp_server_diagnostics_component_output events that are over 512KB in size on busy servers when problems occur, which would significantly limit the data that could be returned by the DMV for the ring_buffer target when one of those events is contained in the output XML.

No UI Support

This for me is probably the biggest reason that I don’t deal with the ring_buffer target anymore in my day to day work.  The UI for Extended Events doesn’t support breaking down the information that is contained in the ring_buffer target at all.  The only thing the UI does is show the XML that was output by the target_data column of the sys.dm_xe_session_targets DMV.

image

This means that to consume the data you have to either open the XML and scan down the events, or write XQuery to parse the XML into table form, which requires that you know the events, columns, and actions that were used in the event session’s definition to really access the data. If I am doing short-term data collection and don’t want it persisted to a file in the file system on SQL Server 2012, I just opt for the Live View to stream the data back into the grid, where I don’t have to deal with the XML and can rapidly find the information I am interested in. For anything long term, or even for looking at information from the system_health event session, I use the file_target, which the UI can read and process the events from, eliminating the need to do any XQuery manually.
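For completeness, reading a file target back in T-SQL is also straightforward; the file name pattern below assumes the default system_health file target location and is only an example:

-- Read events from a file target without the UI (SQL Server 2012+ signature);
-- each row's event_data column contains the XML for a single event.
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL);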

Watch the Memory Usage

This is a new one to me, but it is something to be incredibly careful about with the way the ring_buffer target gets configured on a production server. Two weeks ago, Andy Galbraith (Blog|Twitter) ran into an issue on a server where all connections were receiving 701 Insufficient System Memory errors. During his analysis Andy realized that the MEMORYCLERK_XE memory clerk in SQL Server was consuming close to 10GB of memory on a server with 16GB of total RAM and ‘max server memory’ configured to 11000MB. The problem was that an event session had been created with a MAX_EVENTS_LIMIT of 1,000,000 events, but with no MAX_MEMORY configuration on the target. Since the memory usage of the target is based on the events that have been captured, and no maximum was set, it could essentially use an unlimited amount of memory, leading to problems on the server if memory is limited, like the example server Andy was troubleshooting.
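If you do have a legitimate reason to use the ring_buffer target, cap both the event count and the memory it can consume when you define the target. The session and event below are purely illustrative; the point is the target options:

-- Illustrative session; MAX_MEMORY for the ring_buffer target is specified in KB,
-- so this caps the target at 1000 events and roughly 4MB of memory.
CREATE EVENT SESSION [TrackErrorsExample] ON SERVER
ADD EVENT sqlserver.error_reported
    (WHERE severity >= 16)
ADD TARGET package0.ring_buffer
    (SET MAX_EVENTS_LIMIT = 1000, MAX_MEMORY = 4096);
GO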

New Article: Asynchronous Processing with Service Broker

My most recent article on SQLPerformance.com was published today, and it begins a series on Service Broker use cases. While Service Broker doesn’t have the sexy UI support that other features of SQL Server have, it is an incredibly powerful tool.

An Introduction to Asynchronous Processing with Service Broker

Over the series I’ll be explaining Service Broker concepts and how you can leverage Service Broker to improve the end user experience where other performance tuning techniques might not.