Downgrading SQL Server Editions

At some point in your career working with SQL Server, you will run into a situation where the wrong edition of SQL Server has been installed on a server and the edition needs to change for licensing reasons.  Whether it is Enterprise Edition where Standard Edition should have been installed, Enterprise Edition where Developer Edition should have been used, or my favorite, Evaluation Edition where the 180-day trial has expired and Enterprise Edition isn't going to be used, the only route available for downgrading the edition is to uninstall and reinstall SQL Server entirely.  SQL Server Setup makes upgrading editions a piece of cake with SKUUPGRADE as a command line option for going from Standard/Developer/Evaluation to Enterprise, but anything else requires a full uninstall and reinstall to change the SKU/Edition, followed by restoring all of the system and user databases to the new instance, which typically means a lot of work.  I hate having to restore system databases and avoid doing it whenever possible, so here is how I handle this process and minimize the work required:

No matter what, you are going to have to uninstall and reinstall the SQL Server instance to downgrade the SKU.  However, you can save yourself some time, and the headache of trying to restore the system databases, if you are careful about what you do.  I have done plenty of SKU downgrades in the past, and the easiest way to do it (I am not saying this is the Microsoft supported way, but it works if done correctly) is to:

  1. Take a good backup of all of your databases (system and user).
  2. Run SELECT @@VERSION and note the specific build number of SQL Server that you are currently on (the query after this list shows another way to capture this).
  3. Shut down your existing instance of SQL Server.
  4. Copy the master, model, and msdb database files (both mdf and ldf), don't move them, copy them, from the current location to a new folder that you mark as read-only.
  5. Uninstall SQL Server from the system.
  6. Reboot the server.
  7. Install SQL Server Standard Edition.
  8. Apply the necessary Service Pack and/or Cumulative Updates to bring the instance up to your previous build number.
  9. Shut down SQL Server.
  10. Copy the master, model, and msdb database files (both mdf and ldf) from the folder you saved them in to the correct location for the new install, remove the read-only flag from the files, and change the file ACLs so that the SQL Server service account has Full Control over the files.
  11. Start up SQL Server. If you did it correctly, it will start up exactly where you were before you made any changes, with all of your user databases online, and you should be ready to let applications connect and resume operations.
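
For step 2, a simple way to capture both the edition and the exact build number in one query is SERVERPROPERTY; running the same query again after step 8 confirms the new instance matches the old build:

SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS BuildNumber,
       SERVERPROPERTY('ProductLevel') AS ProductLevel;
GO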

If you screw something up in the above, you still have your backups, so you can run Setup to rebuild the system databases and then follow the Microsoft supported path of restoring the system databases, and then the user databases, to bring the instance online.  Essentially the file copy is no different than what would occur through attach/detach; you are just doing it with the system databases, which is not explicitly supported, but it does work.  The key is to have your backups from before you do anything so that the supported route is still available if you encounter an issue.  The only issue I have ever had with this set of steps is that I once didn't set the file ACLs correctly, and the database engine threw Access Denied errors and failed to start until I fixed the ACLs.  This approach can save you many hours of frustration and downtime trying to restore everything, since the database files are already there and it is just a few small copy operations to put them where you need them to be.

Parallel Maintenance Tasks with Service Broker

I've been doing a bit of work with Service Broker while building a series of new Pluralsight courses, and one of the places I recently found a good use for Service Broker, on a client system with an 8TB VLDB, was implementing parallel maintenance tasks using multiple threads.  The idea for this implementation came out of the fact that their maintenance window keeps getting shorter as their workload increases, while their database continues to get larger and larger.  One of the things I noticed while doing a monthly health check of their server was that during the maintenance window their hardware is nowhere close to being fully utilized, and there are plenty of resources available due to the serial execution of their maintenance tasks.  This specific client uses Ola Hallengren's maintenance procedures, which we recommend to all of our clients, so I started to think about ways I could make things run in parallel to get better utilization of the hardware during the maintenance window and hopefully reduce the time required to perform the day-to-day maintenance of their database.

I wish I could say the first thing I thought of was Service Broker, but it wasn't.  I toyed around with the idea of a custom C# app to run Ola's scripts, log the output commands to the CommandLog table in master, and then use multiple threads to run each of the individual tasks; something similar using PowerShell instead of a compiled app; and even multiple Agent jobs to break things up using different parameters for IndexOptimize.  It turns out that all of these over-engineer a very basic problem that Service Broker is highly suited to solving, and it only took a little bit of trial and error to create a very robust solution that runs index and statistics maintenance tasks in parallel in SQL Server.

Setting up Service Broker Components

I chose to implement the Service Broker components for this configuration in the msdb database for a few reasons.  First, it is already enabled for Service Broker, so I don't have to worry about that; second, it already has TRUSTWORTHY ON, which is required by Agent; and third, it is owned by sa, so an activation stored procedure marked with EXECUTE AS OWNER runs as a full sysadmin on the server, allowing it to run any maintenance command generated by Ola's scripts.  We need to configure some basic objects for Service Broker in the database: a message type, a contract, a queue, and a service.
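
You can verify those prerequisites up front with a quick check against sys.databases, a sanity check I like to run before creating anything:

USE msdb;
GO
-- Confirm broker is enabled, TRUSTWORTHY is ON, and the database is owned by sa
SELECT name,
    is_broker_enabled,
    is_trustworthy_on,
    SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE name = N'msdb';
GO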

USE msdb
GO
-- Create the message types
CREATE MESSAGE TYPE
[OlaHallengrenMaintenanceTaskMessage]
VALIDATION = WELL_FORMED_XML;
GO

-- Create the contract
CREATE CONTRACT [OlaHallengrenMaintenanceTaskContract]
([OlaHallengrenMaintenanceTaskMessage]
SENT BY INITIATOR);
GO

-- Create the target queue and service
CREATE QUEUE OlaHallengrenMaintenanceTaskQueue;
GO

CREATE SERVICE
[OlaHallengrenMaintenanceTaskService]
ON QUEUE OlaHallengrenMaintenanceTaskQueue
([OlaHallengrenMaintenanceTaskContract]);
GO
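
If you want to confirm everything was created correctly before moving on, each of the objects is visible in the Service Broker catalog views:

SELECT name FROM sys.service_message_types WHERE name = N'OlaHallengrenMaintenanceTaskMessage';
SELECT name FROM sys.service_contracts WHERE name = N'OlaHallengrenMaintenanceTaskContract';
SELECT name FROM sys.service_queues WHERE name = N'OlaHallengrenMaintenanceTaskQueue';
SELECT name FROM sys.services WHERE name = N'OlaHallengrenMaintenanceTaskService';
GO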

Building an Activation Procedure to Automate Processing

With that set up, we now need to build an activation stored procedure to process the messages from the queue and perform the operations generated by Ola's scripts.  At first I thought this would be pretty straightforward, but it actually required a fair amount of additional logic to prevent poison messages and queue deactivation from occurring when multiple online index rebuilds were attempted against the same table.  When this happens the engine raises Error 1912:

Could not proceed with index DDL operation on %S_MSG ‘%.*ls’ because it conflicts with another concurrent operation that is already in progress on the object. The concurrent operation could be an online index operation on the same object or another concurrent operation that moves index pages like DBCC SHRINKFILE.

which causes the XACT_STATE() of the operation to become -1, which only allows the transaction processing the command to be rolled back, leading to a poison message within Service Broker.  To mitigate that, I had to add a check against sys.dm_tran_locks for any ALTER_INDEX command to identify whether a conflicting lock is currently being held; if it is, we simply requeue the message and increment a retry count held within the message, using a WAITFOR to incrementally back off from attempting the command each time it encounters a conflicting lock.  Since Ola's scripts log the command type, database name, object name, and command to be executed to the CommandLog table in the master database, all we need to queue in our message is the ID of the command to be processed by Service Broker, and we can get the rest of the pertinent information from the CommandLog table directly.  For example, a message for CommandLog ID 42 on its third attempt is simply <CommandLogID retry="2">42</CommandLogID>.  As part of each task's execution, our activation stored procedure will also update the StartTime and EndTime of the operation for historical trending of execution durations.

IF OBJECT_ID('OlaHallengrenMaintenanceTaskQueue_ActivationProcedure') IS NOT NULL
BEGIN
	DROP PROCEDURE OlaHallengrenMaintenanceTaskQueue_ActivationProcedure;
END
GO

CREATE PROCEDURE OlaHallengrenMaintenanceTaskQueue_ActivationProcedure
AS

  DECLARE @conversation_handle UNIQUEIDENTIFIER;
  DECLARE @message_body XML;
  DECLARE @message_type_name SYSNAME;
  DECLARE @Command NVARCHAR(MAX);
  DECLARE @ID INT;
  DECLARE @DBName SYSNAME;
  DECLARE @ObjectName SYSNAME;
  DECLARE @CommandType NVARCHAR(60);
  DECLARE @Retry INT;
  DECLARE @FQN NVARCHAR(400);

  WHILE (1=1)
  BEGIN

    BEGIN TRANSACTION;

    WAITFOR
    ( RECEIVE TOP(1)
        @conversation_handle = conversation_handle,
        @message_body = message_body,
        @message_type_name = message_type_name
      FROM OlaHallengrenMaintenanceTaskQueue
    ), TIMEOUT 5000;

    IF (@@ROWCOUNT = 0)
    BEGIN
      ROLLBACK TRANSACTION;
      BREAK;
    END

    IF @message_type_name = N'OlaHallengrenMaintenanceTaskMessage'
    BEGIN
		SELECT @ID = @message_body.value('(CommandLogID)[1]', 'int'),
				@Retry = ISNULL(@message_body.value('(CommandLogID/@retry)[1]', 'int'), 0);

	   
		SELECT @Command = Command, 
				@ObjectName = ObjectName, 
				@DBName = DatabaseName,
				@FQN = QUOTENAME(DatabaseName) + '.' + QUOTENAME(SchemaName) + '.' + QUOTENAME(ObjectName),
				@CommandType = CommandType
		FROM master.dbo.CommandLog
		WHERE ID = @ID;

		-- For index rebuilds, check whether a conflicting operation is already running and requeue the request after waiting.
		IF @CommandType = 'ALTER_INDEX'
		BEGIN

			-- Check if we have an incompatible lock that would lead to a failed execution.
			-- (request_mode shows 'Sch-M' for a rebuild and 'Sch-S' for a concurrent
			-- maintenance task doing UpdateStats; sys.dm_tran_locks exposes the mode
			-- names, not the LCK_M_* wait type names)
			IF EXISTS (SELECT 1
						FROM sys.dm_tran_locks AS tl
						WHERE (tl.request_mode = 'Sch-M' OR 
							(tl.request_mode = 'Sch-S' AND EXISTS (SELECT 1 FROM sys.dm_exec_sessions AS s WHERE s.is_user_process = 1 AND tl.request_session_id = s.session_id)))
						  AND tl.resource_associated_entity_id = OBJECT_ID(@FQN)
						  AND tl.resource_database_id = DB_ID(@DBName) )
			BEGIN
				-- Wait for 5 seconds times the number of retries to do an incremental backoff
				-- This will eventually cause all queue readers to die off and serial execution of tasks
				DECLARE @Delay VARCHAR(8) = CONVERT(VARCHAR(8), DATEADD(SECOND, @Retry*5, CAST('00:00:00' AS TIME)), 108);
				WAITFOR DELAY @Delay;

				-- Increment retry count in the message
				SELECT @message_body = N'<CommandLogID retry="'+CAST(@Retry+1 AS NVARCHAR)+'">'+CAST(@id AS NVARCHAR)+N'</CommandLogID>';


				-- Send the message back to the queue for later processing
				;SEND ON CONVERSATION @conversation_handle
						MESSAGE TYPE [OlaHallengrenMaintenanceTaskMessage]
						(@message_body);

				GOTO SkipThisRun
			END
		END

		UPDATE master.dbo.CommandLog
		SET StartTime = GETDATE()
		WHERE ID = @ID;

		BEGIN TRY 
			EXECUTE(@Command);

			UPDATE master.dbo.CommandLog
			SET EndTime = GETDATE()
			WHERE ID = @ID;

		END TRY
		BEGIN CATCH

			UPDATE master.dbo.CommandLog
			SET EndTime = GETDATE(),
				ErrorMessage = ERROR_MESSAGE(),
				ErrorNumber = ERROR_NUMBER()
			WHERE ID = @ID;

		END CATCH

       END CONVERSATION @conversation_handle;
    END

	-- If end dialog message, end the dialog
    ELSE IF @message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    BEGIN
       END CONVERSATION @conversation_handle;
    END

	-- If error message, log and end conversation
    ELSE IF @message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
    BEGIN
		DECLARE @error INT;
		DECLARE @description NVARCHAR(4000);

		-- Pull the error code and description from the doc
		WITH XMLNAMESPACES ('http://schemas.microsoft.com/SQL/ServiceBroker/Error' AS ssb)
		SELECT
			@error = @message_body.value('(//ssb:Error/ssb:Code)[1]', 'INT'),
			@description = @message_body.value('(//ssb:Error/ssb:Description)[1]', 'NVARCHAR(4000)');
		
		RAISERROR(N'Received error Code:%i Description:"%s"', 16, 1, @error, @description) WITH LOG;

		-- Now that we handled the error logging cleanup
		END CONVERSATION @conversation_handle;
	END
 
 SkipThisRun:   
    COMMIT TRANSACTION;

  END
GO

Enabling Parallel Queue Activation

The last thing we need to do is bind our stored procedure to the queue for automatic activation and set MAX_QUEUE_READERS to the number of concurrent parallel tasks we want executing on the system at any given point in time.  I spent some time testing different values for MAX_QUEUE_READERS and found that 5 was generally sufficient to provide significant reductions in runtime without creating contention between the tasks being executed.

-- Alter the target queue to specify internal activation
ALTER QUEUE OlaHallengrenMaintenanceTaskQueue
WITH ACTIVATION
( STATUS = ON,
PROCEDURE_NAME = OlaHallengrenMaintenanceTaskQueue_ActivationProcedure,
MAX_QUEUE_READERS = 5,
EXECUTE AS OWNER
);
GO
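
Once messages start arriving, each activated task shows up in sys.dm_broker_activated_tasks, so a quick join against sys.service_queues shows how many parallel readers are currently running:

USE msdb;
GO
-- Show the activation procedure sessions currently processing the queue
SELECT at.spid, at.procedure_name, at.execute_as
FROM sys.dm_broker_activated_tasks AS at
JOIN sys.service_queues AS q
    ON at.queue_id = q.object_id
WHERE at.database_id = DB_ID()
  AND q.name = N'OlaHallengrenMaintenanceTaskQueue';
GO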

Now all we have to do is set up a job to feed the tasks into the queue as messages, and Service Broker will automatically begin processing the tasks in parallel, up to the configured MAX_QUEUE_READERS value.  To do this, I use Ola's scripts to log the commands to the CommandLog table without executing them, and then, using a cursor and a ROW_NUMBER() definition to split the work up across different tables instead of queuing the commands in the order generated, send each ID as a message to Service Broker to process.

DECLARE @MaxID INT;
SELECT @MaxID = MAX(ID)
FROM master.dbo.CommandLog;

SELECT @MaxID = ISNULL(@MaxID, 0);

-- Load new tasks into the Command Log
EXECUTE master.dbo.IndexOptimize
@Databases = 'AdventureWorks2014',
@FragmentationLow = 'INDEX_REBUILD_OFFLINE',
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@UpdateStatistics = 'ALL',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@LogToTable = 'Y',
@Execute = 'N';

DECLARE @NewMaxID INT;
SELECT @NewMaxID = MAX(ID)
FROM master.dbo.CommandLog;

USE msdb;

DECLARE @id INT;

-- Don't submit commands in exact command order or parallel processing
-- of indexes/stats on the same object will occur and could block
DECLARE command_cursor CURSOR FOR
SELECT ID
FROM (SELECT ROW_NUMBER() OVER
                (PARTITION BY ObjectName
                 ORDER BY COALESCE(IndexName, StatisticsName), CommandType) AS Ranking,
             ID
      FROM master.dbo.CommandLog
      WHERE ID > @MaxID AND ID <= @NewMaxID) AS t
ORDER BY Ranking;
OPEN command_cursor

FETCH NEXT FROM command_cursor
INTO @id

WHILE @@FETCH_STATUS = 0
BEGIN

-- Begin a conversation and send a request message
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_body XML;

BEGIN TRANSACTION;

BEGIN DIALOG @conversation_handle
FROM SERVICE [OlaHallengrenMaintenanceTaskService]
TO SERVICE N'OlaHallengrenMaintenanceTaskService'
ON CONTRACT [OlaHallengrenMaintenanceTaskContract]
WITH ENCRYPTION = OFF;

SELECT @message_body = N'<CommandLogID>'+CAST(@id AS NVARCHAR)+N'</CommandLogID>';

SEND ON CONVERSATION @conversation_handle
MESSAGE TYPE [OlaHallengrenMaintenanceTaskMessage]
(@message_body);

COMMIT TRANSACTION;

-- Get the next command to run
FETCH NEXT FROM command_cursor
INTO @id
END
CLOSE command_cursor;
DEALLOCATE command_cursor;
GO
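
While the maintenance runs, you can track how much work remains simply by counting the messages still sitting in the queue (Service Broker queues can be read directly with SELECT):

USE msdb;
GO
-- Messages not yet picked up by an activated task
SELECT COUNT(*) AS pending_messages
FROM OlaHallengrenMaintenanceTaskQueue;
GO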

I originally started out processing the records in the order generated, but I found that I would hit blocking with statistics updates on the same table until Trace Flag 7471, introduced in SQL Server 2014 SP1 CU6 to change the locking behavior of UPDATE STATISTICS, was enabled for the instance.  However, if you don't have access to the trace flag, randomizing the order of objects as they are queued means that only the tables with a large number of indexes or statistics will have problems running in parallel, and only at the end of the maintenance tasks, since the threads will be working through different tables based on the order of queuing.  So far in testing, this has resulted in 30-45% reductions in maintenance times, depending on the database schema and the types of maintenance that have to be performed.
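
If you are on a build that supports it, enabling the trace flag globally is a one-liner (add -T7471 as a startup parameter if you want it to persist across restarts):

-- Requires SQL Server 2014 SP1 CU6 or later
DBCC TRACEON (7471, -1);
GO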

A warning about Diagnostic Manager’s Query Monitor Feature in v9+

During a recent client engagement to look at the performance problems of a production SQL Server, I ran into something I hoped never to see in real life. For a long time I've been a proponent of Extended Events in SQL Server, and I've really looked forward to seeing them used more and more inside of SQL Server for diagnostic data collection, especially by third party software vendors. I've done a lot of performance tests comparing Extended Events to SQL Trace, and Extended Events generally has a lower impact on SQL Server performance. However, I have also found and demonstrated how specific events, like collecting the actual execution plan with Extended Events, can severely impact a server's performance. Any time Erin or I talk about Extended Events, whether it is at a conference or in our IEPT02 – Performance Tuning class, one of the key things we both repeat to people is that if you see a completely unexplainable drop in server performance and throughput, check for Extended Events sessions and Traces running on the server and look at what events are being collected. For Extended Events we always demo the impact of the query_post_execution_showplan event live to demonstrate why this event should never be used in production environments. Yesterday I was incredibly surprised not only to find the event in an event session on the affected server, but also to find that it came from Idera Diagnostic Manager's Query Monitor feature.

TL;DR

If you have Diagnostic Manager 9.0 or higher, DO NOT enable the Query Monitor feature for your servers without first disabling (unchecking) the Collect Query Plans (SQL Server 2008 and up only) box. If you already have the Query Monitor feature enabled, check the properties for all SQL Server 2008+ servers, disable Collect Query Plans (SQL Server 2008 and up only), and then do a Refresh Alerts for the server (right-click the server in the console and click Refresh Server) to force it to modify the event session and remove the query_post_execution_showplan event.  I have already contacted Idera regarding the performance impact of this event being used in production environments, and have provided images of CPU usage with the default configuration and after removing execution plan collection, showing reductions of up to 20-25%. I have also been proactively working with clients I know have Diagnostic Manager installed to get their configurations changed to remove the performance impact from any server monitored by Query Monitor in their environments.

How Do I Disable this Event/Feature?

The idea behind the Query Monitor feature is to collect information about poorly performing queries based on configurable thresholds for duration, logical reads, CPU, or writes. This is not a bad thing, and it is something that most servers will want to have configured for monitoring. The problem is not with the functionality of the Query Monitor feature itself, but with just one extra event that gets added to the Extended Events session that is created by default. To enable/disable the Query Monitor feature and change its behavior for an individual server, open the server's Properties dialog from the right-click context menu in the console and click on the Query Monitor page as shown below.

[Screenshot: Query Monitor page of the server Properties dialog]

If you simply accept the defaults for the Query Monitor feature, leaving the checkbox highlighted in red above checked, and turn it on for SQL Server 2012 or higher, an Extended Events session called SERVERNAMESQLdmQMEXevents will be created and started on your server that captures the query_post_execution_showplan event. You can examine the entire session for yourself using the Script Session As… CREATE To… New Query Editor Window option in SSMS (or the equivalent DMVs in versions earlier than 2012).
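
If you want to check any server quickly, a pair of Extended Events DMVs will show whether any currently running session is collecting this event, whatever tool created it:

-- Find running event sessions that collect actual execution plans
SELECT s.name AS session_name, se.event_name
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_events AS se
    ON s.address = se.event_session_address
WHERE se.event_name = N'query_post_execution_showplan';
GO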

You can also use the Session Properties to see everything it’s doing (just as you can for all sessions created on your server, by any third-party tools):

[Screenshot: event session properties in Management Studio with the query_post_execution_showplan event highlighted]

Notice the highlighted event in the screenshot of the event session properties in Management Studio. This is bad! Even though the query_post_execution_showplan event has a duration filter of greater than five seconds (5000000 microseconds), the performance impact of the event occurs for every single statement that gets executed inside of SQL Server, because the event has to collect ALL of its default data elements, which include the showplan XML, before the predicate is evaluated. There is NO WAY to eliminate the performance overhead of this event, because it collects the plan before the filtering is applied. This is one of a handful of events in Extended Events that carry a warning about their performance impact in the event description:

Occurs after a SQL statement is executed. This event returns an XML representation of the actual query plan. Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.

There is no warning about the performance impact anywhere in the Diagnostic Manager UI, and based on the information in Idera's online forums, Query Monitor gets enabled by default for all new instances unless you uncheck a box in the Add Server wizard in the UI. For existing servers that were not previously using the Query Monitor feature, it remains disabled after an upgrade to Diagnostic Manager. However, when you go to the Query Details page for such a server, a warning message is shown that Query Monitor is disabled, prompting you to click on it to configure Query Monitor for the server.

[Screenshot: Query Details page warning that Query Monitor is disabled]

Why am I writing this blog post?

The purpose of this post is to warn users about the performance impact this feature has when the defaults are used to enable Query Monitor against a server, and at the same time to show users how to fix the issue so that Query Monitor can be used to collect important diagnostic information without severely affecting the performance of the servers being monitored. I've already raised this issue with Idera separately, and I hope to see this event removed from a future update to their product entirely. The performance effects of this event outweigh the benefits of having it enabled on production servers, which is why I warn anyone I talk to about Extended Events about this specific event's usage on production servers. I think the usage of Extended Events by third party vendors instead of SQL Trace for this type of diagnostic data collection is a good thing overall, but I don't believe a performance monitoring tool should ever use an event that can severely degrade the performance of a production server.

If you use Diagnostic Manager to monitor your SQL Servers, check the configuration of the Query Monitor feature on ALL SQL Server 2012+ servers and disable the execution plan collection, or you WILL have performance degradation from the query_post_execution_showplan event being included in the event session monitoring your server!