ALTER DATABASE failed. The default collation of database ‘%.*ls’ cannot be set to %.*ls.

Last week I was working with a client on upgrading one of their systems from SQL Server 2000 to SQL Server 2012, while also performing a collation change of the database and all of the table columns from Latin1_General_BIN to SQL_Latin1_General_CP1_CI_AS.  What started out as a straightforward upgrade actually became quite a challenge.  After upgrading to SQL Server 2008 R2 (SQL Server doesn’t support direct upgrades from SQL Server 2000 to SQL Server 2012), I found metadata corruption.  We’ve seen and dealt with this before, so it was back to SQL Server 2000 to fix the orphaned entries, and then another upgrade attempt to SQL Server 2008 R2.

At this point I had a corruption-free database and started running the scripts I had generated to migrate from Latin1_General_BIN to SQL_Latin1_General_CP1_CI_AS.  When I got to the point of changing the database default collation, I was dismayed to get the following error back from SQL Server:

Msg 1505, Level 16, State 1, Line 1
The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.sysschobjs' and the index name 'nc1'. The duplicate key value is (0, 1, person).
Msg 5072, Level 16, State 1, Line 1
ALTER DATABASE failed. The default collation of database 'TestCollationChange' cannot be set to SQL_Latin1_General_CP1_CI_AS.

Thinking about the previous metadata corruption, I was certain that there was still something wrong with the database, but I couldn’t find anything with CHECKDB or CHECKCATALOG.  It turns out there was nothing wrong with the database; there was something wrong with my expectations and assumptions.  To demonstrate this, consider the following example:

-- Note: the FILENAME paths assume the C:\SQLData directory exists; adjust as needed.
CREATE DATABASE [TestCollationChange]
ON PRIMARY
( NAME = N'TestCollationChange', FILENAME = N'C:\SQLData\TestCollationChange.mdf')
LOG ON
( NAME = N'TestCollationChange_log', FILENAME = N'C:\SQLData\TestCollationChange_log.ldf')
COLLATE Latin1_General_BIN;
GO
USE [TestCollationChange];
GO
CREATE TABLE dbo.Person
(   RowID int NOT NULL IDENTITY (1, 1),
    FirstName varchar(30) NOT NULL,
    LastName varchar(30) NOT NULL);
GO
ALTER TABLE dbo.Person ADD CONSTRAINT PK_Person PRIMARY KEY CLUSTERED (RowID);
GO
-- Same name, different case: perfectly legal under a binary (case-sensitive) collation.
CREATE TABLE dbo.person
(   RowID int NOT NULL IDENTITY (1, 1),
    FirstName varchar(30) NOT NULL,
    LastName varchar(30) NOT NULL);
GO
ALTER TABLE dbo.person ADD CONSTRAINT PK_person PRIMARY KEY CLUSTERED (RowID);
GO

Under the Latin1_General_BIN collation, this is a completely valid schema because case sensitivity is applied.  However, when you try to switch to a case-insensitive collation with:

ALTER DATABASE [TestCollationChange] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [TestCollationChange] COLLATE SQL_Latin1_General_CP1_CI_AS;

these immediately become duplicate objects.  So where do we go from here?  First, the error message tells us that the object name is ‘person’, so you might consider running a query against sys.objects:

SELECT * FROM sys.objects WHERE name = 'person';

The only problem is that this will return just one row; remember, we are still in Latin1_General_BIN, so case sensitivity is being applied.  To get around this, we need to change our query to collate the name column using our new collation:

SELECT * FROM sys.objects WHERE name COLLATE SQL_Latin1_General_CP1_CI_AS = 'person';

This will show us both of the objects, and it becomes immediately clear why we have a duplication issue: different cases.  In the case of the actual database I was working on, the duplicate objects were two stored procedures (actually four if you think about it), and the duplicates had object definitions similar to the following:

CREATE PROCEDURE [TestProcedure] AS
BEGIN
SELECT …….
-- Lots more logic, etc
END
GO

CREATE PROCEDURE [TESTProcedure] AS RETURN;
GO

I have no idea what the intent of the second procedure was, but after consulting with the client, it was determined that these duplicate stubs could be dropped, which then allowed the database collation to be changed to SQL_Latin1_General_CP1_CI_AS.  This might not be a viable solution if the application actually relies on the case-sensitive nature of the naming convention, though I wouldn’t personally ever build a database with duplicate object names regardless of the collation.
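If you face a similar migration, it’s worth hunting for every case-insensitive name collision up front rather than discovering them one failed ALTER DATABASE at a time.  A minimal sketch, run while the database is still in its original collation (substitute your own target collation as needed):

SELECT s.name AS schema_name,
       o.name COLLATE SQL_Latin1_General_CP1_CI_AS AS object_name,
       COUNT(*) AS duplicate_count
FROM sys.objects AS o
JOIN sys.schemas AS s
    ON s.schema_id = o.schema_id
GROUP BY s.name, o.name COLLATE SQL_Latin1_General_CP1_CI_AS
HAVING COUNT(*) > 1;

A similar GROUP BY against sys.columns (grouped by object_id and the collated column name) can catch column-name collisions, which fail the collation change in the same way.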

The Accidental DBA (Day 29 of 30): Troubleshooting Deadlocks

This month the SQLskills team is presenting a series of blog posts aimed at helping Accidental/Junior DBAs ‘keep the SQL Server lights on’. It’s a little taster to let you know what we cover in our Immersion Event for The Accidental/Junior DBA, which we present several times each year. You can find all the other posts in this series at http://www.SQLskills.com/help/AccidentalDBA. Enjoy!

Deadlocks occur in SQL Server whenever two processes attempt to acquire locks on the same resources in reverse order, creating a persistently blocked condition in which neither of the sessions can continue to execute until the other session releases its locks.  In SQL Server, the Lock Monitor background task is responsible for detecting and resolving deadlocks whenever they occur (basically terminating the session that has performed the least amount of work so far), and the resulting 1205 error can be a sign of problems that require further evaluation.  In a third-party vendor application, it may not be possible to make the changes necessary to eliminate deadlocks, but you can still collect information about the deadlocks to assist the vendor in analyzing and possibly identifying a solution to the problem.
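To make the ‘reverse order’ pattern concrete, here is a minimal hypothetical repro (dbo.TableA and dbo.TableB are stand-ins; the numbered steps are run alternately from two separate sessions):

-- One-time setup (either session):
CREATE TABLE dbo.TableA (ID int PRIMARY KEY, Val int);
CREATE TABLE dbo.TableB (ID int PRIMARY KEY, Val int);
INSERT dbo.TableA (ID, Val) VALUES (1, 0);
INSERT dbo.TableB (ID, Val) VALUES (1, 0);

-- Step 1 (session 1):
BEGIN TRANSACTION;
UPDATE dbo.TableA SET Val = 1 WHERE ID = 1;  -- session 1 locks the row in TableA

-- Step 2 (session 2):
BEGIN TRANSACTION;
UPDATE dbo.TableB SET Val = 1 WHERE ID = 1;  -- session 2 locks the row in TableB

-- Step 3 (session 1):
UPDATE dbo.TableB SET Val = 2 WHERE ID = 1;  -- blocks, waiting on session 2

-- Step 4 (session 2):
UPDATE dbo.TableA SET Val = 2 WHERE ID = 1;  -- blocks, waiting on session 1;
-- the Lock Monitor now picks a victim and raises error 1205 in that session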

Collecting Information

Prior to SQL Server 2008, collecting deadlock information from SQL Server required enabling a trace flag, configuring a SQL Trace, using Event Notifications, or using a WMI alert.  Trace flags 1222, 1205, and 1204 write the deadlock information as text into the ERRORLOG.  SQL Trace, Event Notifications, and WMI alerts allow collection of the deadlock information as XML.  Since the introduction of Extended Events and the new system_health event session in SQL Server 2008, deadlock information has been captured by default in SQL Server and no longer requires enabling additional data collection for analysis.
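For example, deadlocks that have already happened can usually be pulled straight out of the system_health session’s ring_buffer target with a query along these lines (a sketch; the ring buffer wraps on busy servers, so older deadlocks may have aged out):

SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM
(   -- Materialize each xml_deadlock_report event captured in the ring buffer
    SELECT XEvent.query('.') AS XEvent
    FROM
    (   SELECT CAST(target_data AS xml) AS TargetData
        FROM sys.dm_xe_session_targets AS st
        JOIN sys.dm_xe_sessions AS s
            ON s.address = st.event_session_address
        WHERE s.name = N'system_health'
          AND st.target_name = N'ring_buffer'
    ) AS Data
    CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData (XEvent)
) AS src;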

Analysis

The definitive source for understanding the output from trace flag 1222 is a series of blog posts written by Bart Duncan.  His three-part series uses the output from trace flag 1222 to demonstrate how to read the XML deadlock graph information, starting with Deadlock Troubleshooting, Part 1.  The same method of analysis applies to deadlock graphs collected by SQL Trace, Event Notifications, WMI, and even Extended Events.  The format of the deadlock graph defines the deadlock victim(s), each of the processes involved in the deadlock (within the process-list node), and the resources contributing to the deadlock (within the resource-list node).  Each process has an assigned process id that uniquely identifies it and the resources it is locking or requesting in the graph.  Within each process’s information, the execution tsql_stack shows the statements from the most recently deadlocked statement backwards to the start of the execution call stack.
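For orientation, a heavily abbreviated deadlock graph has a shape like the following (values trimmed and most attributes omitted; real graphs carry far more detail per node):

<deadlock victim="process1">
  <process-list>
    <process id="process1" spid="52">
      <executionStack>
        <frame>UPDATE dbo.TableB SET ...</frame>
      </executionStack>
    </process>
    <process id="process2" spid="53">
      ...
    </process>
  </process-list>
  <resource-list>
    <keylock objectname="dbo.TableB" indexname="PK_TableB">
      <owner-list><owner id="process2" mode="X" /></owner-list>
      <waiter-list><waiter id="process1" mode="U" requestType="wait" /></waiter-list>
    </keylock>
    ...
  </resource-list>
</deadlock>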

One of the key areas of focus for deadlock analysis is the resource-list portion of the graph, which contains all the information about the resources involved and the lock types being held and requested by each of the processes.  This will also contain the index and object names, or the allocation unit associated with the object, which can be used to determine the name of the object and index.  Understanding the locking order between the processes is essential for deadlock troubleshooting.

In addition to viewing the raw XML or text information for the deadlock, it is also possible to view the information graphically, as explained in my blog post Graphically Viewing Extended Events Deadlock Graphs.  The graphical view in Management Studio will not show all of the same details as the XML or text, but it can be a fast start for understanding the type of deadlock and the locking order.  It may still be necessary to look at the text or XML for further information in some scenarios, or you can open the graph in SQL Sentry’s excellent Plan Explorer Pro to get the full output parsed as a table as well.

Possible solutions

There are a lot of potential solutions to prevent deadlocks from occurring, and the correct one will depend on the specific deadlock condition.  In some deadlock scenarios, an index change to cover one of the executing queries may be all that is necessary to prevent the deadlock condition from being possible.  In other scenarios, it may be necessary to change isolation levels, or to use locking hints to force a blocking lock that is incompatible with other locks, to prevent the deadlock condition from being encountered.  Proper analysis of the deadlock graph will help determine the appropriate solution to the problem, but in most cases simple error-handling logic in Transact-SQL or .NET application code to catch the 1205 error and resubmit the victim transaction can prevent end users from being negatively affected by deadlocks, as sketched below.
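A minimal Transact-SQL sketch of that retry pattern follows; dbo.Account and its UPDATE are hypothetical stand-ins for the real transaction, and THROW assumes SQL Server 2012 or later (use RAISERROR on older builds):

-- Retry a transaction up to three times if it is chosen as a deadlock victim.
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
        COMMIT TRANSACTION;
        BREAK;  -- success, so leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries -= 1;              -- deadlock victim: try again
            WAITFOR DELAY '00:00:00.100';   -- brief pause before the retry
        END
        ELSE
            THROW;  -- not a deadlock, or out of retries: re-raise the error
    END CATCH
END;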

Summary

Troubleshooting deadlocks in SQL Server starts with collecting the deadlock graph information using one of the available methods.  Extended Events in SQL Server 2008 and later collect the information by default, eliminating the need to enable further collection and then wait for the deadlocks to recur in order to gather the information.  Full details of how to configure deadlock graph collection, along with analysis of specific scenarios, can be found in my SimpleTalk article Handling Deadlocks in SQL Server and in my Pluralsight online training course SQL Server: Deadlock Analysis and Prevention.

The Accidental DBA (Day 24 of 30): Virtualization High Availability

This month the SQLskills team is presenting a series of blog posts aimed at helping Accidental/Junior DBAs ‘keep the SQL Server lights on’. It’s a little taster to let you know what we cover in our Immersion Event for The Accidental/Junior DBA, which we present several times each year. You can find all the other posts in this series at http://www.SQLskills.com/help/AccidentalDBA. Enjoy!

Virtualization has been popular for many years, and more and more businesses are moving low-latency line-of-business applications like SQL Server into virtual machines every day.  One of the common reasons I’ve heard over the years for moving SQL Server to a virtual machine is that high availability is built in.  Usually this translates into, “We don’t need to use SQL Server availability options because the VM already has HA.”  This may be the case for some scenarios, but as the saying goes, “there’s no such thing as a free lunch.”  In this post we’ll look at the high availability provided to virtual machines and the considerations that need to be taken into account when deciding whether or not to implement SQL Server high availability in virtual machines.

Basic Virtual Machine HA

The high availability provided through virtualization depends on the configuration of the host environment on which the VMs are running.  Typically, a high-availability virtualization configuration clusters multiple host servers together using a shared-storage solution (SAN, NFS, or NAS) for the virtual machine hard disks.  This provides resilience against the failure of one of the host servers by allowing the virtual machines to restart on one of the other hosts.  Both Hyper-V and VMware provide automated detection of failures and will restart the VMs automatically on another host, provided that sufficient resources exist to meet any reservations configured for the individual VMs.

VMs also gain better availability than physical servers through features like Live Migration/vMotion and the ability to perform online storage migrations, moving the virtual hard disks from one storage array to another available to the host(s).  This can be very useful for planned maintenance windows, SAN upgrades, or balancing load across the host servers to maximize performance in response to performance problems.  The VM tools installed in the guest to improve performance and integration with the host server can also monitor the availability of the guest through regular ‘heartbeats’, allowing the host to determine that a VM has crashed, for example with a blue screen of death (BSOD), and automatically restart the guest VM in response.

VM Specific HA Features

In addition to the basic high availability provided by virtualization, there are VM-specific HA features offered by both VMware and Hyper-V for improving the availability of individual VMs.  VMware introduced a feature for VM guests called Fault Tolerance in vSphere 4 that creates a synchronized secondary virtual machine on another host in the high-availability cluster that is kept in lockstep with the primary.  In the event of a host failure, guests that have Fault Tolerance enabled immediately fail over to their secondary in a manner similar to a vMotion operation, preventing application downtime.  At the same time, a new secondary VM is created on another host inside the cluster and synchronized with the new primary, maintaining the fault tolerance of the guest inside the environment.  Unfortunately this is limited to a single virtual CPU, even in ESX 5.1, so it’s not likely to be used with SQL Server VMs.

Hyper-V does not currently provide an equivalent feature to VMware Fault Tolerance, even in Server 2012.  Hyper-V 2012 introduced Replicas, which provide disaster recovery through replication to a remote data center with manual failover, but this doesn’t provide automated failover in a manner similar to Fault Tolerance.

SQL Server Considerations

The primary consideration I ask about when it comes to SQL Server high availability on virtualization is whether or not it is acceptable to incur the planned downtime associated with routine maintenance tasks like Windows Server OS patching and SQL Server patching with Service Packs or Cumulative Updates.  If planned downtime to allow for patching is possible, then the high availability provided by virtualization may meet your business requirements.  However, I would always recommend testing a host failure to determine the amount of time required to detect the failure and restart the VM on another host, including the time required for Windows to boot and for SQL Server to perform crash recovery to make the databases available again.  This may take 3-5 minutes, or even longer depending on the environment, which may not fit within your downtime SLAs.

If planned downtime for applying server patches is not possible, you will need to pick a SQL Server availability option using the same considerations as you would for a physical server implementation.  Support for Failover Clustering of SQL Server on SVVP-certified platforms was introduced in 2008, and Database Mirroring and Availability Groups are also supported under server virtualization.  However, none of the SQL Server high availability options are supported in conjunction with Hyper-V Replica, so there are additional limitations that need to be considered whenever you combine features on top of server virtualization.  One factor that should always be weighed in the decision to virtualize SQL Server and use SQL Server’s native high availability options is the added complexity introduced by the virtualization layer.