Upgrading SQL Server–Database Compatibility Levels

One important factor to consider during an upgrade and migration effort is the database compatibility level, which controls certain database behaviors so that they are compatible with a particular version of SQL Server. Unlike the database file version (which gets changed when you restore a database backup on a newer version or attach an older database to a newer version of SQL Server), the database compatibility level can be changed back and forth between different values.

Here is an example of how to set the database compatibility level in T-SQL:

ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120;

Figure 1 shows the supported database compatibility levels by major SQL Server version.


  Figure 1: Supported Database Compatibility Levels By Version

By default, a new user database created in an instance of SQL Server is set to the compatibility level of that version of SQL Server (so a new user database in SQL Server 2017 would have a database compatibility level of 140). The exception is when the model database has a lower database compatibility level. Upgrading a database with a database compatibility level lower than the lowest allowed level sets the database compatibility level to the lowest level allowed for that version (as shown in Figure 1).
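
You can quickly check the current compatibility level of every database on an instance with a simple catalog query:

-- Check the database compatibility level of every database on the instance
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;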

The classic reason to use an older database compatibility level after an upgrade to a newer version of SQL Server is as an interim migration aid, to work around application compatibility issues that might surface after a migration (and that you hopefully discovered during your pre-migration testing efforts). After further testing and possible database/application changes, it is pretty common to eventually change the database compatibility level to match the database engine version of the instance, which lets you take advantage of the new features and behavior differences in that version.

With SQL Server 2014 and newer, the database compatibility level also controls whether the new cardinality estimator or the legacy cardinality estimator is used. The new cardinality estimator gives equivalent or better query performance for most queries, but it can also cause very significant performance regressions for some queries. It is very important to do as much testing as possible before you change the compatibility level to 120 or higher. My post Upgrading SQL Server–Diagnosing Performance Regressions goes into much more detail about this issue.

Additional Resources

My new Pluralsight course, SQL Server: Upgrading and Migrating to SQL Server 2016, has just been published. This is my eleventh course for Pluralsight, and the complete list of my courses is here.

Building on this online course is a new three-day class, IEUpgrade: Immersion Event on Upgrading SQL Server, taught by Tim Radney and me. The first round of this course will be taught in Chicago from October 11-13, 2017.

Finally, I will be presenting a half-day session called Migrating to SQL Server 2017 at the PASS Summit 2017 in Seattle, WA from October 31-November 3, 2017.

Here is a link to the complete series about upgrading SQL Server.

Upgrading SQL Server–Replacing Slow Hardware Part 2

Previously, we talked about the differences in single-threaded performance, CPU capacity, and hardware cost between an extremely poor server model and processor SKU choice and a much better server model and processor SKU choice from about three and a half years ago. In that case, the bad choice had only about 34% of the single-threaded CPU performance and about 34% of the CPU capacity of the good choice processor SKU.

There was a $746.00 increase in the cost of the processor itself, and perhaps a $500.00 difference in the cost of a base Dell PowerEdge R620 compared to a base Dell PowerEdge R320 server. The SQL Server license cost would have been identical in either case, based on the number of physical cores.

Since the legacy hardware choice was so poor, it is even easier to get a significant increase in single-threaded CPU performance and overall CPU capacity by purchasing a brand new model server with the best available processor at the same physical core count. A new model server will also have higher memory density and capacity, more PCIe 3.0 lanes, and a number of manageability advantages.

Dell’s 14th-generation servers support the new 14nm Intel Xeon Scalable (Skylake-SP) processors, which have anywhere from four to 28 physical cores. In our case, we want to replace a legacy Dell PowerEdge R320 server that had one Intel Xeon E5-2407 v2 processor with four physical cores, a base clock speed of 2.4GHz, no Turbo Boost, no Hyper-Threading, a relatively small 10MB L3 cache, a slow 6.4 GT/s QPI speed, three channels of DDR3-1066 memory support, and 24 PCIe 3.0 lanes.

The best quad-core Intel Xeon Scalable processor SKU for a two-socket server is the Intel Xeon Gold 5122 processor, which has a base clock speed of 3.6GHz, a Turbo clock speed of 3.7GHz, Hyper-Threading, 16.5MB of L3 cache, six channels of DDR4-2666 memory support, and 48 PCIe 3.0 lanes, and a price of $1,221.00.

A bad choice at the same physical core count is the Intel Xeon Silver 4112 processor, which has a base clock speed of just 2.6GHz, a Turbo clock speed of only 3.0GHz, Hyper-Threading, 8.25MB of L3 cache, six channels of slower DDR4-2400 memory support, 48 PCIe 3.0 lanes, and a price of $473.00. Saving $748.00 by getting the cheaper processor is foolish, false economy for SQL Server usage.

A good server for a Xeon Gold 5122 processor is the new Dell PowerEdge R740 two-socket server. Even though this is a two-socket server, you can choose to only populate one socket to start with (which is what we will do here). One trade-off when you only use one processor in a two-socket server is that you cut the overall memory capacity of the server in half (until you add another processor later).

The main point here is that you should do this type of analysis when looking at new servers and selecting a particular processor for SQL Server 2016/2017 usage. You should definitely avoid the Intel Xeon Silver and Bronze lines for SQL Server usage, since you give up so much performance and scalability for a relatively small hardware cost reduction.
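
As a starting point for this kind of analysis, you can check what SQL Server actually sees on an existing server with a quick query against sys.dm_os_sys_info. This is just a minimal sketch: the cpu_count and hyperthread_ratio columns exist in all supported versions, while the socket_count and cores_per_socket columns require SQL Server 2016 SP2 or newer.

-- Processor counts as seen by SQL Server
SELECT cpu_count,                                       -- logical processors visible to SQL Server
       hyperthread_ratio,                               -- logical processors per physical socket
       cpu_count / hyperthread_ratio AS calculated_sockets,
       socket_count,                                    -- SQL Server 2016 SP2 and newer only
       cores_per_socket                                 -- SQL Server 2016 SP2 and newer only
FROM sys.dm_os_sys_info;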


Upgrading SQL Server–Replacing Slow Hardware

One unfortunately common scenario is a situation where someone made a very poor processor choice sometime in the past for a database server that runs a mission-critical workload. When I do health checks for SQLskills, I encounter both good processor choices and some pretty bad ones.

One recent example was a 22nm, quad-core Intel Xeon E5-2407 v2 processor (Ivy Bridge-EN) being used in a Dell PowerEdge R320 one-socket server. The Intel Xeon E5-2407 v2 was launched in Q1 of 2014, and it was a part of the Ivy Bridge-EN family. The EN suffix means lower base clock speeds, smaller L3 caches, and lower QPI speeds compared to the EP suffix processors, which is not a good thing for SQL Server usage. If you want to know how to decode Intel Xeon processor numbers, Intel explains it here.

This particular processor has a base clock speed of 2.4GHz, no Turbo Boost, no Hyper-Threading, a relatively small 10MB L3 cache, and a slow 6.4 GT/s QPI speed. This processor was an extremely bad choice for SQL Server usage, and it would have been a prime candidate for my laptop comparison method that I discussed here. It did have the virtue of only costing $250.00 for the processor though!

Since this processor has four physical cores, it would cost $1,858.00 per core for SQL Server 2016 Standard Edition (for a total of $7,432.00), and $7,128.00 per core for SQL Server 2016 Enterprise Edition (for a total of $28,512.00). This means you could be paying anywhere from 30X to 114X your processor cost for the SQL Server 2016 licenses, depending on which edition you purchased.

A couple of things can be said in defense of the person who made this choice. The single-socket Dell PowerEdge R320 only supported Intel Xeon E5-2400 v2 family processors, and the 2.4GHz Intel Xeon E5-2407 v2 was at least a better choice than the even slower 1.8GHz Intel Xeon E5-2403 v2 they could have picked! The primary motivation was probably to pick a low-cost server with a low core count processor, to minimize hardware and SQL Server licensing costs.

Given that sort of goal, a much better choice at the time would have been a two-socket Dell PowerEdge R620 server with just one quad-core Intel Xeon E5-2637 v2 processor. The E5-2637 v2 has a base clock speed of 3.5GHz, a 3.8GHz Turbo Boost speed, Hyper-Threading, a larger 15MB L3 cache, and a faster 8.0 GT/s QPI speed. It did cost a little more for the processor, at a price of $996.00. A base two-socket PowerEdge R620 would also cost a little more than a base one-socket PowerEdge R320, but the overall difference in hardware cost would be pretty negligible compared to the SQL Server licensing costs.

Using an analysis based on TPC-E scores, I estimate that one Intel Xeon E5-2407 v2 would have a system CPU capacity score of 192.16, with a score per core of 48.04. A single Xeon E5-2637 v2 would have a system capacity score of 559.77, with a score per core of 139.94. That is quite a difference!


Upgrading SQL Server–Compelling Upgrade Arguments

Over the course of my career, I can’t remember too many people coming to me and initiating a conversation about upgrading to a new version of SQL Server, a new version of Windows Server, or a new set of server hardware and storage. As a DBA, Data Architect, or Consultant, I have generally had to take the lead and make the business and technical case for an upgrade effort. Unfortunately, this is not always easy, since you are going against institutional inertia, budget limitations, and many other common objections.

One proven way to start this process is to honestly assess the biggest issues in your current Production environment. Perhaps you have multiple servers or SQL Server instances that are underutilized and might be good candidates for consolidation. Perhaps you have multiple instances that are running on hardware that is out of warranty, on storage that is also out of warranty. Another possibility is persistent and worsening performance and scalability issues that are negatively affecting the business and also causing an increased workload for your I.T. staff.

Whatever the main issues are, you also need to ask yourself whether you have done as much as you can to alleviate them with your current resources. Have you done all you can to maintain, properly configure, and tune your environment, from top to bottom? Doing your job as a confident and effective DBA will usually improve the situation to a certain extent, perhaps enough to postpone the immediate necessity for an upgrade. This helps your organization in the short term and also builds your credibility.

It is also possible that, despite your best maintenance and tuning efforts, you may still be in a situation where an upgrade (hopefully a complete platform upgrade) is the best long-term answer. Pushing for an upgrade after you have gained knowledge and credibility by doing everything you can to improve your current environment is much easier than if a platform upgrade is the first thing you ask for!

Using Your Laptop or Workstation for a Hardware Comparison

One tactic that I have used on more than one occasion to help build the case for a hardware upgrade is to run a couple of quick, standardized synthetic CPU/memory benchmarks, such as Geekbench, Cinebench, or CPU-Z, on both my laptop or workstation and a Production database server. Of course, you want to do this in an intelligent and safe way. You should have an idea (from previous analysis and research) how fast your workstation is, for both single-threaded and multi-threaded performance, compared to the database server in question. You would also only want to run tests like this (if you actually run them) on a server that is not serving a Production workload at the time.

What I mean by “if you actually run them” is that most of the well-known benchmarks have online databases where you can browse the results from other systems. Other sources of comparative benchmark information are server-oriented hardware review sites such as ServeTheHome and StorageReview. Using sources like these, you may be able to avoid actually running these benchmarks on your Production server.

Depending on the age and speed of your workstation and age and speed of your legacy database server, you may find that your workstation is significantly faster than your legacy database server for single-threaded CPU performance. You may even find that it has a higher multi-threaded score (which represents capacity) than your legacy database server.

Once you are sure of your facts (and have done as much other research and analysis as possible to back them up), you can choose a strategic time and method to disclose this information to your management. This might be in an email or in a meeting, depending on the situation and the people involved.

For example, after reviewing the extensive efforts you and your team have made to improve performance on your current environment, you could casually mention that according to CPU-Z and Geekbench, your laptop (point at it) has 73% higher single-threaded CPU performance than your current Production database server. Often, this will get an initial laugh, but if you have all of your evidence and analysis ready to present, this can actually be a compelling point that can plant the seed in your boss’s mind.

Another general approach is to be persistent. Quite often, despite your best efforts and preparation, your management team will not approve your upgrade plan. I have found that if you are determined, and continue to improve the quality of your arguments over time, you will usually prevail in the end.


Upgrading SQL Server–Mitigating Performance Regressions

As I discussed yesterday, it is sometimes possible to experience performance regressions after upgrading to a new version of SQL Server, even when it is running on a new, presumably faster database server. This is much less likely to happen if you have done a good job of analysis, planning, testing, benchmarking, and post-migration configuration activity, but it is still possible.

Hardware Selection and Configuration

Depending on the age and hardware specifications of your new database server compared to your old database server, it is definitely possible for a new database server to be significantly slower (in terms of single-threaded CPU performance) than your old server was. For example, let’s say that your old database server had two eight-core Intel Xeon E5-2667 v2 processors, while your new database server has two eight-core Intel Xeon Silver 4108 processors. Even though the Xeon Silver is three generations newer than the old Xeon E5 v2, there is a huge base clock speed difference between the two specific processors in question. The Xeon E5-2667 v2 has a base clock speed of 3.3GHz, while the Xeon Silver 4108 has a base clock speed of 1.8GHz, so even with the generational improvements, the newer processor is not going to be nearly as fast for single-threaded performance. That particular new processor would be a very bad choice for SQL Server, but if you have been reading my blog, I don’t think you would make such a poor choice!

From a configuration perspective, you want to make sure that you have the appropriate BIOS settings on your new database server. Power management should be set to OS control or disabled, NUMA should not be disabled, Turbo Boost should be enabled, and Hyper-Threading should be enabled (except for some types of reporting/DW workloads).

If your server is a virtualization host, you want to make sure that VT-x and VT-d are enabled so that you can take advantage of the virtualization support built into your processor. Speaking of that, newer generation Intel processors have much better virtualization performance compared to legacy processors, which is another reason to upgrade your hardware. You also want to make sure that your hypervisor is using the High Performance power policy.

You need to make sure that Windows Server is using the High Performance power plan rather than the default Balanced power plan. After doing all of this, you should run CPU-Z to confirm how fast your processors are actually running and whether Turbo Boost is being used. While you are at it, you can use the CPU benchmark feature in CPU-Z to run a quick CPU test on your old database server and compare it to your new database server.

SQL Server Configuration and Performance

After validating your hardware configuration and performance, you will need to look at how your new SQL Server instance is configured and how it is performing. Fortunately, SQL Server 2016 and newer do a much better job of instance configuration during the installation process, so there are fewer things that need to be changed after a typical installation compared to older versions of SQL Server.

Microsoft has a great document called “Recommended updates and configuration options for SQL Server 2012 and SQL Server 2014 with high-performance workloads” that is full of useful settings to consider and perhaps change based on your workload. Even though Microsoft claims that SQL Server 2016 doesn’t really need as much manual configuration as previous versions, looking at that document is still a useful exercise.
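
As a minimal, hedged sketch, here are a few instance-level settings that are commonly reviewed after a fresh installation. The values shown are placeholders only; the right values depend entirely on your hardware and workload.

-- Allow advanced options to be changed
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- Leave memory headroom for the OS (112640MB is a placeholder for a 128GB server)
EXEC sys.sp_configure N'max server memory (MB)', 112640;
-- Raise the default cost threshold so trivial queries do not go parallel
EXEC sys.sp_configure N'cost threshold for parallelism', 50;
-- Reduce plan cache bloat from single-use ad hoc plans
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
-- Compress backups by default
EXEC sys.sp_configure N'backup compression default', 1;
RECONFIGURE;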

Microsoft also has a very useful Post-migration Validation and Optimization Guide which is a pretty good starting point for diagnosing and alleviating post-migration performance issues.


Upgrading SQL Server–Diagnosing Performance Regressions

One fairly common issue you might encounter when upgrading to SQL Server 2014 or later is a performance regression in some queries. SQL Server 2014 introduced a new cardinality estimator (CE) that is used by the query optimizer by default when you change the compatibility level of a database to 120 or higher.

Most of the time, the new CE works well, and gives better query performance than the legacy CE. Unfortunately, the new CE sometimes leads to dramatically worse query performance for some queries. This particular issue has been the most frequent problem I have experienced and heard of with upgrades to SQL Server 2014 and newer over the past several years. Fortunately, there are a number of things you can do to prevent being surprised by this when you go into production.

First, you need to do a thorough job of testing query performance on the new version of SQL Server, with both cardinality estimators. This should start with manual testing of your most important stored procedures and queries, capturing their execution plans and I/O statistics. Then, during any functional or load testing, you should be collecting and analyzing diagnostic information using my DMV Diagnostic queries.

Ideally, you should be doing additional testing and analysis with the new Microsoft Database Experimentation Assistant (DEA). The DEA is a new A/B testing solution for SQL Server upgrades, which assists in evaluating a targeted version of SQL Server for a given workload. It can be used to compare performance between SQL Server 2005 and any later version of SQL Server.

If you do run into performance regressions with the new version of SQL Server, the very first thing you should do is read Joe Sack’s seminal whitepaper “Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator” to understand how the new CE works, and what mitigation options were available with SQL Server 2014.

These include changing the compatibility level to 110 for a particular database to get the legacy CE for the entire database, or using query-level hints to force the legacy or new CE for a particular query, regardless of the database compatibility level. You can also use an instance-level trace flag to use either CE for all databases.
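
Here is a minimal sketch of those SQL Server 2014-era options in T-SQL. The database and table names are placeholders, and the QUERYTRACEON hint requires sysadmin rights (or a plan guide):

-- Use the legacy CE for an entire database via the compatibility level
ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110;

-- Force the legacy CE for a single query (trace flag 9481),
-- regardless of the database compatibility level
SELECT COUNT(*)
FROM Sales.SalesOrderHeader
OPTION (QUERYTRACEON 9481);
-- (Trace flag 2312 forces the new CE in the same way)

-- Use the legacy CE for all databases on the instance
DBCC TRACEON (9481, -1);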

SQL Server 2016 lets you use database-scoped configurations to change which CE will be used for each database, regardless of the database compatibility level. This gives you a lot more flexibility. SQL Server 2016 also has Query Store, which is very helpful for finding and correcting query plan regressions.
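
For example, this database-scoped configuration turns on the legacy CE for the current database without touching its compatibility level:

-- SQL Server 2016 and newer: use the legacy CE for this database only
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;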

SQL Server 2017 adds automatic plan correction, which leverages Query Store to try to force the last known good plan for query plans that have performance regressions.
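
Enabling it is a one-line change (Query Store must already be enabled for the database):

-- SQL Server 2017: automatically force the last known good plan
-- when Query Store detects a plan-choice regression
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);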


Upgrading SQL Server–Delayed Durability

One of the interesting new features that was added in SQL Server 2014 was delayed durability. This feature lets you trade some transaction durability for increased log write performance. This is possible because delayed durable transaction commits are asynchronous and report a commit as successful before the log records for the transaction are flushed and actually written to disk.
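
As a quick sketch (the database name is a placeholder), delayed durability is controlled at the database level, and can also be requested per transaction when the database allows it:

-- Database-level setting: DISABLED (the default) | ALLOWED | FORCED
ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    -- ... DML statements here ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);  -- honored because the database allows it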

The price you pay for this possible performance boost is the possibility of some data loss, as explained in more detail here by Paul Randal.

The main reason you might want to try this feature is if your workload and infrastructure actually have a performance bottleneck writing to your transaction log file (which is not that common). This will be characterized by high WRITELOG and LOGBUFFER wait types.
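
You can check for this with a quick look at the cumulative wait statistics (a minimal sketch; these numbers accumulate since the last instance restart or wait-stats clear):

-- Look for a high share of log-write waits
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'WRITELOG', N'LOGBUFFER')
ORDER BY wait_time_ms DESC;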

The first thing I would check, and correct if necessary, is the virtual log file (VLF) count for the database. Having an extremely high VLF count can hurt write performance to the log file. Setting your autogrowth increment to an appropriately large size will slow the growth of your VLF count.
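
On SQL Server 2016 SP2 and newer, you can get the VLF count with the sys.dm_db_log_info DMF (on older versions, use DBCC LOGINFO and count the rows returned); the database name here is a placeholder:

-- Count the virtual log files in the transaction log
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'AdventureWorks2014'));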

Other things I would consider trying before I used delayed durability would be to make sure I understood the I/O performance characteristics of the LUN where my transaction log file was located, using a synthetic storage benchmark tool such as Microsoft DiskSpd. If necessary, I would explore whether it is possible to do anything to improve the write performance of that LUN.

If I had multiple databases that all had their transaction log files on the same disk, I would want to make sure the LUN where they were located had very good random write performance. It makes a lot of difference whether you have a small number of magnetic drives using a parity-based RAID level (such as RAID 5), or high performance flash-based storage.

Parity-based RAID levels have an unavoidable write penalty, since they have to write the parity information after they write the actual striped data. Hardware RAID controllers with fast multi-core processors and a large hardware cache dedicated to writes can help mask this problem.

Keep in mind that not all flash storage is created equal. There are major performance differences between traditional SATA AHCI drives and newer PCIe NVMe drives, and also between read-optimized and write-optimized devices. Another important factor is the capacity of the flash device you are using. Larger capacity devices usually have much better write performance than smaller capacity devices from the same manufacturer and product line.

Aaron Bertrand wrote a great post about delayed durability, including some good performance benchmarks, here. Personally, I have seen major performance gains with delayed durability when running in an Azure VM that has relatively poor storage performance. Note that you cannot use delayed durability if you are using transactional replication, change tracking, or change data capture (CDC).


Upgrading SQL Server–Separate Version of SSMS

One upgrade opportunity that you can take advantage of pretty easily, whether or not you are actually upgrading your database servers, is to use a modern version of SQL Server Management Studio (SSMS). Starting with SQL Server 2016, SSMS was separated from the rest of the product, and it is not even included with the standard installation media.

Instead, SSMS is available from a separate download site that gets updated whenever there is a new SSMS release, which happens about once a month. The current release is SSMS 17.2, and the changes are detailed in the full changelog. One of the more interesting new features in SSMS 17.2 lets you search SQL Server graphical query plans using Ctrl+F, pretty much any time you see a graphical execution plan.

Having SSMS separate from the rest of SQL Server lets Microsoft update and release new versions of SSMS as often as they like, which is typically more often than SQL Server Cumulative Updates. Microsoft recommends that you use SSMS 17.2, even with down-level versions of SQL Server. It is officially supported with SQL Server 2008 and newer, and it also works very well with Azure SQL Database.

One tangible pre-upgrade step you can take long before you move to a modern version of SQL Server is to use SSMS 17.2. It can be installed side-by-side with SSMS 16.X and also with older versions of SSMS. It runs on Windows 7 SP1 and later, so there is one less barrier to using it on client workstations.

If you have internet connectivity, SSMS 16.X and newer can automatically check for newer versions of SSMS. You can also manually check for updates by going to the Tools menu in SSMS and selecting “Check for Updates”. The update dialog looks like what you see in Figure 1.


Figure 1: SQL Server Management Studio Update Dialog


Upgrading SQL Server–Migrating with Database Mirroring

Another relatively quick and easy way to do a Production migration of data from one place to another, or from an older version of SQL Server to a newer version, is to use database mirroring. Each user database must be in the full recovery model before you can set up a database mirroring partnership, and it needs to stay in the full recovery model at all times. When you set up a database mirroring partnership, both the principal instance and the mirror instance need to be on the same edition of SQL Server (meaning Standard to Standard, or Enterprise to Enterprise). It is possible to do a rolling edition upgrade with database mirroring, as I describe here.

Database mirroring was deprecated in SQL Server 2012, but it still works just fine in SQL Server 2016 and in SQL Server 2017. Early in the SQL Server 2017 development cycle, Microsoft had hinted that they were actually going to remove database mirroring from the product for SQL Server 2017, but they later changed their mind, and I think they may never actually remove it from the product.

Database mirroring is supported as an upgrade/migration method if the principal instance is running SQL Server 2008 SP3 or later, SQL Server 2008 R2 SP2 or later, SQL Server 2012 SP1 or later, or SQL Server 2014, and the secondary instance is running SQL Server 2016 or SQL Server 2017. The database failover is a one-time, one-way trip when you go from an older version of SQL Server to a newer version of SQL Server. Microsoft has some guidance about this upgrade process here.

Compared to log shipping, database mirroring is a little more complicated to get set up, and it has more restrictions. Database mirroring is easier to set up if both servers in the partnership are in the same Windows Domain, but if this is not the case, you can use certificates for your endpoint authentication.

Synchronous database mirroring lets you run a simple command to fail over each mirrored user database from the principal instance to the mirror instance (as long as each database is in a synchronized state). This failover typically takes about 5-10 seconds.

Here are the basic steps for doing the final Production migration to a newer version of SQL Server:

  1. Stop the application(s) that are using the principal database(s) on the primary server
  2. Make sure each database is in a synchronized state and is using synchronous mirroring
  3. Fail over each mirrored user database, using a T-SQL script (see the sketch below)
  4. Remove the mirroring partnership, using a T-SQL script
  5. Redirect the applications to the new database server
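
Here is a minimal sketch of the T-SQL for steps three and four; AdventureWorks2014 is a placeholder database name:

-- Step 3: run on the principal instance for each synchronized, mirrored database
ALTER DATABASE AdventureWorks2014 SET PARTNER FAILOVER;

-- Step 4: after the failover completes, remove the mirroring partnership
-- (run on the new principal, which is the former mirror)
ALTER DATABASE AdventureWorks2014 SET PARTNER OFF;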

Steps three and four can usually be done in less than a minute, depending on how many databases you have, how much transaction log activity has occurred, and your network and storage infrastructure performance.


Upgrading SQL Server–Migrating with Log Shipping

One easy and quick way to do a Production migration of data from one place to another, or from an older version of SQL Server to a newer version, is to use log shipping. Log shipping may not be new and shiny like availability groups, but it has very few blockers for implementation, and it does not add any additional workload to your legacy databases after you get it running. You do have to have each database in the full or bulk-logged recovery model before you can use log shipping. If you have a database in the simple recovery model and you try to enable log shipping, you will get a message like what you see in Figure 1.


Figure 1: Log Shipping Error Message

Log shipping does not have automatic failover out of the box, but you can pretty easily write some T-SQL to run the proper SQL Server Agent jobs on the primary and secondary server, check the status of the secondary database, and then recover the secondary database and bring it online on the new server. Doing all of this can semi-automate your failover process.

Here are the basic steps for doing the final Production migration:

  1. Stop the application(s) that are using the primary database(s) on the primary server
  2. Run the LSBackup Agent job on the primary server for each log shipped database
    1. EXEC msdb.dbo.sp_start_job N'LSBackup_AdventureWorks2014';
  3. Run the LSCopy Agent job on the secondary server for each log shipped database
    1. EXEC msdb.dbo.sp_start_job N'LSCopy_LABDB03_AdventureWorks2014';
  4. Run the LSRestore Agent job on the secondary server for each log shipped database
    1. EXEC msdb.dbo.sp_start_job N'LSRestore_LABDB03_AdventureWorks2014';
  5. Check the recent history of the log shipped databases on the secondary server
    1. Query the log_shipping_monitor_history_detail table in msdb
  6. Restore each log shipped database on the secondary WITH RECOVERY
    1. RESTORE DATABASE AdventureWorks2014 WITH RECOVERY;
  7. Redirect the applications to the new database server

Steps two through six can usually be done in just a few minutes, depending on how many databases you have, how much transaction log activity has occurred, and your network and storage infrastructure performance.
