Azure SQL Managed Instance – Business Critical Tier

Microsoft has announced the GA date for the Business Critical tier: Azure SQL Managed Instance Business Critical will be generally available on December 1st, 2018. For customers who need the super-fast storage provided by local SSDs on the instance, you now have a release date. Business Critical also provides the ability to have a readable secondary. Technically, you have three secondaries: two are non-readable, and the third can be made available for read scale-out. All you need to do is enable Read Scale-Out and update your connection string for read-intent workloads. There is no additional cost for using the readable secondary, so all customers should take advantage of this feature; offloading some of your read workload to the secondary frees up resources on your primary.
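
To use it, you add ApplicationIntent=ReadOnly to your connection string. Once connected, a quick way to confirm you landed on the readable secondary is a check like this (a minimal sketch; run it in the context of any user database):

    -- Returns READ_ONLY when connected to the readable secondary,
    -- and READ_WRITE when connected to the primary.
    SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Updateability];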

Business Critical is designed for mission-critical business apps with high I/O requirements, and it supports high availability with the highest level of storage and compute redundancy. Something to consider is that Business Critical leverages an Availability Group for HA, while General Purpose is built upon Windows Failover Clustering. Both tiers provide HA; however, you may experience a slight disruption during a General Purpose failover, which still provides good HA, whereas with Business Critical the failover is almost seamless.

Mark your calendars for December 1st; until then, keep enjoying the preview pricing for your proofs of concept.

If you are considering a migration to Azure SQL Database or Managed Instance and need help, reach out.

Azure Training – London

I am very excited about my upcoming IEAzure class being held in London, September 10th and 11th, at the Marriott in Kensington.

This is a content rich class covering SQL Server running on Azure Virtual Machines (Azure IaaS) as well as Azure SQL Database and Azure SQL Managed Instance (PaaS).

You read that correctly: we’ve expanded the IEAzure course to include Managed Instance. This module will cover what sets Managed Instance apart from on-premises SQL Server and Azure SQL Database. I’ll cover everything needed to provision a Managed Instance (this isn’t difficult, but many people don’t get it right the first time), how you can scale Managed Instance, and migration strategies. You’ll learn about the performance tiers, the cost of each, and what sets the tiers apart (hint: built-in HA options).

This is going to be a fun class full of demos and discussions.

If you are in the area, don’t miss this opportunity; register today.

In addition to IEAzure, the SQLskills team will be in London for two weeks in September for IEPTO1, IECAG, and IEPTO2.

Why You Need Baselines

When working with clients, I regularly implement a basic baseline package to collect key metrics. At a minimum, I like to capture CPU utilization, page life expectancy (PLE), disk I/O, wait statistics, and a database inventory. This allows me to track and trend what normal CPU utilization looks like, and collecting PLE shows whether they are experiencing memory pressure during the day. Disk I/O metrics are good to have in order to know what normal latency looks like during certain intervals of the day. Capturing wait statistics at regular intervals is a must in order to start tracking down any perceived performance issues (please tell me where it hurts!).
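
As an example of how lightweight this kind of collection can be, here is a sketch of a PLE capture similar to what a collection job might run (the DMV query is standard; where you store the results is up to you):

    -- Capture Page Life Expectancy with a timestamp for trending over time.
    SELECT GETDATE() AS capture_date,
           [object_name],
           cntr_value AS page_life_expectancy_sec
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy';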

A quick internet search will show several packages that others have created and blogged about. All the ones I’ve found collect some of the same metrics I am collecting, but not quite the way I like to look at them. My process is simple, the way I like it. I manually create a database called SQLskillsMonitor so that I can place it where it should reside. I then run my script PopulateDB to create the tables and stored procedures. Next I run the script CreateJobs to create the SQL Server Agent jobs. Each job is prefixed with SQLskills-Monitoring-… I also include a purge job that you can modify to delete any data older than the specified time frame; I have it defaulted to 90 days. That is it. No fuss and no complicated requirements.
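
The purge job is nothing fancy; conceptually it runs a delete along these lines for each collection table (the table and column names here are illustrative, not the actual ones in the package):

    -- Keep 90 days of history; adjust @DaysToKeep to suit your retention needs.
    DECLARE @DaysToKeep int = 90;

    DELETE FROM dbo.WaitStatsHistory   -- hypothetical collection table
    WHERE CaptureDate < DATEADD(DAY, -@DaysToKeep, GETDATE());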

You are welcome to customize the process any way you like. You can change the database name or add tables and stored procedures to collect anything else you find useful. I’ve included the key things I like to track for performance baselines as well as baselines for upgrades and migrations. If you do add additional metrics, come back and comment on this post and share what you’ve done so others can benefit too.

This in no way should replace or eliminate the need for a nice third-party monitoring solution like SentryOne’s SQL Sentry, but in a pinch, when my clients don’t have anything else in place, this helps.

Calling all user group leaders! I want to present on Managed Instances for you!

At SQLskills, we like presenting to user groups. It’s a great way to give back to our community and share things that we learn about.

Paul Randal put out a call to user group leaders earlier this year, but I’ve recently been digging deep into Azure SQL Database Managed Instance and would like to show your user group all about it. I put together a session covering some of the differences between Azure SQL Database Managed Instance and both Azure SQL Database and on-premises SQL Server, and presented it this week at SQLintersection. The audience really enjoyed the session, and everyone was really impressed with all the functionality that Managed Instance gives us.

If you would like to learn more about Azure SQL Database Managed Instance and can host a remote presentation (I can use our WebEx account), then reach out to me. I also have several other sessions I wouldn’t mind presenting to your user group. My list of sessions is:

  • An Introduction to Azure SQL Managed Instances
  • Common SQL Server Mistakes and How to Avoid Them
  • Azure SQL Database for the Production DBA
  • Understanding Backup and Restore

So, calling all user group leaders! If you’d like me to present remotely for you in 2018 (or maybe even multiple times), send me an email and be sure to include:

  • Details of which user group you represent (and if sending from a shared user group account, your name)
  • The usual day of the month, meeting time, and timezone of the user group
  • Which months you have available, starting in April 2018 (a list of available dates would be ideal)
  • Whether you’d like just one or multiple sessions

Thanks and please spread the word, Azure SQL Database Managed Instance is really cool!

SSMS 17.6 is available

If you are using Azure SQL Database Managed Instance, you may want to consider installing the latest SQL Server Management Studio.

There are bug fixes for Always On, Setup, Object Explorer, Data Classification (important for GDPR), SMO, and Database Mail.

Read the official release here or jump right to the download page here.

Three Years with SQLskills

It’s hard to believe that three years ago, I started my journey with SQLskills working with Kimberly, Paul, Jonathan, Erin, and Glenn. The time has just flown by, but that’s because I’ve been really busy these past three years.

During this time, I’ve completed countless health audits for clients all over the world, along with numerous upgrades, migrations, tuning engagements, corruption cases, and general SQL Server consulting. That is the day-to-day stuff. During all that consulting, I’ve written over a dozen articles for SentryOne, had three Pluralsight courses published, spoken at SQLintersection in the spring and fall each year, created an IEAzure course, and co-created our IEUpgrade course. Our entire team supports our SQL Server community by speaking at user groups and other events. Since joining SQLskills I’ve had the honor of presenting over 100 sessions.

The past three years have been truly amazing and I look forward to many more.

Microsoft Database Experimentation Assistant with SQL Server 2017

If you’ve tried using the Microsoft Database Experimentation Assistant (DEA) with SQL Server 2017 prior to CU1 to replay a trace, then you’ve likely encountered an error like the one below telling you that the Preprocessed IRF path is empty or contains invalid files. The great news is that after applying CU1, this has been corrected.

[Screenshot: DEA error stating the Preprocessed IRF path is empty or contains invalid files]

So how do you get started working with DEA? First you have to install it. You can find the install package here.

Once DEA is installed, if you want to replay a trace, you’ll also have to install the Distributed Replay Controller. You’ll find this option in SQL Server setup. I typically use my controller server as a client too, so I install both.

[Screenshot: SQL Server feature selection with Distributed Replay Controller and Client selected]

I haven’t found it documented anywhere, but when you install the Distributed Replay Controller, it prompts you to provide a controller name. I’ve installed this on numerous occasions, and unless I give the controller the same name as the hostname of the server, I cannot replay a trace. For example, my demo VM for this test is named 2017DEA, so that is the name I had to provide as the controller machine. Just an FYI.

[Screenshot: Distributed Replay Controller configuration showing the controller name]

Every time I have installed Distributed Replay, I’ve also needed to change the services to run as a user account with sufficient privileges, and I’ve needed to make some DCOM changes to allow remote launch.

I decided to put this to the test and set up a brand-new virtual machine with SQL Server 2017 RTM and DEA, and configured the Distributed Replay Controller and Client. I tried replaying a trace captured on SQL Server 2012 and got the same IRF path error. I then installed CU1 for SQL Server 2017 and, without having to reboot, tried replaying the trace again by cloning the trace and clicking Start. It successfully and effortlessly replayed the 10-minute trace I had captured. I then applied CU2 and it worked as expected again. The image below shows the replay in progress after applying CU1.

[Screenshot: DEA replay in progress after applying CU1]

I’m personally very glad to know that Database Experimentation Assistant is now functional on SQL Server 2017 CU1 and above, and that I can use this for future migration testing with clients.

Stay tuned for a future post with step-by-step instructions for installing, configuring, and using DEA.

New Course: Implementing a Hybrid Cloud Solution in Azure

My latest Pluralsight course – SQL Server: Implementing a Hybrid Cloud Solution in Azure – is well over two hours long and is based on part of my popular workshop/precon that I have been presenting over the past 18 months. This was a fun and challenging course to develop and record.

The modules are:

  • Introduction
  • Understanding Azure IaaS
  • Understanding Hybrid Solutions
  • Understanding How to Create and Manage Azure VMs
  • Migrating Databases to Azure VMs and Scaling
  • Understanding Azure SQL Database and Stretch Databases
  • Monitoring Options with Azure Portal and SSMS

You can check it out here and I hope that you enjoy it.

SQLskills now has over 170 hours of online training available (see all our 56 courses here), all for as little as $29/month through Pluralsight (including more than 6,000 other developer and IT training courses). That’s an unbeatable value that you can’t afford to ignore.

SQLskills SQL101: Using Backup to URL for Azure VM Migrations

Beginning with SQL Server 2012 SP1 CU2, we’ve been able to back up SQL Server databases directly to the Windows Azure Blob storage service. This makes migrations to Azure virtual machines much easier, both for staging databases and for the actual migration. I’ve used this method numerous times with client migrations due to its simplicity and ease of use.

In order to use backup to URL, you’ll need a few things: an active Azure subscription, an Azure storage account, and its access key.

To get started, first log into your Azure account and click on Storage accounts. Next, click Add to create a new storage account.

[Screenshot: the Storage accounts blade in the Azure portal]

You’ll need to provide a name for your account; select the deployment model, the kind of account, your performance level, and any replication options; choose whether to enable secure transfer; select which subscription to use if you have more than one and which resource group to use; pick your location; and decide whether to configure a virtual network. For this exercise, I am naming my account myblogdemo (all lowercase), choosing an existing resource group, and selecting East US; for the rest of the settings I am taking the defaults.

[Screenshot: the Create storage account blade]

Next you need to create a container. Under Storage accounts, click on the newly created account; for my demo, that is myblogdemo. Then, under Blob service, click on Containers. You’ll get a message toward the top of the screen stating “You don’t have any containers yet. Click ‘+ Container’ to get started.” Click + Container to add a container. You’ll need to provide a name for the container and select the public access level. I like to set my containers to Private (no anonymous access) so that the container and blob data can only be read with the storage account credentials. I’m going to name my container blogdemo.

[Screenshot: creating the blogdemo container]

You should now see the container you just created (in my demo, blogdemo). Click on your container and then on Container properties.

[Screenshot: the container properties pane]

To the right of the screen you will see the properties displayed. Under URL is the value you will need to provide when using backup to URL; it will be similar to https://myblogdemo.blob.core.windows.net/blogdemo.

Next you need to create a credential on your SQL Server instance; in Object Explorer this is located under Security. To create a credential, right-click Credentials and select New Credential. You can specify any name you would like; the identity is your storage account name, and the password is found under Access keys for your storage account. You have two keys and can select either one; simply copy and paste the value as your password, confirm it, and click OK.

[Screenshot: the New Credential dialog]

[Screenshot: the storage account Access keys blade]
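
If you prefer T-SQL over the GUI, the equivalent is a CREATE CREDENTIAL statement like this sketch (the credential name is your choice; the identity must be the storage account name and the secret is one of the access keys):

    CREATE CREDENTIAL [AzureBackupCredential]       -- any name you like
    WITH IDENTITY = 'myblogdemo',                   -- the storage account name
         SECRET = '<paste either access key here>';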

Now you should have a storage account, a container, and a credential on your local SQL Server, and you can utilize the backup to URL feature to back up to the Windows Azure Storage service. You can demo this by backing up a database. If this is just a test and you are using a production database, I recommend using WITH COPY_ONLY so that you don’t potentially mess up a restore chain.

Your syntax would be similar to the following sketch (the database name is a placeholder, and the credential name matches the T-SQL example above):

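    BACKUP DATABASE [DemoDB]
    TO URL = 'https://myblogdemo.blob.core.windows.net/blogdemo/DemoDB.bak'
    WITH CREDENTIAL = 'AzureBackupCredential',
         COPY_ONLY,    -- don't disturb the existing backup chain
         COMPRESSION,  -- less data to send over the wire
         STATS = 5;    -- report progress every 5 percent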

When I executed the above query, it took approximately 3 minutes and 58 seconds to back up the 351 MB database, which was 108 MB compressed.

To restore this database, you just have to create a credential on the SQL Server instance on your Azure virtual machine, and your syntax would be similar to this sketch (using the same placeholder names):

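    RESTORE DATABASE [DemoDB]
    FROM URL = 'https://myblogdemo.blob.core.windows.net/blogdemo/DemoDB.bak'
    WITH CREDENTIAL = 'AzureBackupCredential',
         NORECOVERY,  -- stay in the restoring state so log backups can be applied at cut-over
         STATS = 5;

If this is a one-shot restore and you want the database online immediately, use WITH RECOVERY (the default) instead of NORECOVERY.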

Leveraging backup to URL makes migrating to Azure virtual machines easy. You can stage a full restore and then just restore the needed logs during cut-over. Although there are a number of different ways to migrate to Azure, this method gives you much-needed control, and as you’ve just learned, it is very easy to set up and configure.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out: SQLskills.com/help/SQL101


SQLskills SQL101: Recovery Models Matter

I was recently working with a client when I noticed a job for shrinking transaction logs. I asked the client about the job and found out that they had a weekly scheduled process to switch the recovery model of a database from full to simple, shrink the transaction log, and then switch back to the full recovery model. They also were not using native SQL Server backups; they were just relying on daily VM-level backups.

I immediately explained how vulnerable the client was to potential data loss and how a proper backup process would allow them to meet their recovery SLAs. By switching recovery models and not taking transaction log backups, they did not have any point-in-time recoverability. I also shared how a database in the full recovery model without transaction log backups will cause the transaction log to continue to grow; by taking regular log backups, SQL Server can truncate the inactive VLFs and reuse that portion of the log.
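
The fix itself is simple; here is a sketch of a typical scheduled log backup (the database name and path are placeholders, and in practice this runs as an Agent job on a schedule that matches your recovery point objective):

    -- Requires an existing full backup; regular log backups provide
    -- point-in-time recoverability and let inactive VLFs be reused.
    BACKUP LOG [ProductionDB]
    TO DISK = N'X:\Backups\ProductionDB_log.trn'
    WITH COMPRESSION, CHECKSUM;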

The client was very receptive and we quickly implemented a backup plan that would meet their recovery needs.

Switching recovery models has its purpose, but automating a process like this just to save space screams of a bigger issue. If your transaction logs are growing to an unruly size, there is a reason: you may have long open transactions preventing VLFs from clearing, you may not be taking log backups frequently enough to clear the log, your index maintenance may need to be adjusted, or one of a number of other causes may apply. For most organizations, allocating the space that the logs need for normal business processes and maintenance is sufficient and an understood cost of doing business.
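
If a transaction log will not clear, SQL Server will tell you why; this query is a quick first check:

    -- log_reuse_wait_desc shows what is preventing log truncation,
    -- for example LOG_BACKUP, ACTIVE_TRANSACTION, or NOTHING.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;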

Every DBA who is responsible for backups and restores should be familiar with the restore process and validate that what you think you can do in terms of restoring can actually be done. The time to find out that you don’t have proper backups is NOT when you are restoring production. If you inherited a system recently, don’t trust that you are protected; verify and validate that maintenance and backups are being performed to meet the expected SLAs.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out: SQLskills.com/help/SQL101