Three Years with SQLskills

It’s hard to believe that three years ago, I started my journey with SQLskills working with Kimberly, Paul, Jonathan, Erin, and Glenn. The time has just flown by, but that’s because I’ve been really busy these past three years.

During this time, I’ve completed countless health audits for clients all over the world, along with numerous upgrades, migrations, tuning engagements, corruption cases, and general SQL Server consulting. That is the day-to-day stuff. During all that consulting, I’ve written over a dozen articles for SentryOne, had three Pluralsight courses published, spoken at SQLintersection in the Spring and Fall each year, created an IEAzure course, and co-created our IEUpgrade course. Our entire team supports the SQL Server community by speaking at user groups and other events, and since joining SQLskills I’ve had the honor of presenting over 100 sessions.

The past three years have been truly amazing and I look forward to many more.

Microsoft Database Experimentation Assistant with SQL Server 2017

If you’ve tried using the Microsoft Database Experimentation Assistant (DEA) to replay a trace against SQL Server 2017 prior to CU1, then you’ve likely encountered an error like the one below telling you that the Preprocessed IRF path is empty or contains invalid files. The great news is that after applying CU1, this has been corrected.


So how do you get started working with DEA? First you have to install it. You can find the install package here:

Once DEA is installed, if you want to replay a trace, you’ll also have to install the Distributed Replay Controller. You’ll find this option in the SQL Server installation. I typically use my controller server as a client too, so I install both.


I haven’t found it documented anywhere, but when you install the Distributed Replay Controller, it prompts you to provide a controller name. I’ve installed this on numerous occasions, and unless I give the controller the same name as the hostname of the server, I cannot replay a trace. For example, my demo VM for this test is named 2017DEA, so that is the name I had to provide as the controller machine. Just an FYI.


Every time I have installed Distributed Replay, I’ve also needed to change the service to run as a user account with the necessary privileges, and I’ve needed to make some DCOM changes to allow remote launch.

I decided to put this to the test and set up a brand-new virtual machine with SQL Server 2017 RTM and DEA, and configured the Distributed Replay Controller and Client. I tried replaying a trace captured on SQL Server 2012 and got the same IRF path error. I then installed CU1 for SQL Server 2017 and, without having to reboot, tried replaying the trace again by cloning the trace and clicking Start. It was successful and effortlessly replayed the 10-minute trace I had captured. I then applied CU2 and it worked as expected again. The image below shows the replay in progress after applying CU1.


I’m personally very glad to know that Database Experimentation Assistant is now functional on SQL Server 2017 CU1 and above, and that I can use this for future migration testing with clients.
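If you want to confirm that an instance is at CU1 or later before testing a replay, you can check the update level with SERVERPROPERTY (a minimal sketch; ProductUpdateLevel returns values such as CU1 or CU2, and NULL on RTM):

  -- Check the build and cumulative update level of the instance
  SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
         SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel;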

Stay tuned for a future post with step-by-step instructions for installing, configuring, and using DEA.

New Course: Implementing a Hybrid Cloud Solution in Azure

My latest Pluralsight course – SQL Server: Implementing a Hybrid Cloud Solution in Azure – is well over two hours long and is based on part of my popular workshop/precon that I have been presenting over the past 18 months. This was a fun and challenging course to develop and record.

The modules are:

  • Introduction
  • Understanding Azure IaaS
  • Understanding Hybrid Solutions
  • Understanding How to Create and Manage Azure VMs
  • Migrating Databases to Azure VMs and Scaling
  • Understanding Azure SQL Database and Stretch Databases
  • Monitoring Options with Azure Portal and SSMS

You can check it out here and I hope that you enjoy it.

SQLskills now has over 170 hours of online training available (see all our 56 courses here), all for as little as $29/month through Pluralsight (including more than 6,000 other developer and IT training courses). That’s an unbeatable value that you can’t afford to ignore.

SQLskills SQL101: Using Backup to URL for Azure VM Migrations

Beginning with SQL Server 2012 SP1 CU2, we’ve been able to back up SQL Server directly to the Windows Azure Blob service. This makes it much easier to stage databases for Azure virtual machine migrations, as well as to perform the actual migrations. I’ve used this method numerous times with client migrations due to its simplicity and ease of use.

To use backup to URL, you’ll need a few things: an active Azure subscription, an Azure storage account, and that account’s access key.

To get started, first log into your Azure account and click on Storage accounts. Next, click Add to create a new storage account.




You’ll need to provide a name for your account, select the deployment model, the kind of account, the performance level, and the replication option, choose whether to enable secure transfer, select which subscription to use if you have more than one, pick a resource group and location, and decide whether to configure a virtual network. For this exercise, I am naming my account myblogdemo (all lowercase), choosing an existing resource group, and selecting East US; for the rest of the settings I am taking the defaults.


Next you need to create a container within the storage account. Under Storage accounts, click on the newly created account; for my demo, that is myblogdemo. Then, under Blob service, click on Containers. Toward the top of the screen you’ll see a message stating “You don’t have any containers yet. Click ‘+ Container’ to get started.” Click + Container to add a container. You’ll need to provide a name for the container as well as select the public access level. I like to set my containers to Private (no anonymous access) so that the container and blob data can only be read by the Windows Azure account. I’m going to name my container blogdemo.


You should now see the container you just created. In my demo, I see blogdemo. Click on your container and then container properties.


To the right of the screen you will see the properties displayed. Under URL is the address you will need to provide when using backup to URL. It will be similar to https://<storage account name>.blob.core.windows.net/<container name>, which for this demo is https://myblogdemo.blob.core.windows.net/blogdemo.

Next you need to create a credential on your SQL Server instance; in Object Explorer this is located under Security. To create a credential, right-click on Credentials and select New Credential. You can specify any name that you would like, the identity is your storage account name, and the password is one of the Access keys for your storage account. You have two keys and can use either one; simply copy and paste the value as your password, confirm it, and then click OK.
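If you prefer T-SQL to the GUI, the equivalent looks something like this (the credential name is one I made up for this sketch; the identity is the myblogdemo storage account and the secret is one of its access keys):

  -- Create a credential for backup to URL using the storage account access key
  CREATE CREDENTIAL [AzureBackupCredential]
  WITH IDENTITY = 'myblogdemo',
       SECRET = '<paste one of the storage account access keys here>';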



Now you should have a storage account, a container, and a credential on your local SQL Server. You can now utilize the backup to URL feature and back up to the Windows Azure Storage service. You can demo this by backing up a database. If this is just a test and you are using a production database, I recommend using WITH COPY_ONLY so that you don’t potentially mess up a restore chain.

Your syntax would be similar to this:
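(A sketch, assuming the myblogdemo storage account and blogdemo container from this demo, a hypothetical database named SalesDB, and the credential created above.)

  -- Copy-only, compressed backup straight to the Azure blob container
  BACKUP DATABASE SalesDB
  TO URL = 'https://myblogdemo.blob.core.windows.net/blogdemo/SalesDB_COPY.bak'
  WITH CREDENTIAL = 'AzureBackupCredential',
       COPY_ONLY,
       COMPRESSION,
       STATS = 5;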


When I executed the above query, it took approximately 3 minutes and 58 seconds to back up the 351MB database, which was 108MB compressed.

To restore this database, you just have to create a credential on the SQL Server instance on your Azure virtual machine, and the syntax would be similar to:
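(Again a sketch, using the same hypothetical SalesDB backup and a credential with the same name created on the Azure VM; the logical file names and MOVE paths are placeholders for your own.)

  -- Restore the backup from the blob container onto the Azure VM
  RESTORE DATABASE SalesDB
  FROM URL = 'https://myblogdemo.blob.core.windows.net/blogdemo/SalesDB_COPY.bak'
  WITH CREDENTIAL = 'AzureBackupCredential',
       MOVE 'SalesDB' TO 'D:\Data\SalesDB.mdf',
       MOVE 'SalesDB_log' TO 'L:\Logs\SalesDB_log.ldf',
       STATS = 5;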


Leveraging backup to URL makes migrating to Azure virtual machines easy. You can stage a full restore and then just restore the needed logs during cut-over. Although there are a number of different ways to migrate to Azure, this method gives you much-needed control and, as you’ve just learned, it is very easy to set up and configure.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out:



SQLskills SQL101: Recovery Models Matter

I was recently working with a client when I noticed a job for shrinking transaction logs. I asked the client about the job and found out that they had a weekly scheduled process to switch the recovery model of a database from full to simple, shrink the transaction log, and then switch back to the full recovery model. They also were not using native SQL Server backups; they were just relying on daily VM-level backups.

I immediately explained how vulnerable the client was to potential data loss and how a proper backup process would allow them to meet their recovery SLAs. By switching recovery models and not taking transaction log backups, they did not have any point-in-time recoverability. I also explained that a database in full recovery without transaction log backups will cause the transaction log to continue to grow; by taking regular log backups, inactive VLFs can be cleared and that portion of the log reused.

The client was very receptive and we quickly implemented a backup plan that would meet their recovery needs.

Switching recovery models has its purpose, but automating a process like this just to save space screams of a bigger issue. If your transaction logs are growing to an unruly size, there is a reason: long open transactions preventing VLFs from clearing, log backups not being taken frequently enough to clear the log, index maintenance that may need to be adjusted, or a number of other causes. For most organizations, allocating the space that the logs need for normal business processes and maintenance is sufficient and an understood cost of doing business.
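If you are trying to work out why a log keeps growing, a quick place to start is checking what is preventing log reuse (a simple sketch using sys.databases):

  -- Show each database's recovery model and what is holding up log reuse
  SELECT name,
         recovery_model_desc,
         log_reuse_wait_desc
  FROM sys.databases;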

Every DBA who is responsible for backups and restores should be familiar with their restore process and validate that what they think they can restore can actually be restored. The time to find out that you don’t have proper backups is NOT when you are restoring production. If you inherited a system recently, don’t trust that you are protected. Verify and validate that maintenance and backups are being performed to meet the expected SLAs.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out:

SQLskills SQL101: Validating SQL Server Backups

At the Spring SQLintersection conference, I presented my session Understanding SQL Backups and Restores. In this session I primarily talk about how to create a solid recovery strategy and how to restore SQL Server backups. I cover full, differential, transaction log, and piecemeal restores. Throughout the session, I stress how important it is to practice restoring your backups. There are multiple reasons for doing this. One is so that you know how to properly restore: you don’t want the first time you have to perform a restore with a full, a differential, and 30 log files to be in production, with a boss looking over your shoulder. Another benefit is restore validation, the process of performing regular restores to make sure that your backups are valid.

Even though you may be an expert at restoring backups, a regular process of restoring your backups for validation is still needed. I always tell my audience that if you are in a heavily regulated industry, examiners and auditors love to see that you have a restore validation process. You can easily prove that your backups are good and how long restores take.

A couple of weeks after the conference, an attendee emailed me that when they tried to restore their backups from an offsite location to their DR site, the restores failed. They received the error “The media family on device ‘backup_file.bak’ is incorrectly formed”. I wrote back that this message usually indicates a bad backup file, or an attempt to restore a backup from a higher version of SQL Server to a lower version, for example a 2016 backup to a 2014 instance. I was assured that both servers were the same version and that all backup files were failing, but that if they tried restoring the same file to the original server, it worked. At this point, I asked them to explain their entire process: how they were taking the backups, getting the files offsite, and then getting them to the DR site. It turns out that they FTP from production to offsite, and then FTP from offsite to DR, and the FTP is where the process was failing. They were transmitting in binary mode from production to offsite, but in text mode from offsite to DR. Transmitting the file as text was slightly modifying it and causing the issue. They re-transmitted the file using binary mode and were able to successfully restore.
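When a restore fails like this, it is worth sanity-checking the backup file itself before digging into the transfer process. A sketch (the file path is a placeholder):

  -- Read the backup header; a damaged or truncated file typically fails here too
  RESTORE HEADERONLY
  FROM DISK = 'D:\Backups\backup_file.bak';

  -- Verify the backup without actually restoring it
  RESTORE VERIFYONLY
  FROM DISK = 'D:\Backups\backup_file.bak';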

Had they not tested this and a true production issue had occurred, they would have lost valuable time which could have also resulted in significant financial loss. I cannot stress enough how important it is to practice testing and validating recovery strategies. When an issue arises, you should have 100% confidence in your plan and your ability to recover. If anything in your recovery plan changes, make sure a full regression of your recovery strategy is validated as well.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out:

SQLskills SQL101: Database Maintenance for Azure SQL Database

I’ve been presenting on Azure SQL Database to groups all over the world. A common question that I’m asked is about maintenance on Azure SQL databases. Since backups are handled by the platform, some think that index and statistics maintenance and consistency checks are also handled by the platform. This just isn’t the case; the customer is still responsible for regular maintenance. That said, certain safeguards are in place to help contend with physical corruption; however, checking for logical corruption is up to the consumer, so I still recommend that clients run their own consistency checks.
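The maintenance commands themselves are the same ones you would run on-premises. A minimal sketch, assuming a hypothetical table dbo.Orders with an index named IX_Orders_OrderDate:

  -- Consistency check
  DBCC CHECKDB WITH NO_INFOMSGS;

  -- Reorganize a fragmented index
  ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

  -- Update statistics on the table
  UPDATE STATISTICS dbo.Orders;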

Azure SQL Database doesn’t come with a SQL Server Agent like the Enterprise and Standard Editions do. This means you have to find another way to schedule maintenance rather than manually running scripts on a daily or weekly basis. The good news is that we have a number of different ways to schedule jobs:

  • Linked servers
  • Database Maintenance Plans
  • PowerShell
  • Azure Services
  • Elastic Jobs

Using a linked server on an existing on-premises server or an Azure VM is one of the most common methods to schedule jobs. Most of my clients use this method; they just reference the linked server in the job step to call the stored procedures.
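For example, a job step on the on-premises server might call a maintenance procedure in the Azure SQL database over the linked server (a sketch; the linked server and procedure names are made up, and the linked server needs RPC Out enabled for EXEC ... AT):

  -- Run a maintenance procedure in the Azure SQL database via the linked server
  EXEC ('EXEC dbo.usp_IndexMaintenance;') AT [AzureSqlDbLinkedServer];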

Database Maintenance Plans are also a common method: you just reference the connection string in the step, provide login credentials, and let it run.

Working with many aspects of Azure is simplified with PowerShell, and running database maintenance through PowerShell scripts isn’t much different than using T-SQL. When using PowerShell, you still need some type of scheduler to automate running the script.

Azure Services include Azure Automation, which to me is a fantastic way to create and run processes to manage your Azure platform. This service allows you to create runbooks to do everything from provisioning to monitoring and more. There are over 100 built-in runbooks that you can select from, and all of this is driven by PowerShell too.

Elastic Jobs is another Azure-based service that can be used with elastic pools. This option can be used to run processes against an entire pool or against custom-created groups.

Regardless of your method of scheduling your tasks, for the most part, you treat Azure SQL Databases like any other database. They need regular maintenance and tuning just like your on-premises databases.

My latest course: Understanding and Using Azure SQL Database

SQLskills just launched our new training class on Azure that I will be teaching in Chicago, and my course on Azure SQL Database was just published on Pluralsight too!

My latest Pluralsight course – SQL Server: Understanding and Using Azure SQL Database – is just over two hours long and is based on my popular user group and conference session. This was a fun course to develop and record.

The modules are:

  • Introduction
  • Understanding Azure SQL Database Features
  • Exploring Unsupported Features
  • Understanding Azure SQL Database Pricing
  • Migrating Data to Azure SQL Database
  • Monitoring and Tuning Azure SQL Database

You can check it out here and I hope that you enjoy it.

SQLskills now has 150 hours of online training available (see all our 51 courses here), all for as little as $29/month through Pluralsight (including more than 5,000 other developer and IT training courses). That’s an unbeatable value that you can’t afford to ignore.

SQLskills SQL101: File Extensions for SQL Server Database Files

I recently had a client reach out to me about database performance dropping drastically. I asked if anything had changed recently on the server and they told me that all they had done was increase the number of files for tempdb from 1 to 8 per best practice.

Knowing that this shouldn’t have a negative impact on the system, we agreed that I would spend an hour or so troubleshooting. What I immediately noticed was that McAfee was consuming a considerable amount of CPU. I/O on tempdb was also considerably high. McAfee was constantly scanning portions of tempdb, and the reason why is that the DBA had named the additional data files tempdev.002, tempdev.003, and so on.

McAfee had exclusions for the standard file extensions of .mdf, .ndf, and .ldf. There was no exclusion for .00X. Although .mdf, .ndf, and .ldf are the default naming convention, SQL Server doesn’t really care what extensions you use. However, if you plan to deviate from the standards, make sure that any antivirus or driver-based filters account for the new naming standard.
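If you do add tempdb files and want to stay inside the standard antivirus exclusions, stick with the default extensions. A sketch (the file path and sizes are placeholders):

  -- Add a tempdb data file using the standard .ndf extension
  ALTER DATABASE tempdb
  ADD FILE (NAME = tempdev2,
            FILENAME = 'T:\TempDB\tempdev2.ndf',
            SIZE = 8GB,
            FILEGROWTH = 512MB);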

What happened for this client is that they effectively performed a denial of service against tempdb with their antivirus software. Lesson learned: if you are going to deviate from a standard, make sure to test in a development or QA environment first.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out:

SQLskills SQL101: Azure SQL Database Pricing Models

Microsoft has implemented a database-as-a-service/platform-as-a-service environment known as Azure SQL Database. In this environment, the hardware and instance configuration are taken care of for you, and you manage and support the databases. Finally we have a production-ready cloud database. When looking at pricing models, you will find DTUs, eDTUs, and DSUs; which of these is correct for you?

What are DTUs?

Database Transaction Units are the unit of measure, a blended mix of CPU, memory, and disk I/O, available to a single Azure SQL database. Azure SQL databases have three service tiers: Basic, Standard, and Premium. The Basic tier supports 5 DTUs and a database up to 2GB. The Standard tier supports between 10 and 100 DTUs and a database up to 250GB. The Premium tier supports between 125 and 4000 DTUs and a database up to 500GB, and the top two Premium levels, P11 and P15, support a database up to 1TB.
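If you are trying to decide which tier fits an existing workload, the resource usage DMV inside the database is a helpful starting point (a sketch; sys.dm_db_resource_stats keeps roughly the last hour of data at 15-second intervals):

  -- Recent resource consumption for this Azure SQL database
  SELECT end_time,
         avg_cpu_percent,
         avg_data_io_percent,
         avg_log_write_percent,
         avg_memory_usage_percent
  FROM sys.dm_db_resource_stats
  ORDER BY end_time DESC;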

What are eDTUs?

Elastic Database Transaction Units are just like DTUs, except they apply to elastic pools. An elastic pool is a set of resources that is shared between a set of databases on an Azure SQL server. Elastic pools are very helpful when you have to support multiple Azure SQL databases or a set of databases with unpredictable workloads. Much like the service tiers for DTUs, the eDTU service tiers are Basic, Standard, and Premium. The Basic tier has pools from 100 to 1200 eDTUs; however, no single database can consume more than 5 eDTUs. The Standard pool also supports 100 to 1200 eDTUs, but its databases can consume up to 100 eDTUs each. The Premium pool supports 125 to 1500 eDTUs, and a database can consume up to the pool’s eDTU limit, except in the 1500 eDTU pool, where a single database can consume only up to 1000 eDTUs.

What are DSUs?

Database Stretch Units are the unit of measure for compute when using SQL Server Stretch Database. When you migrate warm or cold transactional data using Stretch Database, it is stored in an Azure SQL database. DSU levels range from 100 DSUs up to 6000 DSUs.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out: