My latest course: Understanding and Using Azure SQL Database

SQLskills just launched our new training class on Azure that I will be teaching in Chicago, and my course on Azure SQL Database was just published on Pluralsight too!

My latest Pluralsight course – SQL Server: Understanding and Using Azure SQL Database – is just over two hours long and is based on my popular user group and conference session. This was a fun course to develop and record.

The modules are:

  • Introduction
  • Understanding Azure SQL Database Features
  • Exploring Unsupported Features
  • Understanding Azure SQL Database Pricing
  • Migrating Data to Azure SQL Database
  • Monitoring and Tuning Azure SQL Database

You can check it out here and I hope that you enjoy it.

SQLskills now has 150 hours of online training available (see all our 51 courses here), all for as little as $29/month through Pluralsight (including more than 5,000 other developer and IT training courses). That’s an unbeatable value that you can’t afford to ignore.

SQLskills SQL101: File Extensions for SQL Server Database Files

I recently had a client reach out to me about database performance dropping drastically. I asked if anything had changed recently on the server, and they told me that all they had done was increase the number of tempdb data files from 1 to 8, per best practice.

Knowing that this shouldn't have a negative impact on the system, we agreed that I would spend an hour or so troubleshooting. What I immediately noticed was that McAfee was consuming a considerable amount of CPU, and I/O against tempdb was also considerably high. McAfee was constantly scanning portions of tempdb, and the reason was that the DBA had named the additional data files tempdev.002, tempdev.003, and so on.

McAfee had exclusions for the standard file extensions of .mdf, .ndf, and .ldf; there was no exclusion for .00X. Although .mdf, .ndf, and .ldf are the default naming convention, SQL Server doesn't really care what extensions you use. However, if you plan to deviate from the standards, make sure that any antivirus or driver-based filters account for the new naming convention.
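
If you stick with the standard extensions when adding tempdb data files, existing antivirus exclusions continue to apply. Here is a minimal sketch of adding the files the conventional way; the logical names, path, and sizes are hypothetical and should be adjusted for your environment:

-- Hypothetical example: add tempdb data files using the standard .ndf
-- extension so existing .mdf/.ndf/.ldf antivirus exclusions still apply.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);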

What happened for this client is that they effectively performed a denial of service against tempdb with their own antivirus software. Lesson learned: if you are going to deviate from a standard, test in a development or QA environment first.

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out: SQLskills.com/help/SQL101

SQLskills SQL101: Azure SQL Database Pricing Models

Microsoft has implemented a database-as-a-service/platform-as-a-service offering known as Azure SQL Database. In this environment, the hardware and instance configuration are taken care of for you; you manage and support the databases. We finally have a production-ready cloud database. When looking at pricing models, you will find DTUs, eDTUs, and DSUs. Which of these is right for you?

What are DTUs?

Database Transaction Units are the unit of measure, blending CPU, memory, and disk I/O, available to a single Azure SQL database. Azure SQL databases have three service tiers: Basic, Standard, and Premium. The Basic tier supports 5 DTUs and a database up to 2GB. The Standard tier supports between 10 and 100 DTUs and a database up to 250GB. The Premium tier supports between 125 and 4000 DTUs and a database up to 500GB; the top two Premium levels, P11 and P15, support a database up to 1TB.
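
You choose the tier and performance level when you create the database (and you can scale it later). A minimal, hypothetical sketch in T-SQL; the database name and values are examples only:

-- Hypothetical example: create an Azure SQL database at a specific
-- service tier (EDITION) and performance level (SERVICE_OBJECTIVE).
CREATE DATABASE SalesReporting
(
    EDITION = 'Standard',
    SERVICE_OBJECTIVE = 'S2',
    MAXSIZE = 250 GB
);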

What are eDTUs?

Elastic Database Transaction Units are just like DTUs, except they apply to elastic pools. An elastic pool is a set of resources shared among a set of databases on an Azure SQL server. Pools are very helpful when you have to support multiple Azure SQL databases, or a set of databases with unpredictable workloads. Much like the DTU service tiers, eDTU service tiers come in Basic, Standard, and Premium. The Basic tier has pools from 100 to 1200 eDTUs; however, no single database can consume more than 5 eDTUs. The Standard tier also supports pools of 100 to 1200 eDTUs, but each database can consume up to 100 eDTUs. The Premium tier supports pools of 125 to 1500 eDTUs, and a database can consume up to the pool's eDTU limit, except in the 1500 eDTU pool, where a single database can consume at most 1000 eDTUs.
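
An existing database can be moved into a pool with T-SQL; the pool itself is created through the Azure portal, PowerShell, or the REST API. A hedged sketch, assuming a pool named ReportingPool already exists and reusing the hypothetical database from above:

-- Hypothetical example: move an Azure SQL database into an elastic pool
-- so it draws from the pool's shared eDTUs.
ALTER DATABASE SalesReporting
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = ReportingPool));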

What are DSUs?

Database Stretch Units are the unit of measure for compute when using SQL Server Stretch Database. When you migrate warm or cold transactional data using Stretch Database, SQL Server stores the data in an Azure SQL database. DSU levels range from 100 DSUs up to 6000 DSUs.
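
For context, Stretch Database is enabled at the instance, database, and table level in SQL Server 2016. The following is only a rough, hedged sketch of that pattern; the server, credential, database, and table names are hypothetical, and a database-scoped credential for the Azure server must already exist:

-- Hypothetical sketch: enable Stretch Database and migrate a cold table.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

ALTER DATABASE SalesHistory
SET REMOTE_DATA_ARCHIVE = ON
    (SERVER = N'stretchserver.database.windows.net',
     CREDENTIAL = [StretchCredential]);

ALTER TABLE dbo.OrderHistory
SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));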

I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out: SQLskills.com/help/SQL101

Spring 2017 SQLintersection

The Spring SQLintersection conference will be May 21st – May 24th. I attended my first SQLintersection conference in the Spring of 2015 and was blown away by how awesome a conference can be. There was an incredible lineup of speakers and sessions. What I found to be one of the best features of this conference is the attendee-to-speaker ratio. I have been to conferences with hundreds of attendees in each session; the content can be good, but if you have any follow-up questions, you can't get access to the speaker. At SQLintersection, the speakers were readily available and eager to answer follow-up questions after their sessions.

Another great thing about SQLintersection is the ability to attend workshops. For the Spring conference there are two days of pre-conference workshops and one day of post-conference workshops. I am fortunate to be presenting a full-day pre-conference workshop on Azure as well as three general sessions.

If you are planning to attend the Spring 2017 SQLintersection conference, you can use the discount code RADNEY to save $50 off of registration.

The Spring 2017 conference has a stellar line-up of speakers, including (in alphabetical order):
Aaron Bertrand
Tim Chapman
Jonathan Kehayias
Ben Miller
Brent Ozar
David Pless
Tim Radney
Paul Randal
Joe Sack
Shep Sheppard
Erin Stellato
Kimberly L. Tripp
Bob Ward
Buck Woody

Using the Azure SQL Database DTU Calculator

Over the past year I've been teaching and helping clients with Azure SQL Database. One of the most common questions I'm asked is, "What size tier do I need for my database?" I instinctively refer them to the DTU Calculator. This is a free tool that you can use to capture local instance metrics and upload to a website for an estimate of the service tier needed. The DTU Calculator captures data on processor utilization, disk reads and writes, and log bytes flushed.

Something to take into consideration about the DTU Calculator is that it collects metrics for the entire instance. If you are looking to migrate a single database from an instance, those metrics will be skewed. Depending on your situation, this may be fine; you'd know that the database in question would fit within the tier recommended by the DTU Calculator. If this is not acceptable, another option is to isolate the database on its own instance and then run the DTU Calculator against that instance.

Justin Henriksen, the creator of the DTU Calculator, suggests that you could use SQL Server DMVs to isolate a database, but I personally have not tried this.
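
I have not tried the DMV approach either, but a rough way to see how much of the instance's I/O a single database generates is sys.dm_io_virtual_file_stats. Note that its counters are cumulative since the instance last started, so you would need to compare two samples taken over your measurement window. A hedged sketch (this is not the DTU Calculator's method):

-- Rough sketch: per-database I/O totals, cumulative since instance startup.
SELECT DB_NAME(vfs.database_id)      AS database_name,
       SUM(vfs.num_of_reads)         AS total_reads,
       SUM(vfs.num_of_writes)        AS total_writes,
       SUM(vfs.num_of_bytes_written) AS total_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY total_bytes_written DESC;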

The DTU Calculator is offered as a command-line executable with a configuration file and as a PowerShell script. The two items I typically change are the location and name of my output file and the duration the script runs. The duration defaults to 3600 seconds, but when I'm conducting demos I usually reduce this to 60 seconds. In the real world, I run the process in one-hour increments, continuously throughout the day, so I get a full picture of the workload. I was excited to see the addition of the command-line executable rather than just the PowerShell option.

If you're looking to migrate multiple databases to Azure SQL Database, then an Elastic Database Pool is most likely a better option than managing multiple individual Azure SQL Database tiers. With an Elastic Database Pool, you'll have a pool of eDTUs that each database can draw from (the per-database limits differ between Standard and Premium pools).

If you are considering Azure SQL Database, check out the DTU Calculator, and if you need assistance migrating from on premises to Azure or Azure SQL Database, let us know.

How to run Windows Server 2012 Hyper-V on VMware Workstation

I recently needed to create a new virtual machine; however, with this virtual machine I needed to demo Windows Server 2012 with Hyper-V enabled. Since I run VMware Workstation as the hypervisor on my laptop, I needed to run a hypervisor within a hypervisor.

Installing Windows Server 2012 was no problem; however, when I tried to add the Hyper-V server role, I received an error.

Check Hyper-V and click Next.

I then clicked Add Features and…

This is where things failed: "Hyper-V cannot be installed: A hypervisor is already running."

By default, you cannot install a hypervisor under another hypervisor. However, there is a workaround. First, power down the VM and edit its .vmx file. You will need to add two lines, save the file, and then start up the VM.

hypervisor.cpuid.v0 = "FALSE"
mce.enable = "TRUE"

Once I had edited the .vmx file, restarted the VM, and tried to add the Hyper-V role again, I received another error, this time regarding my CPU type: "Hyper-V cannot be installed: The processor does not have required virtualization capabilities."

To correct this, I needed to power down the VM again and edit the virtual machine settings, specifically the processor values.

Once I changed the preferred mode to "Intel VT-x/EPT or AMD-V/RVI" and checked the box for "Virtualize Intel VT-x/EPT or AMD-V/RVI", I was able to successfully add the Hyper-V role to my VM and start provisioning additional VMs under Hyper-V.

If you experience any of these issues, I hope this post solves them like it did mine.

Configuring SIOS DataKeeper for multiple volume snapshots

Recently I had the opportunity to work with a client that was utilizing SIOS DataKeeper in a Windows Server Failover Cluster to create a readable secondary in order to offload reporting. SIOS DataKeeper is an application that synchronizes data between two volumes, allows you to present local storage to a WSFC as shared storage, and gives you the ability to take snapshots of the mirrored volumes.

Taking a snapshot makes the mirrored volume available to the OS on the secondary. You can then attach those databases to an instance of SQL Server and have your users connect for reporting. This is all doable with Standard Edition of SQL Server, which can save thousands of dollars on SQL Server licenses.
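
Once the snapshot makes the volumes visible to the secondary's OS, the database files on them can be attached to the reporting instance. A minimal sketch; the database name and file paths are hypothetical:

-- Hypothetical example: attach the data and log files exposed by the
-- snapshot to the reporting instance for reporting use.
CREATE DATABASE SalesReporting
ON (FILENAME = N'E:\Data\SalesReporting.mdf'),
   (FILENAME = N'G:\Logs\SalesReporting_log.ldf')
FOR ATTACH;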

Installing and configuring DataKeeper to mirror volumes and adding them to the WSFC was very simple and well documented. To keep the snapshot current with the previous day's data, I needed to schedule a process that would drop any existing snapshots and take a new one. I did this by using the built-in command-line tool EMCmd. To take a snapshot of the E and G drives on my secondary server, named AGNODE6, I needed to run:

EMCmd AGNODE6 TAKESNAPSHOT E G

To delete the snapshots, I would run:

EMCmd AGNODE6 DROPSNAPSHOT E G

I put together a batch file that would navigate to the proper folder and issue a drop snapshot to clear any existing snapshots and then take a new snapshot. This was scheduled daily on the secondary server.

The problem I encountered was dropping the snapshots during a failover event on the cluster. In order for the SQL Server clustered resource to fail over to the secondary, all snapshots had to be dropped first.

Refreshing the snapshots was straightforward, but how do I drop them in a timely fashion when the SQL Server clustered resource needs to fail over from the primary server (AGNODE5, in my case) to AGNODE6?

SIOS provides a sample VBScript named DKSnapshotCleanup that can be copied and modified to drop the snapshots on the secondary. For this to work, I needed a folder on each server and a copy of the modified DKSnapshotCleanup script. Next, I had to add a resource to the cluster, point it to the VBScript, and make my DataKeeper volume dependent on the new resource. The instructions in the VBScript are detailed and easy to follow; however, where it said to create a "Generic Application" and point it to the script, I had to choose "Generic Script" instead.

In my environment, I had drive E for data files and drive G for log files. In my DKSnapshotCleanup script, I needed to drop the snapshot for both volumes. To do this, I added two lines, one for each volume, toward the bottom of the script.

I assigned the DKSnapshotCleanup Script resource as a dependency on drive E.

When my cluster failed over to the secondary node and back, things didn't go as planned. Occasionally drive G would fail to come online, which failed the cluster.

I was able to quickly bring drive G back online; however, that isn't an acceptable course of action in production. This has to work consistently each time the cluster fails over.

I did some more testing and validated with SIOS support that the best course of action was to create a dependency for drive G on drive E.

This makes drive G wait for drive E to come online before it starts. Once I configured the second dependency, I was able to consistently fail the cluster over with both snapshots being dropped and both volumes coming online.

My overall experience with SIOS DataKeeper was positive. The support team was very quick to respond and the product was easy to install. Although I had a couple of configuration issues, I was able to resolve them quickly. Another cool thing about SIOS DataKeeper is that it works on both Windows and Linux; it is a great way to build in HA for storage without having to have a high-dollar SAN replication solution.

Issue publishing to SSRS 2012 with SSDT 2015

I was recently helping a client who was trying to use SQL Server Data Tools 2015 to update and generate reports that would be deployed to SQL Server Reporting Services 2012. When trying to deploy a report, the client would get the following error:

“[rsInvalidReportDefinition] The definition of this report is not valid or supported by this version of Reporting Services. The report definition may have been created with a later version of Reporting Services, or contain content that is not well-formed or not valid based on Reporting Services schemas.”

I knew that SSDT 2015 should be backwards compatible and able to work with multiple versions of SQL Server products, so I had to do a little research to figure out what exactly needed to be changed in order to publish a report to SSRS 2012.

In my lab, I created a new project named GetDatabases. This project was a basic report that would query a system table. The first thing I tried to change in the project was the TargetServerVersion, located under the project properties.

I updated the TargetServerVersion to "SQL Server 2008 R2, 2012, or 2014"; the default was "SQL Server 2016". I also updated the TargetServerURL to my development server running SSRS 2012. I made these changes to both the release and debug configuration options.

After making these changes, I was still unable to deploy the report. After a bit more research, I found a place to change the default deployment server version of the Business Intelligence Designer, located under Tools, then Options. The default was set to version 13.0; once I changed the version to 11.0 to match the server I was deploying to, I was able to successfully deploy my report to SSRS 2012.

It seems odd to me that I would have to change the overall behavior of SSDT instead of just the properties of my project; however, I confirmed with my client that they couldn't deploy until they modified the default deployment server version.

I hope this helps if you are having similar issues.

Calling all user group leaders! I want to present for you!

Late last year SQLskills made an offer to remotely present to user groups anywhere in the world. The response was incredible, and the SQLskills team has been presenting like crazy so far this year. Since I didn't join SQLskills until January, I was not part of the initial list of potential sessions that chapter leaders could pick from, but that hasn't stopped me from getting in on some of the fun. Over the past couple of months I have been able to remotely present to user groups in Alaska, Indiana, and Puerto Rico.

Getting first-hand experience with how rewarding it is to present remotely to areas that I otherwise wouldn't be able to reach has made me want to make my own call for user group leaders.

If you would like me to present remotely for you during the remainder of 2015, send me an email with:

  • Details of which user group you represent
  • The usual day of the month, time, and timezone of the user group
  • Which months you have available, starting in June 2015

What’s the catch? There is no catch. We’re just stepping up our community involvement this year and I am trying to contribute like the rest of my team, plus I love presenting :-)

I’m serious about anywhere in the world – it doesn’t matter what the timezone difference is – I’ll try to make it work.

Currently I have two sessions you can pick from.

Common SQL Server Mistakes and How to Avoid Them

Making mistakes is human nature; avoiding them comes from experience and shared wisdom from others. In this session, Tim Radney will share experiences he has encountered supporting multiple Fortune-level companies. Tim will share real-world experiences from environments he has worked on and the impact he has seen from administrators not knowing how to properly configure and administer SQL Server. In this session, you will learn many accepted best practices for installing, configuring, and supporting SQL Server to help your environment run as smoothly as possible. When you leave this session, you will have a checklist of items to take back to your shop to make sure your environment is configured properly.

Understanding SQL Backups and Restores
Having backups is important for organizations, but are you sure they will restore? Too often, Tim Radney has been brought in by organizations to help recover failed systems where administrators simply didn't know how to properly restore SQL backups, or where their backups wouldn't restore. In this session, Tim will go through the various types of backups that can be performed as well as demonstrate the proper process for restoring each of them. This isn't your basic session: Tim will not only cover the common full, differential, and log backups, he will also cover partial, filegroup, COPY_ONLY, and URL backups and restores, and encryption, as well as perform a demo of a "hack attach" for the ultimate tail-of-the-log restore.

I hope to hear from you,

Be careful when playing with features

I recently performed a health audit for a client and found several transaction logs that had grown dramatically larger than the data files. This is a fairly common thing to find when databases are in the full recovery model with no log backups, but that was not this client's issue. The databases were in full recovery and log backups were being taken every 15 minutes; however, the logs were not truncating and reusing existing space.

I looked for open transactions, assuming that was the issue, but that was a dead end. I decided to do some digging, looked at the database properties in sys.databases, and found that log_reuse_wait_desc was set to 'REPLICATION'. This was odd, since the client stated they do not use replication. I reported what I found to my client and was told that a long time ago they had toyed with replication while trying to build a reporting server and ended up killing the project.
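
Checking this is a quick query against sys.databases; in this case the wait description pointed straight at replication:

-- Check why each database's transaction log cannot be truncated.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
ORDER BY name;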

What didn't happen was cleaning up and removing replication from the databases once the project was cancelled. I am not a big replication guy, so this was a new area for me. With a quick search I was able to find sp_removedbreplication, which removes all replication objects from a database. I was able to remove replication from each of the databases by running the following:

EXEC sp_removedbreplication N'db_name';

Once replication was removed, the log_reuse_wait_desc changed to 'NOTHING' and the logs were able to truncate. This didn't solve all the problems with these logs, though. An adverse effect of the logs growing out of control was extremely large log files with a large number of virtual log files (VLFs). I followed Kimberly's advice in her post (8 steps to better transaction log throughput) and was able to shrink each log file, resetting the number of virtual log files, and then properly grow the log files to efficient sizes with more appropriate autogrowth settings.
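
The general shape of that cleanup, with a hypothetical logical file name and sizes (the right target size and growth increment depend entirely on the workload), looks like this:

-- Hedged sketch: shrink the bloated log to clear the excess VLFs,
-- then grow it back to an appropriate size with a sensible autogrowth.
USE db_name;
DBCC SHRINKFILE (db_name_log, 1);

ALTER DATABASE db_name MODIFY FILE (NAME = db_name_log, SIZE = 8GB);
ALTER DATABASE db_name MODIFY FILE (NAME = db_name_log, FILEGROWTH = 4GB);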

BTW, for an interesting twist on the REPLICATION value for log_reuse_wait_desc, see Paul’s post: REPLICATION preventing log reuse but no replication configured.