New Pluralsight course: SQL Server: Understanding Database Fundamentals (98-364)

On October 29th, 2019, Pluralsight published my latest course, SQL Server: Understanding Database Fundamentals (98-364). This is my fourth course for Pluralsight. Here is the official course description:

Learn the fundamentals of designing, using, and maintaining a SQL Server database. This course is applicable to anyone preparing for the 98-364 exam: Understanding Database Fundamentals.

My goal in creating this course was to provide training for those who are just getting started with SQL Server and, as an added bonus, to help individuals prepare for the Microsoft certification exam 98-364.

The course starts with an overall introduction to SQL Server and the various versions and editions available. Next, I cover core database concepts that you’ll need to know when getting started with SQL Server. At this point, the viewer should have a solid understanding of what SQL Server is and how databases are used. I then cover how to create database objects and how to manipulate data. From there I shift over to data storage to discuss normalization and constraints. I conclude the course by demonstrating how to administer a database.

Skip-2.0 Backdoor Malware – SQL Server

There was a flutter of headlines this week about a new vulnerability/risk affecting SQL Server 2012 and SQL Server 2014. The malware was reported to allow an attacker to connect using a “magic password”. Of course, the headlines made this sound really bad, and the image of thousands of DBAs rushing to patch SQL Server came to mind.

After reading through the many headlines and articles, it quickly became clear that this threat isn’t as big of a deal as the headlines made it out to be. While it does target SQL Server 2012 and SQL Server 2014, the malware only works if the attacker is already an administrator on the server. If an attacker has already gotten to that point, then things are already really bad for you.

It is reported that a cyber-espionage group out of China called the Winnti Group is responsible. As of now, there are no reports of it being used against an organization.

What should you be doing or how can you protect against this?

  • Stay current: patch your servers, both OS and SQL Server (see the quick check after this list)
  • Perform vulnerability scans to look for known issues (vulnerability assessment is available in SSMS and Azure, as well as through third-party tools)
  • Audit your servers and environments for suspicious activities
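
As a quick illustration of the first point, here is a minimal sketch for checking the build and servicing level of an instance; the values returned will of course vary per server, and ProductUpdateLevel may be NULL on older builds:

  -- Check the current SQL Server build, servicing level, and edition
  SELECT
      SERVERPROPERTY('ProductVersion')     AS ProductVersion,     -- build number
      SERVERPROPERTY('ProductLevel')       AS ProductLevel,       -- RTM, SP1, SP2, ...
      SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel, -- CU level, where available
      SERVERPROPERTY('Edition')            AS Edition;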

Skip-2.0 is just a reminder to organizations to keep their eyes open. Everyone should be keeping up with patching and securing their environments. Since skip-2.0 can only target an already compromised server, the only thing DBAs can really do is ensure their systems are patched.

Azure SQL Database Serverless

Azure SQL Database serverless (in preview) is a new compute tier for single databases that automatically scales compute based on workload demand and bills only for the amount of compute used per second. Serverless also allows databases to pause during inactive periods; while a database is paused, only storage is billed. Databases automatically resume when activity returns.

Customers select a compute autoscaling range and an auto-pause delay parameter. This is a price-performance optimized tier for single databases that have intermittent, unpredictable usage patterns and can tolerate some delay in compute warm-up after idle periods. For databases with higher average usage, elastic pools are the better option.
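
As a rough sketch (the database name and service objective below are just placeholders), an existing single database can be moved to a serverless compute size with T-SQL; serverless service objectives use the GP_S_ prefix with the max vCores in the name, while the minimum vCores and auto-pause delay are typically set through the portal, CLI, or PowerShell:

  -- Move an existing single database to serverless: General Purpose, Gen5, max 2 vCores
  -- (connect to the Azure SQL logical server; 'MyAppDb' is a placeholder name)
  ALTER DATABASE MyAppDb
  MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_2');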

I see this feature as a great option for lightly utilized databases or for those with unpredictable workloads that need to scale automatically when the workload demands it.

I’m looking forward to seeing how this feature matures and develops during preview. If you’ve played with the preview, share your experience in the comments.

SQL Database Instance Pools

A new (in-preview) resource in Azure SQL Database, instance pools, was just announced, providing a cost-efficient way to migrate smaller SQL Server instances to the cloud.

Many departmental SQL Servers are virtual and run on a smaller scale. It is not uncommon to find 2-4 vCPU SQL Servers running business-critical workloads. Many of these workloads contain multiple user databases, which makes them candidates for Azure SQL Managed Instance. Currently, the smallest vCore option for a managed instance is 4.

With the introduction of instance pools, a customer can pre-provision compute resources according to their total migration requirement. For example, if a customer needed 8 vCores, they could deploy one 4 vCore instance and two 2 vCore instances.

Prior to instance pools, a customer would have to consolidate smaller workloads into larger instances. This could be problematic for a number of reasons: in many cases, workloads were isolated due to security concerns, elevated privileges that a vendor required, business continuity requirements, or other factors. Now customers can keep the same level of isolation that they’ve had on-premises with these smaller VMs.

I see this as a big win for customers with smaller workloads that have been wanting to migrate to Azure SQL Managed Instance. It essentially eliminates the concern about having to consolidate workloads for migrations.

SQLintersection Spring 2019 Conference

I am very excited to be speaking at my ninth consecutive SQLintersection conference. The Spring show this year is at Walt Disney World Swan Resort. I’m honored to be co-presenting two workshops with good friend David Pless as well as presenting three sessions.

David and I start our week on Monday with a full day workshop on Performance Tuning and Optimization for Modern Workloads (SQL Server 2017+, Azure SQL Database, and Managed Instance).

Over the next three days I present sessions covering An Introduction to Azure Managed Instances, Getting Started with Azure Infrastructure as a Service, and Migration Strategies.

David and I end our week on Friday with an all day workshop on SQL Server Reporting Services and Power BI Reporting Solutions.

SQLintersection is one of my favorite conferences that focuses on the Microsoft Data Platform. The speakers and sponsors are all approachable and willing to talk to you about your issues and offer advice. As a speaker and attendee, I always learn something new and make new friendships and connections.

I hope to see you there.

What is Azure SQL Database Hyperscale?

Azure SQL Database has a new service tier called Hyperscale. Hyperscale is currently in public preview and offers the ability to scale past the 4TB limit for Azure SQL Database. Hyperscale is only available in the vCore-based purchasing model.

Hyperscale offers customers a highly scalable storage and compute performance tier built on Azure architecture to scale out the storage and compute for an Azure SQL Database. By separating storage and compute, Hyperscale allows storage limits to scale well beyond what is available in the General Purpose and Business Critical service tiers.

You’ve probably already figured out that Hyperscale is primarily intended for those customers who are using, or would like to use, Azure SQL Database but have massive storage requirements. Hyperscale has currently been tested with databases of up to 100TB. That’s correct: you can have up to a 100TB Azure SQL Database, though only in preview for now. While Hyperscale is primarily optimized for OLTP workloads, it also supports hybrid and analytical workloads.

With Hyperscale supporting databases up to 100TB (this is what Microsoft has tested so far), backups could be problematic to take. Microsoft offers near-instantaneous database backups for Hyperscale by leveraging file snapshots stored in Azure Blob storage. This is done with no I/O impact on compute, regardless of the size of the database. It also enables fast database restores; I’ve seen a 40+ TB restore that took minutes!

Hyperscale offers rapid scale-out: within the Azure Portal you can configure up to 4 read-only nodes for offloading your read workload, and these can also be used as hot standbys. At the same time, you can scale up compute resources to handle heavy workloads, and this can be done in constant time. When you no longer need the scaled-up compute resources, scale back down. You can also expect higher overall performance with Hyperscale due to higher log throughput and much faster transaction commit times, no matter the size of your database.

While Hyperscale is in public preview, it is strongly recommended not to run any production workload on it yet. The reason is that, currently, once you migrate to Hyperscale you cannot move back to the General Purpose or Business Critical tiers. For testing Hyperscale, you should make a copy of your production database and migrate the copy to the Hyperscale service tier.
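
A minimal sketch of that approach, using placeholder database names and an example service objective, looks like this:

  -- 1) Make a copy of the production database (run in master on the logical server)
  CREATE DATABASE SalesDb_HsTest AS COPY OF SalesDb;

  -- 2) Migrate the copy to the Hyperscale service tier (Gen5, 2 vCores shown as an example)
  ALTER DATABASE SalesDb_HsTest
  MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_2');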

How to create a linked server to Azure SQL Database via SQL Server Management Studio

Often, I need to create a linked server to an Azure SQL Database to run queries against it or schedule maintenance. Creating a linked server to an Azure SQL Database is slightly different than how you’ve likely been creating linked servers to other SQL Servers in your environment.

When you expand ‘Server Objects’, right-click ‘Linked Servers’, and select ‘New Linked Server…’ to create a linked server, most DBAs’ initial instinct is to enter the Azure server name as the linked server name and to use the server type of SQL Server. This would be incorrect, although it would still let you create the linked server with those values. Instead, use a friendly name for the Azure SQL Database as the linked server name. Then choose ‘Other data source’ and list your Azure server as the ‘Data source’. Next, in ‘Catalog’, specify the Azure SQL Database name.

[Image: AzureDBlinkedServer]

Next click on the security page and choose ‘Be made using this security context’ if you want to persist the connection information. Type in your username and password and click OK.

[Image: AzureDBlinkedServer2]

 

You should now see your linked server under ‘Server Objects’, ‘Linked Servers’. Expand your linked server and you’ll be able to browse the catalog to see your tables and views. You can now reference the linked server as needed.
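
If you prefer to script the same thing rather than use the dialog, a sketch using sp_addlinkedserver and sp_addlinkedsrvlogin follows; the linked server name, Azure server, database, table, and credentials below are all placeholders:

  -- Create the linked server with a friendly name pointing at the Azure logical server and database
  EXEC sp_addlinkedserver
      @server     = N'AzureDB',                        -- friendly linked server name
      @srvproduct = N'',
      @provider   = N'SQLNCLI',                        -- or MSOLEDBSQL if installed
      @datasrc    = N'myserver.database.windows.net',  -- Azure logical server (data source)
      @catalog    = N'MyAzureDatabase';                -- Azure SQL Database name (catalog)

  -- Persist the connection credentials (maps local logins to the remote SQL login)
  EXEC sp_addlinkedsrvlogin
      @rmtsrvname  = N'AzureDB',
      @useself     = N'False',
      @locallogin  = NULL,
      @rmtuser     = N'myuser',
      @rmtpassword = N'StrongPasswordHere';

  -- Example query through the linked server using a four-part name
  SELECT TOP (10) * FROM AzureDB.MyAzureDatabase.dbo.MyTable;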

[Image: AzureDBlinkedServer3]

Azure SQL Managed Instance – Business Critical Tier

Microsoft has announced the GA date for the Business Critical tier: the Azure SQL Managed Instance Business Critical tier will be generally available on December 1st, 2018. For those customers needing the super-fast storage capabilities provided by local SSDs on the instance, you now have a release date. Business Critical also provides the ability to have a readable secondary. Technically, you have three secondaries: two are non-readable, and the third can be made available for read scale-out. All you need to do is enable read scale-out and update your connection string for read-intent workloads. There is no additional cost for using the readable secondary, so all customers should take advantage of this feature. Offloading some of your read workload to the secondary can free up resources on your primary.
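
As a quick, hedged illustration (the server and database names are made up), the client-side change is just the read-intent option in the connection string, and from T-SQL you can confirm which replica a session landed on:

  -- Connection string for read-intent workloads (illustrative):
  --   Server=myminstance.xxxxxx.database.windows.net;Database=SalesDb;ApplicationIntent=ReadOnly;...

  -- Verify whether the current session is on the readable secondary
  SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS Updateability;
  -- Returns READ_ONLY on the read-scale secondary and READ_WRITE on the primary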

Business Critical is designed for mission-critical business apps with high I/O requirements; it supports high availability with the highest level of storage and compute redundancy. Something to consider is that Business Critical leverages an Availability Group for HA, while General Purpose is built upon Windows Failover Clustering. Both tiers provide HA; however, you may experience a slight disruption during a General Purpose failover, which still provides good HA, whereas with Business Critical you get almost seamless failover.

Mark your calendars for December 1st; until then, keep enjoying the preview pricing for your proofs of concept.

If you are considering a migration to Azure SQL Database or Managed Instance and need help, reach out.

Azure Training – London

I am very excited about my upcoming IEAzure class being held in London, September 10th and 11th, at the Marriott in Kensington.

This is a content rich class covering SQL Server running on Azure Virtual Machines (Azure IaaS) as well as Azure SQL Database and Azure SQL Managed Instance (PaaS).

You read that correctly: we’ve expanded the IEAzure course to include Managed Instance. This module will cover what sets Managed Instance apart from on-premises SQL Server and Azure SQL Database. I’ll cover everything that is needed to provision a Managed Instance (this isn’t difficult, but many people don’t get it right the first time), how you can scale Managed Instance, and migration strategies. You’ll learn about the performance tiers, the cost of each, and what sets the tiers apart (hint: built-in HA options).

This is going to be a fun class full of demos and discussions.

If you are in the area, don’t miss this opportunity; register today.

In addition to IEAzure, the SQLskills team will be in London for two weeks in September for IEPTO1, IECAG, and IEPTO2.

Why You Need Baselines

Regularly when working with clients, I implement a basic baseline package to collect key metrics. At a minimum I like to capture CPU utilization, page life expectancy, disk I/O, wait statistics, and a database inventory. This allows me to track and trend what normal CPU utilization looks like, as well as to see if they are having memory pressure during the day by collecting PLE. Disk I/O metrics are good to have in order to know what normal latency looks like during certain intervals of the day. Capturing wait statistics at regular intervals is a must in order to start tracking down any perceived performance issues (please tell me where it hurts!).
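
To give a feel for the kind of collection involved, here is a simplified sketch (not the actual package described below) of pulling Page Life Expectancy and a wait statistics snapshot from the DMVs:

  -- Page Life Expectancy from the buffer manager performance counters
  SELECT cntr_value AS page_life_expectancy_sec
  FROM sys.dm_os_performance_counters
  WHERE counter_name = N'Page life expectancy'
    AND object_name LIKE N'%Buffer Manager%';

  -- Snapshot of the top waits (cumulative since the last restart or stats clear)
  SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
  FROM sys.dm_os_wait_stats
  ORDER BY wait_time_ms DESC;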

A quick internet search will show several packages that others have created and blogged about. All the ones I’ve found collect some of the same metrics I am collecting, but not completely the way I like to look at them. My process is simple, the way I like it. I manually create a database called SQLskillsMonitor so that I can place it where it should reside. I then run my script PopulateDB to create the tables and stored procedures. Next I run the script CreateJobs to create the SQL Server Agent jobs. Each job is prefixed with SQLskills-Monitoring-… I also include a purge job that you can modify to delete any data older than the specified time frame; I have it defaulted to 90 days. That is it. No fuss and no complicated requirements.

You are welcome to customize the process any way you like. You can change the DB name, add additional tables and stored procedures to collect anything else you find useful. I’ve included the key things I like to track for performance baselines as well as baselines for upgrades and migrations. If you do add additional metrics, come back and comment on this post and share what you’ve done so others can benefit too.

This in no way replaces or eliminates the need for a nice third-party monitoring solution like SentryOne’s SQL Sentry, but in a pinch, when my clients don’t have anything else in place, this helps.