Pluralsight Courses

I’ve recently submitted my proposal for my 5th Pluralsight course and am anxiously waiting to hear back before I can begin working on the new course. This past week, Paul sent out the numbers to the team letting us know the hours our courses have been viewed during November. Being a data guy, I decided that after nearly 3.5 years of having courses available, I should crunch some numbers. I currently have 4 courses available.

My goal is to record at least two courses in 2020, three if possible. Creating the content, recording, and publishing a course is a lot of work. After crunching some numbers and realizing that my 4 courses have been viewed nearly 12,000 hours, I’m even more excited to record in 2020. I regularly get emails from viewers thanking me for the content and sharing how the courses have helped their careers.

You can follow me on Pluralsight to get notifications of new courses that I develop, as well as see my entire list of courses. I am also on Twitter and share new course announcements there as well.

I also plan to create more YouTube videos that I can use as references for the user group and conference sessions that I give. I generally make my slide decks available; however, many times attendees would like to see the demos again or be able to show someone on their team the demo. I guess you can call these my 2020 resolutions – generate more training videos, period!

Never stop learning folks!

Capturing Throughput Usage in SQL Server

I recently posted an article at sqlperformance.com about the importance of selecting the proper size Azure VM because of limits placed on throughput based on the VM size. I shared this article during my “Introduction to Azure Infrastructure as a Service” session at SQLintersection, when Milan Ristovic asked how best to calculate throughput. I told Milan and the audience how I do it and that I would write a quick blog post about it. Thanks for the inspiration to write another blog post, Milan!

Most data professionals are familiar with collecting latency information using sys.dm_io_virtual_file_stats, or by using Paul Randal’s script from his post “Capturing IO latencies for a period of time,” which uses sys.dm_io_virtual_file_stats to capture a snapshot, waits for a set delay, captures another snapshot, and computes the differences. Capturing latency information lets you know the time delay between a request for data and the return of the data.

I’ve been using Paul’s script from his blog post for over 5 years to determine disk latency. A few years ago I made a few changes to also calculate disk throughput. This is important to know for migrations to different storage arrays, and especially when moving to the cloud. If you simply benchmark your existing storage, that tells you what your storage subsystem is capable of. That is great information to have; however, you also need to know what your workload is currently using. Just because your on-premises storage supports 16,000 IOPS and 1,000MB/s of throughput doesn’t mean you are consuming that many resources. Your workload may only be consuming 70MB/s of throughput during peak times. You need a baseline to know what your actual needs are.

When I’m capturing latency and throughput data, I like to capture small increments. Paul’s script defaults to 30 minutes; I like 5-minute segments to get a more granular view. What I’ve added to Paul’s script to capture throughput is to take num_of_bytes_written and divide by 1,048,576 (1024 * 1024) to calculate the MB value. I do the same for num_of_bytes_read. I then divide each of those values by the number of seconds I am waiting between the two snapshots. In this case, since I am waiting 5 minutes, I’ll use 300, for example: (num_of_bytes_written/1,048,576)/300 AS mb_per_sec_written.

I add my changes at the end of Paul’s script before the final FROM statement, just after [mf].[physical_name].

Modifications to Paul’s filestats script
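
If you don’t have Paul’s script handy, here is a minimal sketch of the same snapshot-and-diff technique with my throughput columns added. This is a simplified stand-in, not Paul’s full script; adjust the WAITFOR delay and the divisor together if you change the capture window.

-- Snapshot the cumulative file stats, wait 5 minutes, then diff the counters.
IF OBJECT_ID(N'tempdb..#snapshot') IS NOT NULL
    DROP TABLE [#snapshot];

SELECT [database_id], [file_id], [num_of_bytes_read], [num_of_bytes_written]
INTO [#snapshot]
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:05:00'; -- 300 seconds between snapshots

SELECT
    DB_NAME([vfs].[database_id]) AS [database_name],
    [vfs].[file_id],
    ([vfs].[num_of_bytes_read] - [s].[num_of_bytes_read]) / 1048576.0 AS [mb_read],
    ([vfs].[num_of_bytes_written] - [s].[num_of_bytes_written]) / 1048576.0 AS [mb_written],
    (([vfs].[num_of_bytes_read] - [s].[num_of_bytes_read]) / 1048576.0) / 300 AS [mb_per_sec_read],
    (([vfs].[num_of_bytes_written] - [s].[num_of_bytes_written]) / 1048576.0) / 300 AS [mb_per_sec_written]
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS [vfs]
INNER JOIN [#snapshot] AS [s]
    ON [s].[database_id] = [vfs].[database_id]
    AND [s].[file_id] = [vfs].[file_id];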

With these changes I can easily see how many MB the workload is reading and writing to disk during the capture time, as well as the MB/s to tell me the overall throughput.

I hope this helps you to baseline your existing SQL Server workload. For my customers, I’ve further modified this script to write the results to a table with a date and time stamp so I can better analyze the workload.
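
As a rough idea of what that persisted version looks like, the results land in a logging table shaped something like this (the table and column names are my own, not from Paul’s script):

-- Hypothetical logging table; the final SELECT in the capture script
-- becomes an INSERT...SELECT into this table.
CREATE TABLE [dbo].[FileStatsCapture]
(
    [capture_time] DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    [database_name] SYSNAME NOT NULL,
    [file_id] INT NOT NULL,
    [mb_read] DECIMAL(18, 2) NOT NULL,
    [mb_written] DECIMAL(18, 2) NOT NULL,
    [mb_per_sec_read] DECIMAL(18, 2) NOT NULL,
    [mb_per_sec_written] DECIMAL(18, 2) NOT NULL
);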

New Pluralsight course: SQL Server: Understanding Database Fundamentals (98-364)

On October 29th, 2019, Pluralsight published my latest course, SQL Server: Understanding Database Fundamentals (98-364). This makes four courses that I have done for Pluralsight. Here is the official course description:

Learn the fundamentals of designing, using, and maintaining a SQL Server database. This course is applicable to anyone preparing for the 98-364 exam: Understanding Database Fundamentals.

My goal for creating this course is to provide training for those who are just getting started with SQL Server, and as an added bonus, to help prepare individuals to pass the Microsoft certification exam 98-364.

The course starts with an overall introduction to SQL Server and the various versions and editions available. Next, I cover core database concepts that you’ll need to know when getting started with SQL Server. At this point, the viewer should have a solid understanding of what SQL Server is and how databases are used. I then cover how to create database objects and how to manipulate data. Then I shift over to data storage to discuss normalization and constraints. I conclude the course by sharing and demonstrating how to administer a database.

Skip-2.0 Backdoor Malware – SQL Server

There was a flutter of headlines this week about a new vulnerability/risk with SQL Server 2012 and SQL Server 2014. The malware was reported to allow an attacker to connect using a “magic password”. Of course, the headlines made this sound really bad, and the image of thousands of DBAs rushing to patch SQL Server came to mind.

After reading over the many headlines, it quickly became clear that this threat isn’t as big of a deal as the headlines made it out to be. While it does target SQL Server 2012 and SQL Server 2014, in order for the malware to work, the attacker must already be an administrator on the server. If an attacker has already gotten to that point, then things are already really bad for you.

It is reported that a cyber-espionage group out of China called the Winnti Group is responsible. As of now, there are no reports of this being used against an organization.

What should you be doing or how can you protect against this?

  • Stay current, patch your servers, both OS and SQL Server
  • Perform vulnerability scans to look for known issues (vulnerability assessment is available in SSMS and Azure), or use third-party tools
  • Audit your servers and environments for suspicious activities

Skip-2.0 is just a reminder to organizations to keep their eyes open. Everyone should be keeping up with patching and securing their environments. Since skip-2.0 can only target an already compromised server, the only thing DBAs can really do is ensure their systems are patched.

Azure SQL Database Serverless

A new compute tier for single databases has been made available that allows them to automatically scale based upon workload demand. Azure SQL Database serverless (in preview) scales compute up and down based upon workload and bills only for the amount of compute used per second. Serverless also allows databases to pause during inactive periods; while a database is paused, only storage is billed. Databases are automatically resumed when activity returns.

Customers select a compute autoscaling range and an autopause delay parameter. This is a price-performance optimized tier for single databases that have intermittent, unpredictable usage patterns and can tolerate some delay in compute warm-up after idle periods. For databases with higher average usage, elastic pools are the better option.
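
For reference, a serverless single database can be created in T-SQL by choosing one of the serverless service objectives. This is a minimal sketch; the database name is a placeholder, GP_S_Gen5_1 is the smallest serverless objective from the preview documentation, and the autopause delay itself is configured through the portal, PowerShell, or CLI rather than T-SQL.

-- Run against the master database of your Azure SQL logical server.
-- GP_S_Gen5_1 = General Purpose, Serverless, Gen5 hardware, 1 vCore max.
CREATE DATABASE [MyServerlessDb]
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_S_Gen5_1'
);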

I see this feature as a great option for lightly utilized databases, or for those with unpredictable workloads that need to be able to automatically scale when the workload demands it.

I’m looking forward to seeing how this feature matures and develops during preview. If you’ve played with the preview, share your experience in the comments.

SQL Database Instance Pools

A new in-preview resource in Azure SQL Database, instance pools, was just announced; it provides a cost-efficient way to migrate smaller SQL Server instances to the cloud.

Many departmental SQL Servers are virtual and run on a smaller scale. It is not uncommon to find 2-4 vCPU SQL Servers running business critical workloads. Many of these workloads contain multiple user databases, which makes them candidates for Azure SQL Managed Instance. Currently, the smallest vCore option for a managed instance is 4.

With the introduction of instance pools, a customer can pre-provision compute resources according to their total migration requirement. For example, if a customer needed 8 vCores, they could deploy one 4 vCore instance and two 2 vCore instances inside the pool.

Prior to instance pools, a customer would have to consolidate smaller workloads into larger instances. This could be problematic for a number of reasons. In many cases, workloads were isolated due to security concerns, elevated privileges that a vendor required, or business continuity requirements. Now customers can keep the same level of isolation that they’ve had on-premises with these smaller VMs.

I see this as a big win for customers with smaller workloads that have been wanting to migrate to Azure SQL Managed Instance. This essentially eliminates the concern about having to consolidate workloads for migration.

SQLintersection Spring 2019 Conference

I am very excited to be speaking at my ninth consecutive SQLintersection conference. The Spring show this year is at Walt Disney World Swan Resort. I’m honored to be co-presenting two workshops with good friend David Pless as well as presenting three sessions.

David and I start our week on Monday with a full-day workshop on Performance Tuning and Optimization for Modern Workloads (SQL Server 2017+, Azure SQL Database, and Managed Instance).

Over the next three days, I present sessions covering An Introduction to Azure Managed Instances, Getting Started with Azure Infrastructure as a Service, and Migration Strategies.

David and I end our week on Friday with an all-day workshop on SQL Server Reporting Services and Power BI Reporting Solutions.

SQLintersection is one of my favorite conferences that focuses on the Microsoft Data Platform. The speakers and sponsors are all approachable and willing to talk to you about your issues and offer advice. As a speaker and attendee, I always learn something new and make new friendships and connections.

I hope to see you there.

What is Azure SQL Database Hyperscale?

Azure SQL Database has a new service tier called Hyperscale. Hyperscale is currently in public preview and offers the ability to scale past the 4TB limit for Azure SQL Database. Hyperscale is only available in the vCore-based purchasing model.

Hyperscale offers customers a highly scalable storage and compute performance tier built on Azure architecture to scale out the storage and compute for an Azure SQL Database. By separating storage and compute, Hyperscale allows storage limits to scale well beyond what is available in the General Purpose and Business Critical service tiers.

You’ve probably already figured out that Hyperscale is primarily intended for customers who are using, or would like to use, Azure SQL Database but have massive storage requirements. Currently, Hyperscale has been tested with databases up to 100TB. That’s correct, you can have up to a 100TB Azure SQL Database, though currently only in preview. While Hyperscale is primarily optimized for OLTP workloads, it also supports hybrid and analytical workloads.

With Hyperscale offering databases up to 100TB (this is what Microsoft has tested so far), backups could be problematic to take. Microsoft offers near-instantaneous database backups for Hyperscale by leveraging file snapshots stored in Azure Blob storage. This is done with no IO impact on compute, regardless of the size of the database. It also enables fast database restores. I’ve seen a 40+ TB restore that took minutes!

Hyperscale offers rapid scale-out, meaning within the Azure Portal you can configure up to 4 read-only nodes for offloading your read workload; these can also be used as hot standbys. At the same time, you can scale up compute resources to handle heavy workloads, and this can be done in constant time. When you no longer need the scaled-up compute resources, scale back down. You can also expect higher overall performance with Hyperscale due to higher log throughput and much faster transaction commit times, no matter the size of your database.

While Hyperscale is in public preview, it is strongly recommended not to run any production workload on it yet. The reason is that, currently, once you migrate a database to Hyperscale, you cannot move it back to the General Purpose or Business Critical tiers. To test Hyperscale, you should make a copy of your production database and migrate the copy to the Hyperscale service tier.
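
As a rough sketch, both steps can be done in T-SQL. The database names are examples, and HS_Gen5_4 is one of the preview Hyperscale service objectives (Gen5 hardware, 4 vCores):

-- Run in the master database of your Azure SQL logical server.
CREATE DATABASE [MyDb_HsTest] AS COPY OF [MyDb];
GO
-- The copy runs asynchronously; once it completes, move only the copy to Hyperscale.
ALTER DATABASE [MyDb_HsTest] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_4');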

How to create a linked server to Azure SQL Database via SQL Server Management Studio

Often, I need to create a linked server to an Azure SQL Database to run queries against it or schedule maintenance. Creating a linked server to an Azure SQL Database is slightly different from how you’ve likely been creating linked servers to other SQL Servers in your environment.

When expanding ‘Server Objects’, right-clicking ‘Linked Servers’, and selecting ‘New Linked Server…’ to create a linked server, most DBAs’ initial instinct is to list the linked server name as the Azure server name and to use the server type of SQL Server. This would be incorrect, although it would still let you create the linked server with those values. Instead, use a friendly name for the Azure SQL Database as the linked server name. Then choose ‘Other data source’ and list your Azure server as the ‘Data source’. In the ‘Catalog’ field, you’ll need to specify the Azure SQL Database name.

AzureDBlinkedServer

Next, click on the Security page and choose ‘Be made using this security context’ if you want to persist the connection information. Type in your username and password and click OK.

AzureDBlinkedServer2


You should now see your linked server under ‘Server Objects’, ‘Linked Servers’. Expand your linked server and you’ll be able to browse the catalog to see your tables and views. You can now reference the linked server as needed.

AzureDBlinkedServer3
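
If you’d rather script the linked server than click through SSMS, here is a T-SQL equivalent of the steps above. The linked server name, Azure server, database, and credentials are all placeholders:

-- Create the linked server using a friendly name (the 'other data source' approach).
EXEC sys.sp_addlinkedserver
    @server     = N'AzureDB',                          -- friendly linked server name
    @srvproduct = N'',
    @provider   = N'SQLNCLI',                          -- SQL Server Native Client provider
    @datasrc    = N'yourserver.database.windows.net',  -- Azure server (data source)
    @catalog    = N'YourDatabase';                     -- Azure SQL Database name (catalog)

-- Persist the security context ('Be made using this security context').
EXEC sys.sp_addlinkedsrvlogin
    @rmtsrvname  = N'AzureDB',
    @useself     = N'FALSE',
    @locallogin  = NULL,
    @rmtuser     = N'yourlogin',
    @rmtpassword = N'yourpassword';

-- Reference the linked server with four-part naming.
SELECT TOP (10) * FROM [AzureDB].[YourDatabase].[dbo].[YourTable];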

Azure SQL Managed Instance – Business Critical Tier

Microsoft has announced the GA date for the Business Critical tier. The Azure SQL Managed Instance Business Critical tier will be generally available on December 1st, 2018. For those customers who need the ‘super-fast’ storage capabilities provided by local SSDs on the instance, you now have a release date. Business Critical also provides the ability to have a readable secondary. Technically, you have three secondaries: two are non-readable, and the third can be made available for read scale-out. All you need to do is enable Read Scale-Out and update your connection string for read-intent workloads. There is no additional cost for using the readable secondary, so all customers should take advantage of this feature. Offloading some of your read workload to the secondary can free up resources on your primary.
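
To route connections to the readable secondary, add the read-intent hint to your application’s connection string. A sketch, with placeholder server, database, and credentials:

Server=yourmi.yourdnszone.database.windows.net;Database=YourDatabase;User ID=yourlogin;Password=yourpassword;ApplicationIntent=ReadOnly;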

Business Critical is designed for mission-critical business apps with high I/O requirements, and it supports high availability with the highest level of storage and compute redundancy. Something to consider is that Business Critical leverages an Availability Group for HA, while General Purpose is built upon Windows Failover Clustering. Both tiers provide HA; however, you may experience a slight disruption during a General Purpose failover, which still provides good HA, while with Business Critical you get an almost seamless failover, providing really good HA.

Mark your calendars for December 1st; until then, keep enjoying the preview pricing for your proofs of concept.

If you are considering a migration to Azure SQL Database or Managed Instance and need help, reach out.