3D Printing Ear Savers Update

I started 3D printing ear savers on April 5th. I am fast approaching the two-month mark and wanted to share a few stats from this adventure.

First, when I started out, I had a single Ender 3 Pro printer. I quickly realized that with the number of requests for ear savers, I needed more printers. This was an investment I was comfortable making due to how badly our healthcare people needed these. I was able to find 3 more Ender 3 Pro’s available at Quip Print, a disabled Veteran run business in Tampa, Florida. To date, I’ve also purchased 45 spools of PETG filament from them.

Quickly I had 4 printers up and operational, running mostly 24/7. The first month I was getting up every few hours to take prints off the printers and start up more batches. As I got caught up with printing, I’ve been able to skip a cycle at night and get decent sleep.

Since April 5th, here are some stats:
  • 12,005 ear savers printed
  • 43 kg of PETG filament used – 94.80 lbs for us Americans
  • 14,190 meters of filament used or 46,555 ft – 8.81 miles for us Americans
  • 1,982 meters – length of all the ear savers laid end to end
  • 6,502 feet – length of all the ear savers laid end to end – approx 1.23 miles
  • 307 – number of USPS stamps used
  • 22 – number of small flat rate USPS shipping boxes sent
  • 7 – number of medium flat rate USPS shipping boxes sent
  • 50 – US states I’ve shipped ear savers to (plus one box to the UK)
  • 173 – Number of unique requests for ear savers
  • 270 – Approx number of ear savers I can print per spool of filament
  • 1 – smallest request for ear savers
  • 3000 – largest request for ear savers

As of June 1st, I am completely caught up on shipping out ear savers. I haven’t slowed down much on printing and have a decent amount in inventory. Please reach out if you or someone you know needs ear savers to take the pressure off their ears. I am printing with a material that is waterproof and food safe, meaning they can be sterilized with disinfectant. They are perfect for healthcare workers or anyone who has to wear a mask.

These are printed and shipped within the US free of charge. I can ship outside the US too; I simply ask that you cover the cost of shipping, which is 3-4x more expensive.

I would like to give thanks to all who have chipped in to help me fund this project. Without the support of my SQL family, Scouting family, and friends, I would not have been able to scale like I did. Also thanks to Paul Randal, Kathi Kellenberger, Angela Tidwell, and Ben Miller for joining in printing these. Paul started the same time I did and has printed around 5k ear savers. I’m confident that if we added it all up, the SQL community has contributed over 20k ear savers to those in need.

As long as people are still needing the ear savers, I can get access to material to print them, and have the funds to ship, I’ll keep printing.

Azure SQL Managed Instance Default Values for Query Store and TDE

A common saying about Azure is that it is always changing, and it's true: Microsoft is constantly improving the environment by adding new features, tweaking interfaces, tweaking features, and much more.

One recent change I came across is that any newly created database in Managed Instance now has Query Store enabled by default. Previously, the behavior was modeled after on-premises SQL Server, where it is off by default. Query Store is a database-level feature and had to be enabled per database. For newly created databases it is now on by default, and it can be turned off if the customer doesn’t want it.
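Checking and changing this yourself is straightforward. Here is a quick T-SQL sketch; the database name is a placeholder:

```sql
-- Check the Query Store state for the current database
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;

-- Turn Query Store off if you don't want it on
-- (database name is a placeholder)
ALTER DATABASE [YourDatabase] SET QUERY_STORE = OFF;
```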

Another recent change is that TDE is now enabled by default on any newly created database. If you haven’t created your own key, it will use the service-managed key. A bug I’ve found is that in SSMS, even though you set ‘Encryption Enabled’ to false, it will still enable TDE. This is not the behavior when using T-SQL.
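You can verify the encryption state and disable TDE with T-SQL. A quick sketch, with the database name as a placeholder:

```sql
-- See which databases have an encryption key and their current state
SELECT DB_NAME(dek.database_id) AS database_name,
       dek.encryption_state,   -- 3 = encrypted
       dek.encryptor_type      -- e.g. ASYMMETRIC KEY for the service-managed key
FROM sys.dm_database_encryption_keys AS dek;

-- Turn TDE off via T-SQL (the path that behaves correctly, per the text)
ALTER DATABASE [YourDatabase] SET ENCRYPTION OFF;
```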

A side effect of TDE being enabled by default with the service-managed key is that you can no longer take COPY_ONLY backups. You’ll have to remove TDE or switch to using a customer-managed key.

With these two changes, default behavior is becoming more in line with Azure SQL Database, where Query Store and TDE have been enabled by default for several years.

A key point to mention is that currently this new behavior applies only to newly created databases. Any database that is migrated to Managed Instance inherits its prior settings, so if TDE was not in use, it will not be turned on, and likewise for Query Store.

If you are considering a move to Azure SQL Managed Instance or Azure SQL Database and have questions, feel free to send me an email with your thoughts or concerns.

Gaining Access to SQL Server When Locked Out

Getting locked out of a SQL Server instance can happen in a number of situations. One of the most common scenarios I’ve encountered is when a SQL Server is moved from one domain to another without the domains being trusted or a local SQL admin account in place. I recently encountered another incident where a DBA was attempting to fail over a log-shipped instance to another instance. Part of their run book included a script to disable a set of users to block production access to the instance. The problem was that the production instance was on a cluster, unlike the development and QA systems where the script had been tested. Disabling the users in production took down the instance, preventing the tail-log backups from occurring. We had to get the instance back up in order to take those final backups.

What do you do if you find that you’re locked out of a SQL Server instance? The process I’ve been using for the past 10 years is to start SQL Server in single-user mode, launch SQLCMD, and create a new login with the sysadmin role. In the recent case I just mentioned, we only had to run an ALTER statement to enable the needed account. Once I have the proper account, I can restart the SQL Server service and do what I need to do.

What are the steps to start SQL Server in single user mode and create a new sysadmin account?

I open a command prompt as an administrator with a Windows account that is a local admin. For a default instance, I then go through the following steps (omitting the quotes):
"net stop mssqlserver" and press enter
"net start mssqlserver /m" and press enter
"sqlcmd" and press enter
"CREATE LOGIN security WITH PASSWORD = 'Rec0very123'" and press enter
"GO" and press enter
"EXEC sp_addsrvrolemember 'security', 'sysadmin'" and press enter
"GO" and press enter
"EXIT" and press enter
"net stop mssqlserver" and press enter
"net start mssqlserver" and press enter

I’ve now stopped and restarted SQL Server in single-user mode, created a new login called security, and granted it the sysadmin role. The SQL Server service was then stopped and started again in multi-user mode. At this point, I can log in with the security login and complete whatever task needs to be completed.

How is a named instance any different? You’ll need to look at the services to determine the actual service name, MSSQL$SQL2017 for example. For a named instance, I then go through the following steps (omitting the quotes):

"net stop MSSQL$SQL2017" and press enter
"net start MSSQL$SQL2017 /m" and press enter
"sqlcmd -S localhost\sql2017" and press enter – localhost can also be the server name
"CREATE LOGIN security WITH PASSWORD = 'Rec0very123'" and press enter
"GO" and press enter
"EXEC sp_addsrvrolemember 'security', 'sysadmin'" and press enter
"GO" and press enter
"EXIT" and press enter
"net stop MSSQL$SQL2017" and press enter
"net start MSSQL$SQL2017" and press enter

I’ve included a screenshot of going through the steps on a default instance. For a named instance, you’ll just have to specify the named instance service name to start/stop the service, as well as the machine and instance name when connecting via SQLCMD.

When Updating Statistics is too Expensive

Hopefully by now, anyone working with SQL Server professionally is aware of how important statistics are. If you aren’t sure why, or you need a refresher, check out Kimberly’s blog post “Why are Statistics so Important”. Here’s a hint: the query optimizer uses statistics to help make decisions when creating an execution plan, and out-of-date statistics can mean a much less optimal plan.

Erin Stellato wrote an excellent blog post, “Updating Statistics with Ola Hallengren’s Script”, where she covers updating statistics with database maintenance plans, T-SQL, and Ola Hallengren’s IndexOptimize script. In the article, Erin discusses various options for updating statistics and how an approach of only updating statistics that have had row modifications is less invasive. She further explains how, in SQL Server 2008 R2 SP2 and SQL Server 2012 SP1, Microsoft introduced sys.dm_db_stats_properties, a dynamic management function that tracks modifications for each statistic.

This DMF allows users to build logic into their processes to update statistics only after a specific percentage of change has occurred, which can further reduce the overhead of updating statistics.

I’ve been using Ola Hallengren’s IndexOptimize procedure for over a decade and recommend his process to my clients. The IndexOptimize procedure has logic built in to deal with index fragmentation based on the percentage of fragmentation. This allows you to not worry about minimal fragmentation, reorganize if the indexes aren’t heavily fragmented, and rebuild if the fragmentation level is over a certain threshold. It is a better process than blindly reorganizing or rebuilding all indexes regardless of their fragmentation level.

Over the years, I’ve found that many clients as well as DBAs aren’t aware that a reorganize does not update statistics, whereas an index rebuild does. If you are using logic to reorganize and rebuild without a separate statistics update process, your statistics may be slowly aging and could become problematic. Fortunately, Ola’s IndexOptimize procedure allows for passing parameters to update statistics. These include:

  • @UpdateStatistics – which statistics to update (ALL, INDEX, or COLUMNS)
  • @OnlyModifiedStatistics – only update statistics that have had row modifications
  • @StatisticsModificationLevel – only update statistics once a given percentage of rows has changed

Typically, I’ve only had to configure @UpdateStatistics = ‘ALL’ and @OnlyModifiedStatistics = ‘Y’. This updates all statistics that have had any row modifications. This has worked for me, my previous employer, and my clients for many years.
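As a sketch, a typical nightly call looks something like the following; the fragmentation parameters shown are illustrative values, not a recommendation from this post:

```sql
-- Nightly maintenance: handle fragmentation by threshold, and update any
-- statistic that has had at least one row modification
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,                                -- ignore light fragmentation
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,                                -- illustrative thresholds
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';
```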

Recently I encountered an issue with a very large database containing numerous tables with hundreds of millions of rows each, as well as hundreds of other tables with a million rows or less. Every table would have some level of data change per day. This was causing statistics to be updated nightly on nearly every table, even those with minimal change. This took extra time and was generating additional IO that the customer needed to minimize. I updated the process, changing @OnlyModifiedStatistics to @StatisticsModificationLevel = ‘5’ for 5%. Now statistics would only be updated after a 5% data change, or so I thought.

When I made this change and reviewed what would now be updated, I was surprised to see that some large tables were still going to have statistics updated, even though the modification counter was well below the 5% threshold. It turns out that statistics will also be updated when the number of modified rows has reached a decreasing, dynamic threshold: SQRT(number of rows * 1000), where SQRT is the square root. For example, for a table with 9,850,010 rows, 5% would be 492,500 rows, yet the modification counter was only 134,017 rows. If we plug the row count in, SQRT(9,850,010 * 1000) = 99,247, which is well below the 492k value. Is this a bad thing? Absolutely not; 99,247 is still much larger than 1. Using only @OnlyModifiedStatistics, this nearly 10M-row table would have statistics updated after a single modification.
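You can check the math from the example above directly in T-SQL:

```sql
-- Compare the fixed 5% threshold with the dynamic SQRT(rows * 1000) threshold
DECLARE @rows bigint = 9850010;

SELECT
    @rows                                 AS table_rows,
    CAST(@rows * 0.05 AS bigint)          AS five_percent_threshold, -- 492,500
    CAST(SQRT(@rows * 1000.0) AS bigint)  AS dynamic_threshold;      -- 99,247
```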

Having @StatisticsModificationLevel available is a nice tool to have at your disposal for those situations where you need to fine-tune your maintenance process.

Managing Virtual Log Files in Azure Managed Instance

Maintaining the number of virtual log files (VLFs) in a transaction log is a task routinely performed in the analysis of SQL Server instances. Numerous blog posts have been dedicated to the issue of maintaining an efficient number of VLFs. Kimberly wrote about the impact of too few or too many VLFs, and Paul wrote about how Microsoft changed the algorithm in SQL Server 2014 for how many VLFs are created when a log file is grown. Way back in 2005, Kimberly shared 8 steps to better transaction log throughput. That post is where Kimberly explained how to reset the VLFs in a transaction log by backing up the log, shrinking the log, and then likely having to repeat the process several more times.

Since Azure Managed Instance manages the backups for you, are you still able to manage VLF fragmentation? Let’s find out.

I connected to my managed instance and created a test database. Next, I created a table with three columns, each a uniqueidentifier.

CREATE TABLE [dbo].[VLFTest] ( -- table name chosen for this demo
[ID1] [uniqueidentifier] NULL,
[ID2] [uniqueidentifier] NULL,
[ID3] [uniqueidentifier] NULL
);

I modified the database log file to auto-grow at 1 MB instead of the default 16 MB, and then ran DBCC LOGINFO() to see that I had only 4 VLFs.
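Those two steps can be sketched as follows; the logical log file name here is an assumption, so check sys.database_files for yours:

```sql
-- Shrink the auto-growth increment to 1 MB to exaggerate VLF creation
-- (logical file name is an assumption; see sys.database_files)
ALTER DATABASE [VLFTest]
    MODIFY FILE (NAME = N'VLFTest_log', FILEGROWTH = 1MB);

-- Count the VLFs: one row is returned per VLF
DBCC LOGINFO();
```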

I inserted 150k records to cause the database and log file to grow, inserting NEWID() into each of the three columns:

INSERT INTO [dbo].[VLFTest] VALUES (NEWID(), NEWID(), NEWID());
GO 150000

I checked the VLF count again and had 247 VLFs. While 247 VLFs may not produce any noticeable performance impact, the log file was quite small relative to that count; in this exercise, the log file only grew to approx 250 MB.

How many VLFs are too many? I’m personally not concerned until I start seeing 1,000+, unless the log file is small. In this case, with the log file size in MB being close to the number of VLFs, I would recommend resetting the VLFs and then manually growing the log file to 256 MB or possibly 512 MB. I would also recommend increasing the auto-grow value from the 1 MB I configured to 64 MB or 128 MB. Keep in mind, if the log had grown to 250 MB during normal operation, these would be decent values; if the log file were 20 GB, I would recommend larger ones.

The known method to reset VLFs in SQL Server is to back up the log, shrink the log file, and then repeat a few more times if needed. In Azure Managed Instance, backups are handled for you. You can take COPY_ONLY backups, however those will not have the same effect.

So how can we reset VLFs? Since I know log backups are happening automatically, I decided to simply try shrinking the log file by running DBCC SHRINKFILE (2, 1).

The VLFs were reduced to 29.

I waited for a short period (5 or so minutes) to allow an additional log backup to occur and ran DBCC SHRINKFILE (2,1) again. This time the VLFs were reduced to 8.

At this point, I could have waited a little longer and tried shrinking again to reach an even smaller number, or I could manually grow the log file to make sure I have a good balance of VLFs to the size of the file.
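The final manual grow might look like this sketch, using the sizes discussed above (again, the logical file name is an assumption):

```sql
-- Grow the shrunken log back out in one operation to get a clean VLF
-- layout, and set a saner auto-grow increment while we're at it
ALTER DATABASE [VLFTest]
    MODIFY FILE (NAME = N'VLFTest_log', SIZE = 256MB, FILEGROWTH = 64MB);
```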

For brand new databases, a properly sized auto-grow setting can help minimize having too many VLFs. At the same time, if you have an idea of how big the log file should be, you can follow Kimberly’s guidance by growing the file in 4 GB or 8 GB increments to get a good balance of VLFs across your transaction log.

Pluralsight Courses

I’ve recently submitted my proposal for my 5th Pluralsight course and am anxiously waiting to hear back before I can begin working on the new course. This past week, Paul sent out the numbers to the team letting us know the hours our courses were viewed during November. Being a data guy, I decided that after nearly 3.5 years of having courses available, I should crunch some numbers. I currently have 4 courses available.

My goal is to record at least two courses, three if possible in 2020. Creating the content, recording, and publishing a course is a lot of work. After crunching some numbers and realizing that my 4 courses have been viewed nearly 12,000 hours, I’m even more excited to record in 2020. I regularly get emails from viewers thanking me for the content and sharing with me how the courses have helped their career.

You can follow me on Pluralsight to get new notifications of courses that I develop as well as see my entire list of courses. I am also on Twitter and share new course announcements there as well.

I also plan to create more YouTube videos that I can use as references for the user group and conference sessions that I give. I generally make my slide decks available, however many times attendees would like to see the demos again, or be able to show someone on their team the demo. I guess you can call these my 2020 resolutions – generate more training videos, period!

Never stop learning folks!

Capturing Throughput Usage in SQL Server

I recently posted an article at sqlperformance.com about the importance of selecting the proper size Azure VM because of limits placed on throughput based on the VM size. I was sharing this article during my “Introduction to Azure Infrastructure as a Service” session during SQLintersection when Milan Ristovic asked how best to calculate throughput. I told Milan and the audience how I do it and that I would write a quick blog post about it. Thanks for the inspiration to write another blog Milan!

Most data professionals are familiar with collecting latency information using sys.dm_io_virtual_file_stats, or by using Paul Randal’s script from his post “Capturing IO latencies for a period of time”, which uses sys.dm_io_virtual_file_stats to capture a snapshot, wait for a delay, capture another snapshot, and compute the differences. Capturing latency information lets you know the time delay between a request for data and the return of the data.

I’ve been using Paul’s script from his blog post for over 5 years to determine disk latency. A few years ago, I made a few changes to also calculate disk throughput. This is important to know for migrations to different storage arrays, and especially when moving to the cloud. If you simply benchmark your existing storage, that tells you what your storage subsystem is capable of. That is great information to have; however, you also need to know what your workload is currently using. Just because your on-premises storage supports 16,000 IOPS and 1,000 MB/sec of throughput doesn’t mean you are consuming that many resources. Your workload may only be consuming 70 MB/sec of throughput during peak times. You need a baseline to know what your actual needs are.

When I’m capturing latency and throughput data, I like to capture small increments. Paul’s script defaults to 30 minutes; I like 5-minute segments to have a more granular view. What I’ve added to Paul’s script to capture throughput is to take num_of_bytes_written and divide by 1,048,576 (1024 * 1024) to calculate the MB value. I do the same for num_of_bytes_read. I then repeat each calculation, dividing by the number of seconds I am waiting between the two snapshots. In this case, since I am waiting 5 minutes, I’ll use 300, for example: (num_of_bytes_written / 1,048,576) / 300 AS mb_per_sec_written.
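To illustrate the expressions involved, here is a standalone sketch against the DMV itself. Note this reads cumulative values since SQL Server startup; the real modification applies these same expressions to the delta between two snapshots, with 300 being the seconds in the 5-minute window:

```sql
-- Cumulative read/write throughput per database file since startup.
-- In the snapshot-delta version, these expressions run on the differences
-- between two captures, so mb_per_sec_* reflects the capture window.
SELECT
    DB_NAME(vfs.database_id)                            AS database_name,
    vfs.file_id,
    vfs.num_of_bytes_read    / 1048576.0                AS mb_read,
    vfs.num_of_bytes_written / 1048576.0                AS mb_written,
    (vfs.num_of_bytes_read    / 1048576.0) / 300        AS mb_per_sec_read,
    (vfs.num_of_bytes_written / 1048576.0) / 300        AS mb_per_sec_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
```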

I add my changes at the end of Paul’s script, before the final FROM statement, just after [mf].[physical_name].

Modifications to Paul’s filestats script

With these changes I can easily see how many MB the workload is reading and writing to disk during the capture time, as well as the MB/s to tell me the overall throughput.

I hope this helps you to baseline your existing SQL Server workload. For my customers, I’ve further modified this script to write the results to a table with a date and time stamp so I can better analyze the workload.

New Pluralsight course: SQL Server: Understanding Database Fundamentals (98-364)

On October 29th, 2019, Pluralsight published my latest course, SQL Server: Understanding Database Fundamentals (98-364). This makes four courses that I have done for Pluralsight. Here is the official course description:

Learn the fundamentals of designing, using, and maintaining a SQL Server database. This course is applicable to anyone preparing for the 98-364 exam: Understanding Database Fundamentals.

My goal for creating this course is to provide training for those who are just getting started with SQL Server, and as an added bonus, to help prepare individuals to pass the Microsoft certification exam 98-364.

The course starts with an overall introduction to SQL Server and the various versions and editions available. Next, I cover core database concepts that you’ll need to know when getting started with SQL Server. At this point, the viewer should have a solid understanding of what SQL Server is and how databases are used. I then cover how to create database objects and how to manipulate data. Then I shift over to data storage to discuss normalization and constraints. I conclude the course with sharing and demonstrating how to administer a database.

Skip-2.0 Backdoor Malware – SQL Server

There was a flutter of headlines this week about a new vulnerability/risk affecting SQL Server 2012 and SQL Server 2014. The malware was reported to allow an attacker to steal a “magic password”. Of course, the headlines made this sound really bad, and the image of thousands of DBAs rushing to patch SQL Server came to mind.

After reading over the many headlines, it quickly became clear that this threat isn’t as big of a deal as they made it out to be. While the malware does target SQL Server 2012 and SQL Server 2014, in order for it to work, the attacker must already be an administrator on the server. If an attacker has already gotten to that point, things are already really bad for you.

It is reported that a cyber-espionage group out of China called the Winnti Group is responsible. As of now, there are no reports of this being used against an organization.

What should you be doing or how can you protect against this?

  • Stay current, patch your servers, both OS and SQL Server
  • Perform vulnerability scans to look for known issues (vulnerability assessment is available in SSMS and Azure) as well as with third-party tools
  • Audit your servers and environments for suspicious activities

Skip-2.0 is just a reminder to organizations to keep their eyes open. Everyone should be keeping up with patching and securing their environments. Since skip-2.0 can only target an already compromised server, the only thing DBAs can really do is ensure their systems are patched.

Azure SQL Database Serverless

A new compute tier, Azure SQL Database serverless (in preview), allows single databases to automatically scale up and down based upon workload demand, billing only for the amount of compute used per second. Serverless also allows databases to pause during inactive periods; while a database is paused, only storage is billed. Databases are automatically resumed when activity returns.

Customers get to select a compute autoscaling range and an autopause delay parameter. This is a price-performance optimized tier for single databases that have intermittent, unpredictable usage patterns and can tolerate some delay in compute warm-up after idle periods. For databases with higher average usage, elastic pools are the better option.
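Creating a serverless database can be sketched in T-SQL, run against the logical server's master database. The database name is a placeholder, and since the feature is in preview, service objective names may change; the autopause delay is configured through the portal or ARM rather than T-SQL:

```sql
-- GP_S_Gen5_2 = General Purpose, Serverless, Gen5 hardware, 2 max vCores
CREATE DATABASE [MyServerlessDb] (
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_S_Gen5_2'
);
```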

I see this feature as a great option for low utilized databases or for those with unpredictable workloads that need to be able to automatically scale when the workload demands it.

I’m looking forward to seeing how this feature matures and develops during preview. If you’ve played with the preview, share your experience in the comments.