For Day 12 of this series, we start out with Query #27, which is Database Properties. This query retrieves information from the sys.databases catalog view, and from the sys.dm_os_performance_counters dynamic management view. Query #27 is shown in Figure 1.
Figure 1: Query #27 Database Properties
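The figure is not reproduced here, but a simplified sketch of this kind of database-properties query might look like the following. The column list is abbreviated, and the join to the 'Percent Log Used' performance counter is my assumption about how the Log Used % value is obtained; the actual Query #27 returns many more properties.

```sql
-- Sketch of a database-properties query in the spirit of Query #27
-- (column list abbreviated; the real query returns many more properties)
SELECT db.[name] AS [Database Name],
       db.recovery_model_desc AS [Recovery Model],
       db.log_reuse_wait_desc AS [Log Reuse Wait Description],
       lu.cntr_value AS [Log Used %],          -- from the 'Percent Log Used' counter
       db.[compatibility_level] AS [DB Compatibility Level],
       db.page_verify_option_desc AS [Page Verify Option],
       db.is_auto_create_stats_on, db.is_auto_update_stats_on,
       db.is_auto_shrink_on, db.is_auto_close_on
FROM sys.databases AS db WITH (NOLOCK)
LEFT OUTER JOIN sys.dm_os_performance_counters AS lu WITH (NOLOCK)
    ON db.[name] = lu.instance_name
   AND lu.counter_name = N'Percent Log Used'
   AND lu.[object_name] LIKE N'%Databases%'
ORDER BY db.[name] OPTION (RECOMPILE);
```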
This query gives you a lot of very useful information about all of your databases. I pay special attention to Log Reuse Wait Description and Log Used %, to spot problems that are preventing the space in the transaction log from being reused, which will eventually cause the log file to fill up and grow. I also look at the compatibility level (which is much more important in SQL Server 2014, with the new cardinality estimator) and the page verify option, which should always be CHECKSUM.
It is important to check the statistics-related properties, and to also make sure that auto shrink and auto close are not enabled. I don’t think I have ever seen an instance of SQL Server where I did not notice some database properties for multiple databases that were simply incorrect. This query is great for quickly spotting possible issues that need to be corrected.
Query #28 is Missing Indexes All Databases. This query retrieves information from the sys.dm_db_missing_index_group_stats dynamic management view, the sys.dm_db_missing_index_groups dynamic management view, and the sys.dm_db_missing_index_details dynamic management view about “missing” indexes that the SQL Server Query Optimizer would like to have. Query #28 is shown in Figure 2.
Figure 2: Query #28 Missing Indexes All Databases
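Again, the figure is not reproduced here, but these three DMVs join together in a standard way, so a sketch of this kind of missing-index query might look like this (the real Query #28 includes additional columns and formatting):

```sql
-- Sketch of a missing-index query joining the three DMVs mentioned above
-- (the real Query #28 returns additional columns and formatting)
SELECT DB_NAME(mid.database_id) AS [Database Name],
       mid.[statement] AS [Table],
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.user_seeks, migs.last_user_seek,
       migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
    ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_user_impact * migs.avg_total_user_cost * migs.user_seeks DESC
OPTION (RECOMPILE);
```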
This query is very useful, but also very easy to misinterpret and misuse. I have seen many novice DBAs and developers use the results of this query to pretty badly over-index their databases, which affects their database size and hurts insert, update, and delete performance. I like to focus on the last_user_seek column, and see how long ago that was. Was it a few seconds or minutes ago, or was it days or weeks ago?
I then start looking at the user_seeks, avg_total_user_cost, and avg_user_impact columns to get a sense for how often SQL Server thinks it wants this proposed index, how expensive it is not to have the index, and how much the query optimizer thinks the cost of the query would be reduced if it did have this index that it is requesting.
Next, I’ll look at any other proposed indexes on the same table to see if I can come up with a wider, consolidated index that covers multiple requested indexes. Finally, I’ll look at the existing indexes on that table, and look at the index usage metrics for that table to have a better idea of whether a new index would be a good idea, based on the volatility of that table.
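For that last step, one way to check the usage of the existing indexes on a table is to query sys.dm_db_index_usage_stats. This is my own sketch, not part of the numbered queries in this series, and 'YourTable' is a placeholder:

```sql
-- Sketch: checking existing index usage for one table before adding a new index
-- ('YourTable' is a placeholder; run this in the database you are evaluating)
SELECT OBJECT_NAME(s.[object_id]) AS [Table Name],
       i.[name] AS [Index Name],
       s.user_seeks, s.user_scans, s.user_lookups,
       s.user_updates   -- high updates vs. low reads suggests index overhead
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i
    ON s.[object_id] = i.[object_id]
   AND s.index_id = i.index_id
WHERE s.database_id = DB_ID()
  AND OBJECT_NAME(s.[object_id]) = N'YourTable'
ORDER BY s.user_updates DESC OPTION (RECOMPILE);
```

A volatile table (lots of user_updates relative to reads) raises the cost of every additional index, which is exactly why I check this before acting on any missing-index recommendation.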
Query #29 is VLF Counts. This query calls the DBCC LOGINFO command for each one of your system and user databases to get your current virtual log file (VLF) counts for your transaction logs. Query #29 is shown in Figure 3.
Figure 3: Query #29 VLF Counts
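Since the figure is not reproduced here, this sketch shows the general technique of looping over databases and capturing DBCC LOGINFO output. It assumes the SQL Server 2012+ output shape (which includes the RecoveryUnitId column); the real Query #29 may differ in the details:

```sql
-- Sketch of counting VLFs per database by capturing DBCC LOGINFO output
-- (assumes the SQL Server 2012+ column layout, which adds RecoveryUnitId)
DECLARE @vlf_counts TABLE ([Database Name] sysname, [VLF Count] int);
DECLARE @loginfo TABLE (RecoveryUnitId int, FileId int, FileSize bigint,
    StartOffset bigint, FSeqNo bigint, [Status] int, Parity int,
    CreateLSN numeric(38, 0));
DECLARE @db sysname;

DECLARE db_cur CURSOR FAST_FORWARD FOR
    SELECT [name] FROM sys.databases WHERE state_desc = N'ONLINE';
OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM @loginfo;
    INSERT INTO @loginfo
        EXEC (N'DBCC LOGINFO(''' + @db + N''') WITH NO_INFOMSGS');
    INSERT INTO @vlf_counts
        SELECT @db, COUNT(*) FROM @loginfo;
    FETCH NEXT FROM db_cur INTO @db;
END;
CLOSE db_cur;
DEALLOCATE db_cur;

SELECT * FROM @vlf_counts ORDER BY [VLF Count] DESC;
```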
Every time a SQL Server log file grows, a certain number of VLFs are added to the log file. By the way, the formula for this was changed in SQL Server 2014, as discussed by Paul Randal here.
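My reading of those rules, which you should verify against Paul Randal's post: before SQL Server 2014, a growth of less than 64MB added 4 VLFs, 64MB up to 1GB added 8, and 1GB or more added 16; from SQL Server 2014 on, a growth of 1/8 or less of the current log size adds just 1 VLF, and larger growths fall back to the old formula. As a T-SQL expression:

```sql
-- Sketch of the VLF-growth rules described above (my reading; verify
-- against Paul Randal's post). Values are placeholders for illustration.
DECLARE @growth_mb int = 1000, @current_log_mb int = 16000;
SELECT CASE
           WHEN @growth_mb <= @current_log_mb / 8 THEN 1  -- SQL Server 2014+ small-growth rule
           WHEN @growth_mb < 64   THEN 4                  -- pre-2014 formula from here down
           WHEN @growth_mb < 1024 THEN 8
           ELSE 16
       END AS [VLFs Added];
```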
You care about this because a high VLF count affects how long the recovery portion of a database restore takes. If you have ever watched the progress update in SSMS while you were restoring a large database, and noticed that it finally got to 100%, but then kept grinding away for quite a while after that, you have seen this effect in action. It is very frustrating!
It also affects how long “crash recovery” takes, which occurs for every database on the instance whenever the SQL Server Service starts. This comes into play whether it is just a standalone instance starting or when you fail over to another node with a traditional SQL Server failover cluster instance (FCI). If you have ever been frustrated about how long an FCI failover takes, keeping your VLF counts under control is one thing you can do to help speed up the process.
If you have a high VLF count in a database, it is usually pretty easy to correct. If you are in the FULL recovery model, you will want to run a transaction log backup. Then, you shrink the transaction log file. Often, you will have to repeat this sequence more than once. Sometimes you will even need to generate some log activity (by doing something like reorganizing an index or two) to get the active portion of the log to a place where a shrink works.
Once you have done all of this, your transaction log file is likely to be extremely small, perhaps just a few MB in size. You will want to immediately manually grow it in relatively large increments (such as 1000MB, 2000MB, or 4000MB) until you have it at the desired size. You will also want to make sure the auto growth increment is set to a fixed size in MB (such as 1000MB, 2000MB, or 4000MB) so that any future auto growths don’t add an excessive number of VLFs.
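The whole sequence can be sketched as follows. All of the database, file, and path names are placeholders, and the sizes are just example values from the ranges above:

```sql
-- Sketch of the shrink-and-regrow sequence described above, for a database
-- in the FULL recovery model (all names, paths, and sizes are placeholders)
BACKUP LOG [YourDatabase] TO DISK = N'D:\Backups\YourDatabase_log.trn';
USE [YourDatabase];
DBCC SHRINKFILE (N'YourDatabase_log', 1);
-- You may need to repeat the backup + shrink steps (and generate some log
-- activity) before the shrink actually works

-- Regrow manually in large fixed increments; repeat, raising SIZE by up to
-- 4000MB each time, until the log reaches the desired size
ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 4000MB);

-- Set a fixed autogrowth increment (not a percentage) so future growths
-- don't add an excessive number of VLFs
ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 2000MB);
```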