
SQL Server Diagnostic Information Queries Detailed, Day 12

For Day 12 of this series, we start out with Query #27, which is Database Properties. This query retrieves information from the sys.databases catalog view, and from the sys.dm_os_performance_counters dynamic management view. Query #27 is shown in Figure 1.

-- Recovery model, log reuse wait description, log file size, log usage size  (Query 27) (Database Properties)
-- and compatibility level for all databases on instance
SELECT db.[name] AS [Database Name], db.recovery_model_desc AS [Recovery Model], db.state_desc, db.containment_desc,
db.log_reuse_wait_desc AS [Log Reuse Wait Description],
CONVERT(DECIMAL(18,2), ls.cntr_value/1024.0) AS [Log Size (MB)], CONVERT(DECIMAL(18,2), lu.cntr_value/1024.0) AS [Log Used (MB)],
CAST(CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) AS DECIMAL(18,2)) * 100 AS [Log Used %],
db.[compatibility_level] AS [DB Compatibility Level], db.page_verify_option_desc AS [Page Verify Option],
db.is_auto_create_stats_on, db.is_auto_update_stats_on, db.is_auto_update_stats_async_on, db.is_parameterization_forced,
db.snapshot_isolation_state_desc, db.is_read_committed_snapshot_on, db.is_auto_close_on, db.is_auto_shrink_on,
db.target_recovery_time_in_seconds, db.is_cdc_enabled, db.is_published, db.is_distributor, db.is_encrypted,
db.group_database_id, db.replica_id, db.is_memory_optimized_elevate_to_snapshot_on,
db.delayed_durability_desc, db.is_auto_create_stats_incremental_on,
db.is_query_store_on, db.is_sync_with_backup,
db.is_supplemental_logging_enabled, db.is_remote_data_archive_enabled
FROM sys.databases AS db WITH (NOLOCK)
INNER JOIN sys.dm_os_performance_counters AS lu WITH (NOLOCK)
ON db.name = lu.instance_name
INNER JOIN sys.dm_os_performance_counters AS ls WITH (NOLOCK)
ON db.name = ls.instance_name
WHERE lu.counter_name LIKE N'Log File(s) Used Size (KB)%'
AND ls.counter_name LIKE N'Log File(s) Size (KB)%'
AND ls.cntr_value > 0 OPTION (RECOMPILE);

-- Things to look at:
-- How many databases are on the instance?
-- What recovery models are they using?
-- What is the log reuse wait description?
-- How full are the transaction logs?
-- What compatibility level are the databases on?
-- What is the Page Verify Option? (should be CHECKSUM)
-- Is Auto Update Statistics Asynchronously enabled?
-- Make sure auto_shrink and auto_close are not enabled!

Figure 1: Query #27 Database Properties

This query gives you a lot of very useful information about all of your databases. I pay special attention to Log Reuse Wait Description and Log Used %, to spot problems that are preventing the space in the transaction log from being reused, which will eventually cause the log file to fill up and grow. I also look at the compatibility level (which is much more important in SQL Server 2014, with the new cardinality estimator) and the page verify option, which should always be CHECKSUM.
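If you find databases where the page verify option is not CHECKSUM, it is a quick, one-line fix. Here is a minimal sketch, using a made-up database name:

-- Hypothetical example: switch the page verify option to CHECKSUM
-- (YourDatabase is a placeholder name)
ALTER DATABASE [YourDatabase] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;

Keep in mind that existing data pages only get a checksum the next time they are modified and written back out to disk.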

It is important to check the statistics-related properties, and also to make sure that auto shrink and auto close are not enabled. I don’t think I have ever looked at an instance of SQL Server without noticing some database properties, on multiple databases, that were simply incorrect. This query is great for quickly spotting possible issues that need to be corrected.
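For example, if this query reveals databases with auto shrink or auto close enabled, turning them off is straightforward (again, YourDatabase is a placeholder):

-- Hypothetical example: disable auto shrink and auto close
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF WITH NO_WAIT;
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF WITH NO_WAIT;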

 

Query #28 is Missing Indexes All Databases. This query retrieves information from the sys.dm_db_missing_index_group_stats, sys.dm_db_missing_index_groups, and sys.dm_db_missing_index_details dynamic management views about “missing” indexes that the SQL Server Query Optimizer thinks it would like to have. Query #28 is shown in Figure 2.

-- Missing Indexes for all databases by Index Advantage  (Query 28) (Missing Indexes All Databases)
SELECT CONVERT(decimal(18,2), user_seeks * avg_total_user_cost * (avg_user_impact * 0.01)) AS [index_advantage],
migs.last_user_seek, mid.[statement] AS [Database.Schema.Table],
mid.equality_columns, mid.inequality_columns, mid.included_columns,
migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
ON mig.index_handle = mid.index_handle
ORDER BY index_advantage DESC OPTION (RECOMPILE);

-- Getting missing index information for all of the databases on the instance is very useful
-- Look at last user seek time, number of user seeks to help determine source and importance
-- Also look at avg_user_impact and avg_total_user_cost to help determine importance
-- SQL Server is overly eager to add included columns, so beware
-- Do not just blindly add indexes that show up from this query!!!

Figure 2: Query #28 Missing Indexes All Databases

This query is very useful, but also very easy to misinterpret and misuse. I have seen many novice DBAs and developers use the results of this query to pretty badly over-index their databases, which increases their database size and hurts insert, update, and delete performance. I like to focus on the last_user_seek column, and see how long ago that was. Was it a few seconds or minutes ago, or was it days or weeks ago?
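One simple way to narrow the results down is to filter on last_user_seek. This sketch restricts Query #28 to proposed indexes that have been wanted recently (the 24-hour window is an arbitrary example value):

-- Sketch: only show missing indexes with a user seek in the last 24 hours
SELECT migs.last_user_seek, mid.[statement] AS [Database.Schema.Table],
mid.equality_columns, mid.inequality_columns, mid.included_columns, migs.user_seeks
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
ON mig.index_handle = mid.index_handle
WHERE migs.last_user_seek > DATEADD(HOUR, -24, SYSDATETIME())
ORDER BY migs.last_user_seek DESC OPTION (RECOMPILE);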

I then start looking at the user_seeks, avg_total_user_cost, and avg_user_impact columns to get a sense of how often SQL Server thinks it wants this proposed index, how expensive the queries that would use it currently are, and by what percentage the query optimizer thinks their cost would be reduced if it had the index it is requesting.
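To make that concrete: if Query #28 reported equality_columns of [CustomerID], inequality_columns of [OrderDate], and included_columns of [TotalDue] for a table, the conventional translation into an index definition would look something like this (all object names here are made up for illustration):

-- Hypothetical translation of one missing index recommendation:
-- equality columns first, then inequality columns, then included columns
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON Sales.Orders (CustomerID, OrderDate)
INCLUDE (TotalDue);

Remember the warning in the query comments, though: this is exactly the kind of statement you should not run blindly.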

Next, I’ll look at any other proposed indexes on the same table to see if I can come up with a wider, consolidated index that covers multiple requested indexes. Finally, I’ll look at the existing indexes on that table, and look at the index usage metrics for that table to have a better idea of whether a new index would be a good idea, based on the volatility of that table.
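For that last step, a sketch of the kind of query I mean, run in the database that owns the candidate table (Sales.Orders is a placeholder name), might look like this:

-- Sketch: usage metrics for the existing indexes on a single table
SELECT OBJECT_NAME(i.[object_id]) AS [Table Name], i.[name] AS [Index Name],
ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
FROM sys.indexes AS i WITH (NOLOCK)
LEFT OUTER JOIN sys.dm_db_index_usage_stats AS ius WITH (NOLOCK)
ON i.[object_id] = ius.[object_id] AND i.index_id = ius.index_id
AND ius.database_id = DB_ID()
WHERE i.[object_id] = OBJECT_ID(N'Sales.Orders')
ORDER BY ius.user_updates DESC OPTION (RECOMPILE);

A high user_updates count relative to seeks and scans suggests a volatile table where any new index will carry a real maintenance cost.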

 

Query #29 is VLF Counts. This query runs the DBCC LOGINFO command for each one of your system and user databases to get the current virtual log file (VLF) counts for your transaction logs. Query #29 is shown in Figure 3.

-- Get VLF Counts for all databases on the instance (Query 29) (VLF Counts)
-- (adapted from Michelle Ufford)
CREATE TABLE #VLFInfo (RecoveryUnitID int, FileID  int,
                       FileSize bigint, StartOffset bigint,
                       FSeqNo      bigint, [Status]    bigint,
                       Parity      bigint, CreateLSN   numeric(38));

CREATE TABLE #VLFCountResults(DatabaseName sysname, VLFCount int);

EXEC sp_MSforeachdb N'Use [?];

                INSERT INTO #VLFInfo
                EXEC sp_executesql N''DBCC LOGINFO([?])'';

                INSERT INTO #VLFCountResults
                SELECT DB_NAME(), COUNT(*)
                FROM #VLFInfo;

                TRUNCATE TABLE #VLFInfo;'

SELECT DatabaseName, VLFCount
FROM #VLFCountResults
ORDER BY VLFCount DESC;

DROP TABLE #VLFInfo;
DROP TABLE #VLFCountResults;

-- High VLF counts can affect write performance
-- and they can make full database restores and crash recovery take much longer
-- Try to keep your VLF counts under 200 in most cases (depending on log file size)

-- Important change to VLF creation algorithm in SQL Server 2014
-- https://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/

Figure 3: Query #29 VLF Counts

Every time a SQL Server transaction log file grows, a certain number of VLFs are added to the log file. Under the longstanding algorithm, a growth of up to 64MB adds four VLFs, a growth of more than 64MB and up to 1GB adds eight VLFs, and a growth of more than 1GB adds sixteen VLFs. Starting with SQL Server 2014, a growth that is less than 1/8 of the current log file size adds just a single VLF, as discussed by Paul Randal in the post linked in the query comments above.

The reason you care about this is that having a higher number of VLFs affects how long the recovery portion of a database restore takes. If you have ever watched the progress indicator in SSMS while restoring a large database, and noticed that it finally got to 100% but then kept grinding away for quite a while after that, you have seen this effect in action. It is very frustrating!

It also affects how long “crash recovery” takes, which occurs for every database on the instance whenever the SQL Server service starts. This comes into play whether it is just a standalone instance starting up or a failover to another node of a traditional SQL Server failover cluster instance (FCI). If you have ever been frustrated by how long an FCI failover takes, keeping your VLF counts under control is one thing you can do to help speed up the process.

If you have a high VLF count in a database, it is usually pretty easy to correct. If you are in the FULL recovery model, you will want to run a transaction log backup. Then, you shrink the transaction log file. Often, you will have to repeat this sequence more than once. Sometimes you will even need to generate some log activity (by doing something like reorganizing an index or two) to get the active portion of the log to a place where a shrink works.
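A minimal sketch of that sequence, assuming the FULL recovery model and placeholder names for the database, log file, and backup path:

-- Hypothetical example: back up the log, then shrink the log file
BACKUP LOG [YourDatabase]
TO DISK = N'D:\SQLBackups\YourDatabase_Log.trn';

USE [YourDatabase];
DBCC SHRINKFILE (N'YourDatabase_log', 1);
-- Repeat the backup/shrink sequence if the VLF count is still high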

Once you have done all of this, your transaction log file is likely to be extremely small, perhaps just a few MB in size. You will want to immediately grow it back out manually, in relatively large increments (such as 1000MB, 2000MB, or 4000MB), until it reaches the desired size. You will also want to make sure the auto growth increment is set to a fixed size in MB (such as 1000MB, 2000MB, or 4000MB) so that any future auto growths don’t add an excessive number of VLFs.
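Here is a sketch of that regrowth step, again with placeholder names and sizes:

-- Hypothetical example: grow the log back out in fixed chunks,
-- and set a fixed autogrowth increment
USE [master];
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 4000MB);

ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8000MB, FILEGROWTH = 2000MB);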
