SQL Server Diagnostic Information Queries Detailed, Day 25

For Day 25 of this series, we start out with Query #57, which is Buffer Usage. This query retrieves information from the sys.allocation_units object catalog view, the sys.dm_os_buffer_descriptors dynamic management view and the sys.partitions object catalog view about buffer pool usage in the current database. Query #57 is shown in Figure 1.

-- Breaks down buffers used by current database by object (table, index) in the buffer cache  (Query 57) (Buffer Usage)
-- Note: This query could take some time on a busy instance
SELECT OBJECT_NAME(p.[object_id]) AS [Object Name], p.index_id,
CAST(COUNT(*)/128.0 AS DECIMAL(10, 2)) AS [Buffer size(MB)],
COUNT(*) AS [BufferCount], p.Rows AS [Row Count],
p.data_compression_desc AS [Compression Type]
FROM sys.allocation_units AS a WITH (NOLOCK)
INNER JOIN sys.dm_os_buffer_descriptors AS b WITH (NOLOCK)
ON a.allocation_unit_id = b.allocation_unit_id
INNER JOIN sys.partitions AS p WITH (NOLOCK)
ON a.container_id = p.hobt_id
WHERE b.database_id = CONVERT(int, DB_ID())
AND p.[object_id] > 100
GROUP BY p.[object_id], p.index_id, p.data_compression_desc, p.[Rows]
ORDER BY [BufferCount] DESC OPTION (RECOMPILE);

-- Tells you what tables and indexes are using the most memory in the buffer cache
-- It can help identify possible candidates for data compression

Figure 1: Query #57 Buffer Usage

This query shows you which tables and indexes are using the most buffer pool space in the current database. This is very important information to understand if you are under internal memory pressure, or you are seeing high read latency for your data file(s). The query also displays the data compression status for the index.
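The division by 128 in the query's [Buffer size(MB)] column works because each buffer in sys.dm_os_buffer_descriptors represents one 8 KB page, so 128 pages make up 1 MB. A minimal Python sketch of that conversion (the page count here is just an illustrative number):

```python
def pages_to_mb(page_count: int) -> float:
    """Convert a count of 8 KB pages to megabytes, mirroring the
    query's COUNT(*)/128.0 expression (128 * 8 KB = 1 MB)."""
    return round(page_count / 128.0, 2)

# e.g. 25,600 cached pages for one index is 200 MB of buffer pool
print(pages_to_mb(25600))  # -> 200.0
```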

If you see an index that is using a lot of space in the SQL Server buffer pool (because it shows up near the top of this query), then I would investigate whether SQL Server data compression might make sense for that index. What you want to look for are indexes that are highly compressible, on relatively static data, at least as far as UPDATES are concerned.

 

Query #58 is Table Sizes. This query retrieves information from the sys.partitions object catalog view about the table sizes and clustered index (or heap) data compression status in the current database. Query #58 is shown in Figure 2.

-- Get Table names, row counts, and compression status for clustered index or heap  (Query 58) (Table Sizes)
SELECT OBJECT_NAME(object_id) AS [ObjectName],
SUM(Rows) AS [RowCount], data_compression_desc AS [CompressionType]
FROM sys.partitions WITH (NOLOCK)
WHERE index_id < 2 --ignore the partitions from the non-clustered index if any
AND OBJECT_NAME(object_id) NOT LIKE N'sys%'
AND OBJECT_NAME(object_id) NOT LIKE N'queue_%'
AND OBJECT_NAME(object_id) NOT LIKE N'filestream_tombstone%'
AND OBJECT_NAME(object_id) NOT LIKE N'fulltext%'
AND OBJECT_NAME(object_id) NOT LIKE N'ifts_comp_fragment%'
AND OBJECT_NAME(object_id) NOT LIKE N'filetable_updates%'
AND OBJECT_NAME(object_id) NOT LIKE N'xml_index_nodes%'
AND OBJECT_NAME(object_id) NOT LIKE N'sqlagent_job%'
AND OBJECT_NAME(object_id) NOT LIKE N'plan_persist%'
GROUP BY object_id, data_compression_desc
ORDER BY SUM(Rows) DESC OPTION (RECOMPILE);

-- Gives you an idea of table sizes, and possible data compression opportunities

Figure 2: Query #58 Table Sizes

This query simply shows you the row counts and data compression status for the clustered index or heap for each table in the current database. I use this query to look for tables that might be good candidates of SQL Server data compression. Again, what you are looking for are large tables, that are relatively static, that compress well.

SQL Server Diagnostic Information Queries Detailed, Day 24

For Day 24 of this series, we start out with Query #54, which is Bad NC Indexes. This query retrieves information from the sys.dm_db_index_usage_stats dynamic management view and the sys.indexes object catalog view about non-clustered indexes that have more writes than reads in the current database. Query #54 is shown in Figure 1.

-- Possible Bad NC Indexes (writes > reads)  (Query 54) (Bad NC Indexes)
SELECT OBJECT_NAME(s.[object_id]) AS [Table Name], i.name AS [Index Name], i.index_id,
i.is_disabled, i.is_hypothetical, i.has_filter, i.fill_factor,
user_updates AS [Total Writes], user_seeks + user_scans + user_lookups AS [Total Reads],
user_updates - (user_seeks + user_scans + user_lookups) AS [Difference]
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
ON s.[object_id] = i.[object_id]
AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.[object_id],'IsUserTable') = 1
AND s.database_id = DB_ID()
AND user_updates > (user_seeks + user_scans + user_lookups)
AND i.index_id > 1
ORDER BY [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC OPTION (RECOMPILE);

-- Look for indexes with high numbers of writes and zero or very low numbers of reads
-- Consider your complete workload, and how long your instance has been running
-- Investigate further before dropping an index!

Figure 1: Query #54 Bad NC Indexes

What you are looking for with this query are indexes that have high numbers of writes and very few or even zero reads. If you are paying the cost to maintain an index as the data changes in your table, but the index is never used for reads, then you are placing unneeded stress on your storage subsystem that is not providing any benefits to the system. Having unused indexes also makes your database larger, and makes index maintenance more time consuming and resource intensive.

One key point to keep in mind before you start dropping indexes that appear to be unused is how long your SQL Server instance has been running. Before you drop an index, consider whether you have seen your complete normal business cycle. Perhaps there are monthly reports that actually do use an index that normally does not see any read activity with your regular workload.
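The ranking logic of Query #54 is simply "writes minus reads, descending": the bigger the gap, the more maintenance cost you are paying for the least read benefit. A small illustrative Python sketch of that ordering (the index names and counts below are invented):

```python
# Hypothetical usage-stats rows, ranked the way Query #54 ranks them:
# by user_updates minus total reads, largest difference first.
indexes = [
    {"name": "IX_Orders_Status", "writes": 90000, "reads": 12},
    {"name": "IX_Orders_Region", "writes": 50000, "reads": 48000},
    {"name": "IX_Orders_Legacy", "writes": 75000, "reads": 0},
]

ranked = sorted(indexes, key=lambda i: i["writes"] - i["reads"], reverse=True)
for ix in ranked:
    print(ix["name"], ix["writes"] - ix["reads"])
```

The index with heavy writes and almost no reads sorts to the top, which is exactly the drop-candidate profile the query surfaces.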

 

 

Query #55 is Missing Indexes. This query retrieves information from the sys.dm_db_missing_index_group_stats dynamic management view, the sys.dm_db_missing_index_groups dynamic management view, the sys.dm_db_missing_index_details dynamic management view and the sys.partitions catalog view about “missing” indexes that the SQL Server Query Optimizer thinks that it would like to have in the current database. Query #55 is shown in Figure 2.

-- Missing Indexes for current database by Index Advantage  (Query 55) (Missing Indexes)
SELECT DISTINCT CONVERT(decimal(18,2), user_seeks * avg_total_user_cost * (avg_user_impact * 0.01)) AS [index_advantage],
migs.last_user_seek, mid.[statement] AS [Database.Schema.Table],
mid.equality_columns, mid.inequality_columns, mid.included_columns,
migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact,
OBJECT_NAME(mid.[object_id]) AS [Table Name], p.rows AS [Table Rows]
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
ON mig.index_handle = mid.index_handle
INNER JOIN sys.partitions AS p WITH (NOLOCK)
ON p.[object_id] = mid.[object_id]
WHERE mid.database_id = DB_ID()
ORDER BY index_advantage DESC OPTION (RECOMPILE);

-- Look at index advantage, last user seek time, number of user seeks to help determine source and importance
-- SQL Server is overly eager to add included columns, so beware
-- Do not just blindly add indexes that show up from this query!!!

Figure 2: Query #55 Missing Indexes

This query is very useful, but also very easy to misinterpret and misuse. I have seen many novice DBAs and developers use the results of this query to pretty badly over-index their databases, which affects their database size and hurts insert, update, and delete performance. I like to focus on the last_user_seek column, and see how long ago that was. Was it a few seconds or minutes ago, or was it days or weeks ago?

I then start looking at the user_seeks, avg_total_user_cost, and avg_user_impact columns to get a sense for how often SQL Server thinks it wants this proposed index, how expensive it is not to have the index, and how much the query optimizer thinks the cost of the query would be reduced if it did have this index that it is requesting.
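The index_advantage metric that the query sorts by combines those three columns into one number: seeks times average cost times the predicted percentage improvement. A quick Python rendering of that expression (the input values below are just an example):

```python
def index_advantage(user_seeks: int, avg_total_user_cost: float,
                    avg_user_impact: float) -> float:
    """The [index_advantage] expression from Query #55:
    user_seeks * avg_total_user_cost * (avg_user_impact * 0.01)."""
    return round(user_seeks * avg_total_user_cost * (avg_user_impact * 0.01), 2)

# 5,000 seeks at an average plan cost of 12.5, with a predicted 80% improvement
print(index_advantage(5000, 12.5, 80.0))  # -> 50000.0
```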

Next, I’ll look at any other proposed indexes on the same table to see if I can come up with a wider, consolidated index that covers multiple requested indexes. Finally, I’ll look at the existing indexes on that table, and look at the index usage metrics for that table to have a better idea of whether a new index would be a good idea, based on the volatility of that table.

This query is very similar to Query #28, but this one is only for the current database. It also pulls back the number of rows in a table, which is useful information when you are considering creating a new index, especially when you are using SQL Server Standard Edition, which does not have online index operations.

 

Query #56 is Missing Index Warnings. This query retrieves information from the sys.dm_exec_cached_plans dynamic management view and the sys.dm_exec_query_plan dynamic management function about missing index warnings in the plan cache for the current database. Query #56 is shown in Figure 3.

-- Find missing index warnings for cached plans in the current database  (Query 56) (Missing Index Warnings)
-- Note: This query could take some time on a busy instance
SELECT TOP(25) OBJECT_NAME(objectid) AS [ObjectName],
               query_plan, cp.objtype, cp.usecounts, cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp WITH (NOLOCK)
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE CAST(query_plan AS NVARCHAR(MAX)) LIKE N'%MissingIndex%'
AND dbid = DB_ID()
ORDER BY cp.usecounts DESC OPTION (RECOMPILE);

-- Helps you connect missing indexes to specific stored procedures or queries
-- This can help you decide whether to add them or not

Figure 3: Query #56 Missing Index Warnings

This query (which can be time consuming on a busy instance with a large plan cache) shows you where you have cached query plans with missing index warnings. This is very helpful, since it can often help you tie requested “missing” indexes to a particular stored procedure or prepared query plan.

SQL Server Diagnostic Information Queries Detailed, Day 23

For Day 23 of this series, we start out with Query #52, which is SP Logical Writes. This query retrieves information from the sys.procedures object catalog view and the sys.dm_exec_procedure_stats dynamic management view about the cached stored procedures that have the highest cumulative total logical writes in the current database. Query #52 is shown in Figure 1.

-- Top Cached SPs By Total Logical Writes (Query 52) (SP Logical Writes)
-- Logical writes relate to both memory and disk I/O pressure
SELECT TOP(25) p.name AS [SP Name], qs.total_logical_writes AS [TotalLogicalWrites],
qs.total_logical_writes/qs.execution_count AS [AvgLogicalWrites], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(Minute, qs.cached_time, GETDATE()), 0) AS [Calls/Minute],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
AND qs.total_logical_writes > 0
ORDER BY qs.total_logical_writes DESC OPTION (RECOMPILE);

-- This helps you find the most expensive cached stored procedures from a write I/O perspective
-- You should look at this if you see signs of I/O pressure or of memory pressure

Figure 1: Query #52 SP Logical Writes

This query lets you see which cached stored procedures have the highest cumulative number of logical writes in this database. This helps you see which stored procedures are causing the most write I/O pressure for this database. If you are seeing any signs of high write I/O latency on your instance of SQL Server (and if this database is causing a lot of I/O activity, as shown in Query #31), then the results of this query can help you figure out which stored procedures are the biggest offenders.

 

Query #53 is Top IO Statements. This query retrieves information from the sys.dm_exec_query_stats dynamic management view and the sys.dm_exec_sql_text dynamic management function about the cached query statements that have the highest average I/O activity in the current database. Query #53 is shown in Figure 2.

-- Lists the top statements by average input/output usage for the current database  (Query 53) (Top IO Statements)
SELECT TOP(50) OBJECT_NAME(qt.objectid, dbid) AS [SP Name],
(qs.total_logical_reads + qs.total_logical_writes)/qs.execution_count AS [Avg IO], qs.execution_count AS [Execution Count],
SUBSTRING(qt.[text], qs.statement_start_offset/2,
    (CASE
        WHEN qs.statement_end_offset = -1
     THEN LEN(CONVERT(nvarchar(max), qt.[text])) * 2
        ELSE qs.statement_end_offset
     END - qs.statement_start_offset)/2) AS [Query Text]
FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.[dbid] = DB_ID()
ORDER BY [Avg IO] DESC OPTION (RECOMPILE);

-- Helps you find the most expensive statements for I/O by SP

Figure 2: Query #53 Top IO Statements

This query shows you which query statements (which are often inside of stored procedures) are causing the highest average I/O activity in the current database. Again, if you are under internal memory pressure, or if you are seeing high I/O latency for reads or for writes, the results of this query can point you in the right direction for further investigation.

Perhaps you have a query that is doing a clustered index scan because it is missing a useful non-clustered index. Perhaps a query is pulling back more rows or columns of data than it really needs (although it can be hard for you to confirm this as a DBA). Perhaps the table(s) that are involved in this query might have indexes that would be good candidates for SQL Server Data Compression. There are many, many possible issues and actions that you can investigate in this area!
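The SUBSTRING arithmetic in Query #53 trips people up: the statement offsets in sys.dm_exec_query_stats are byte offsets into nvarchar text (two bytes per character), and an end offset of -1 is a sentinel meaning "to the end of the batch". A rough Python sketch of that logic, under those assumptions:

```python
def extract_statement(batch_text: str, start_offset: int, end_offset: int) -> str:
    """Mimic Query #53's SUBSTRING arithmetic: offsets are byte positions
    into nvarchar text (2 bytes/char); end_offset == -1 means end of batch."""
    if end_offset == -1:                      # sentinel: run to end of the batch
        end_offset = len(batch_text) * 2
    start = start_offset // 2                 # bytes -> characters
    length = (end_offset - start_offset) // 2
    return batch_text[start:start + length]

batch = "SELECT 1; SELECT 2;"
# the second statement starts at character 10, i.e. byte offset 20
print(extract_statement(batch, 20, -1))  # -> SELECT 2;
```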

SQL Server Diagnostic Information Queries Detailed, Day 22

For Day 22 of this series, we start out with Query #50, which is SP Logical Reads. This query retrieves information from the sys.procedures object catalog view and the sys.dm_exec_procedure_stats dynamic management view about the cached stored procedures that have the highest cumulative total logical reads in the current database. Query #50 is shown in Figure 1.

-- Top Cached SPs By Total Logical Reads. Logical reads relate to memory pressure  (Query 50) (SP Logical Reads)
SELECT TOP(25) p.name AS [SP Name], qs.total_logical_reads AS [TotalLogicalReads],
qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(Minute, qs.cached_time, GETDATE()), 0) AS [Calls/Minute],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count
AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_logical_reads DESC OPTION (RECOMPILE);

-- This helps you find the most expensive cached stored procedures from a memory perspective
-- You should look at this if you see signs of memory pressure

Figure 1: Query #50 SP Logical Reads

This query lets you see which cached stored procedures have the highest cumulative number of logical reads in this database. This helps you see which stored procedures are causing the most memory pressure (and indirectly, read I/O pressure) for this database. If you are seeing any signs of memory pressure on your instance of SQL Server (and if this database is using a lot of memory in the SQL Server buffer pool, as shown in Query #32), then the results of this query can help you figure out which stored procedures are the biggest offenders.

 

Query #51 is SP Physical Reads. This query retrieves information from the sys.procedures object catalog view and the sys.dm_exec_procedure_stats dynamic management view about the cached stored procedures that have the highest cumulative total physical reads in the current database. Query #51 is shown in Figure 2.

-- Top Cached SPs By Total Physical Reads. Physical reads relate to disk read I/O pressure  (Query 51) (SP Physical Reads)
SELECT TOP(25) p.name AS [SP Name], qs.total_physical_reads AS [TotalPhysicalReads],
qs.total_physical_reads/qs.execution_count AS [AvgPhysicalReads], qs.execution_count,
qs.total_logical_reads, qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count
AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
AND qs.total_physical_reads > 0
ORDER BY qs.total_physical_reads DESC, qs.total_logical_reads DESC OPTION (RECOMPILE);

-- This helps you find the most expensive cached stored procedures from a read I/O perspective
-- You should look at this if you see signs of I/O pressure or of memory pressure

Figure 2: Query #51 SP Physical Reads

This query lets you see which cached stored procedures have the highest cumulative number of physical reads in this database. This helps you see which stored procedures are causing the most read I/O pressure (and indirectly, memory pressure) for this database. If you are seeing any signs of high read I/O latency on your instance of SQL Server (and if this database is causing a lot of I/O activity, as shown in Query #31), then the results of this query can help you figure out which stored procedures are the biggest offenders.

SQL Server Diagnostic Information Queries Detailed, Day 21

For Day 21 of this series, we start out with Query #48, which is SP Avg Elapsed Time. This query retrieves information from the sys.procedures object catalog view and the sys.dm_exec_procedure_stats dynamic management view about the cached stored procedures that have the highest average elapsed time in the current database. Query #48 is shown in Figure 1.

-- Top Cached SPs By Avg Elapsed Time (Query 48) (SP Avg Elapsed Time)
SELECT TOP(25) p.name AS [SP Name], qs.min_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
qs.max_elapsed_time, qs.last_elapsed_time, qs.total_elapsed_time, qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(Minute, qs.cached_time, GETDATE()), 0) AS [Calls/Minute],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.total_worker_time AS [TotalWorkerTime], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY avg_elapsed_time DESC OPTION (RECOMPILE);

-- This helps you find high average elapsed time cached stored procedures that
-- may be easy to optimize with standard query tuning techniques

Figure 1: Query #48 SP Avg Elapsed Time

This query gives you a chance to look like a super hero. It shows you the cached stored procedures that have the highest average elapsed time in the current database. This basically gives you a list of stored procedures to look at much more closely, to see if you can do any query optimization or index tuning to make them dramatically faster. If you are able to do your DBA magic and make a long-running stored procedure run much, much faster, people are going to notice, and perhaps think you are some sort of evil genius.

 

Query #49 is SP Worker Time. This query retrieves information from the sys.procedures object catalog view and the sys.dm_exec_procedure_stats dynamic management view about the cached stored procedures that have the highest cumulative worker time in the current database. Query #49 is shown in Figure 2.

-- Top Cached SPs By Total Worker time. Worker time relates to CPU cost  (Query 49) (SP Worker Time)
SELECT TOP(25) p.name AS [SP Name], qs.total_worker_time AS [TotalWorkerTime],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(Minute, qs.cached_time, GETDATE()), 0) AS [Calls/Minute],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count
AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_worker_time DESC OPTION (RECOMPILE);

-- This helps you find the most expensive cached stored procedures from a CPU perspective
-- You should look at this if you see signs of CPU pressure

Figure 2: Query #49 SP Worker Time

This query shows you which cached stored procedures have the highest cumulative total worker time in the current database. Worker time means CPU cost. If your instance or server is under CPU pressure, then looking at the stored procedures that show up at the top of this diagnostic query should be a high priority. Even if you are not under sustained CPU pressure, keeping an eye on the top offenders on this query is a good idea. Quite often, you will find the same stored procedures showing up on several of these different “Top SP cost” queries, which means that the SP in question is expensive from multiple perspectives.

SQL Server Diagnostic Information Queries Detailed, Day 20

For Day 20 of this series, we start out with Query #46, which is Query Execution Counts. This query retrieves information from the sys.dm_exec_query_stats dynamic management view, the sys.dm_exec_sql_text dynamic management function, and the sys.dm_exec_query_plan dynamic management function about the most frequently executed cached queries in the current database. Query #46 is shown in Figure 1.

-- Get most frequently executed queries for this database (Query 46) (Query Execution Counts)
SELECT TOP(50) LEFT(t.[text], 50) AS [Short Query Text], qs.execution_count AS [Execution Count],
qs.total_logical_reads AS [Total Logical Reads],
qs.total_logical_reads/qs.execution_count AS [Avg Logical Reads],
qs.total_worker_time AS [Total Worker Time],
qs.total_worker_time/qs.execution_count AS [Avg Worker Time],
qs.total_elapsed_time AS [Total Elapsed Time],
qs.total_elapsed_time/qs.execution_count AS [Avg Elapsed Time],
qs.creation_time AS [Creation Time]
--,t.[text] AS [Complete Query Text], qp.query_plan AS [Query Plan] -- uncomment these columns if not copying results to Excel
FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp
WHERE t.dbid = DB_ID()
ORDER BY qs.execution_count DESC OPTION (RECOMPILE);

Figure 1: Query #46 Query Execution Counts

This query shows you which cached queries (which might be part of a stored procedure or not) are being called the most often. This is useful as a part of understanding the nature of your workload. Keep in mind that just because a query is called a lot does not necessarily mean that it is a key part of your workload. It might be, but it could be that it is not actually that expensive for individual calls or cumulatively. You will need to look at the other metrics for that query to determine that.

You may notice that I have one line of this query commented out. This is because Excel does not deal very well with large quantities of text or XML. If you are working with this in real time, you should probably uncomment that line, so you see the extra information that it retrieves.

 

Query #47 is SP Execution Counts. This query retrieves information from the sys.procedures object catalog view and the sys.dm_exec_procedure_stats dynamic management view about the most frequently executed cached stored procedures in the current database. Query #47 is shown in Figure 2.

-- Top Cached SPs By Execution Count (Query 47) (SP Execution Counts)
SELECT TOP(100) p.name AS [SP Name], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(Minute, qs.cached_time, GETDATE()), 0) AS [Calls/Minute],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime], qs.total_worker_time AS [TotalWorkerTime],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.execution_count DESC OPTION (RECOMPILE);

-- Tells you which cached stored procedures are called the most often
-- This helps you characterize and baseline your workload

Figure 2: Query #47 SP Execution Counts

This query shows you which stored procedures with cached query plans are being called the most often. This helps you understand the nature and magnitude of your workload. Ideally, you should have a general idea of what your normal workload looks like, in terms of how many calls/minute or per second you are seeing for your top stored procedures.

If this rate suddenly changes, you would want to investigate further to understand what might have happened. Understanding which stored procedures are called the most often can also help you identify possible candidates for middle-tier caching.
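The [Calls/Minute] column in Query #47 is simply the execution count divided by the whole minutes since the plan was cached. A rough Python analogue, using T-SQL-style integer division and guarding the just-cached case explicitly (the timestamps and counts below are invented):

```python
from datetime import datetime

def calls_per_minute(execution_count: int, cached_time: datetime,
                     now: datetime) -> int:
    """Approximate the query's [Calls/Minute] expression:
    execution_count / DATEDIFF(Minute, cached_time, now)."""
    minutes = int((now - cached_time).total_seconds() // 60)
    if minutes == 0:                 # plan cached less than a minute ago
        return 0
    return execution_count // minutes   # integer division, as in T-SQL

cached = datetime(2024, 1, 1, 8, 0, 0)
now = datetime(2024, 1, 1, 10, 0, 0)   # cached 120 minutes ago
print(calls_per_minute(36000, cached, now))  # -> 300
```

Note that this rate is an average over the whole time the plan has been cached, so a recently recompiled plan and a long-cached plan are not directly comparable.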

SQL Server Diagnostic Information Queries Detailed, Day 19

After eighteen days of queries, we have made it through all of the instance-level queries in this set. Now, we move on to the database-specific queries in the set. For these queries, you need to be connected to a particular database that you are concerned with, rather than the master system database.

For Day 19 of this series, we start out with Query #44, which is File Sizes and Space. This query retrieves information from the sys.database_files system catalog view and the sys.data_spaces system catalog view about the sizes and available space for all of your database files. Query #44 is shown in Figure 1.

-- Individual File Sizes and space available for current database  (Query 44) (File Sizes and Space)
SELECT f.name AS [File Name], f.physical_name AS [Physical Name],
CAST((f.size/128.0) AS DECIMAL(15,2)) AS [Total Size in MB],
CAST(f.size/128.0 - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int)/128.0 AS DECIMAL(15,2))
AS [Available Space In MB], [file_id], fg.name AS [Filegroup Name],
f.is_percent_growth, f.growth
FROM sys.database_files AS f WITH (NOLOCK)
LEFT OUTER JOIN sys.data_spaces AS fg WITH (NOLOCK)
ON f.data_space_id = fg.data_space_id OPTION (RECOMPILE);

-- Look at how large and how full the files are and where they are located
-- Make sure the transaction log is not full!!

Figure 1: Query #44 File Sizes and Space

This query lets you see how large each of your database files are, plus how much space is available in each of your database files. For data files, you can also see what file group each file is in. You can also see exactly where each file is located in the file system. This is all extremely useful information.

 

Query #45 is IO Stats By File. This query retrieves information from the sys.dm_io_virtual_file_stats dynamic management function and the sys.database_files system catalog view about the cumulative I/O usage by database file. Query #45 is shown in Figure 2.

-- I/O Statistics by file for the current database  (Query 45) (IO Stats By File)
SELECT DB_NAME(DB_ID()) AS [Database Name], df.name AS [Logical Name], vfs.[file_id], df.type_desc,
df.physical_name AS [Physical Name], CAST(vfs.size_on_disk_bytes/1048576.0 AS DECIMAL(10, 2)) AS [Size on Disk (MB)],
vfs.num_of_reads, vfs.num_of_writes, vfs.io_stall_read_ms, vfs.io_stall_write_ms,
CAST(100. * vfs.io_stall_read_ms/(vfs.io_stall_read_ms + vfs.io_stall_write_ms) AS DECIMAL(10,1)) AS [IO Stall Reads Pct],
CAST(100. * vfs.io_stall_write_ms/(vfs.io_stall_write_ms + vfs.io_stall_read_ms) AS DECIMAL(10,1)) AS [IO Stall Writes Pct],
(vfs.num_of_reads + vfs.num_of_writes) AS [Writes + Reads],
CAST(vfs.num_of_bytes_read/1048576.0 AS DECIMAL(10, 2)) AS [MB Read],
CAST(vfs.num_of_bytes_written/1048576.0 AS DECIMAL(10, 2)) AS [MB Written],
CAST(100. * vfs.num_of_reads/(vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)) AS [# Reads Pct],
CAST(100. * vfs.num_of_writes/(vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)) AS [# Write Pct],
CAST(100. * vfs.num_of_bytes_read/(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)) AS [Read Bytes Pct],
CAST(100. * vfs.num_of_bytes_written/(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)) AS [Written Bytes Pct]
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
INNER JOIN sys.database_files AS df WITH (NOLOCK)
ON vfs.[file_id] = df.[file_id] OPTION (RECOMPILE);

-- This helps you characterize your workload better from an I/O perspective for this database
-- It helps you determine whether you have an OLTP or DW/DSS type of workload

Figure 2: Query #45 IO Stats By File

This query lets you see all of the cumulative file activity for each of the files in the current database, since SQL Server was last started. This includes your normal workload activity, plus any other activity that touches your data and log files. This would include things like database backups, index maintenance, DBCC CHECKDB activity, and HA-related activity from things like transactional replication, database mirroring, and AlwaysOn AG-related activity.

Looking at the results of this query helps you understand what kind of I/O workload activity you are seeing on each of your database files. This helps you do a better job when it comes to designing and configuring your storage subsystem.
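The stall-percentage columns in Query #45 express each side's cumulative stall time as a share of the file's total stall time. A minimal Python sketch of that calculation (the millisecond values below are illustrative):

```python
def stall_percentages(io_stall_read_ms: int, io_stall_write_ms: int):
    """Mirror the [IO Stall Reads Pct] / [IO Stall Writes Pct] columns:
    each side's stall time as a percentage of total stall time."""
    total = io_stall_read_ms + io_stall_write_ms
    read_pct = round(100.0 * io_stall_read_ms / total, 1)
    write_pct = round(100.0 * io_stall_write_ms / total, 1)
    return read_pct, write_pct

# e.g. a data file with 45,000 ms of read stalls and 15,000 ms of write stalls
print(stall_percentages(45000, 15000))  # -> (75.0, 25.0)
```

A read-heavy stall profile on a data file often points toward a DW/DSS-style workload or memory pressure, while heavy write stalls on the log file point at log-drive latency.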

CPU-Z Benchmark Survey

The latest version of the free CPU-Z utility has a quick CPU benchmark test that just takes a couple of minutes to run. As part of a personal project that I am working on (which I think will be very interesting and beneficial to the SQL Server community), I am trying to collect as many CPU-Z CPU benchmark results as possible, covering as many different families and models of processors as possible. If you have about five minutes of spare time, perhaps you can help me with this!

With CPU-Z 1.75, simply click on the “Bench CPU” button on the Bench tab (as shown in Figure 2). Once the test finishes in a couple of minutes, take a screenshot of that tab (ALT-Print Screen in Windows), and paste it in an e-mail. Then take a screenshot of the CPU tab (so I can easily identify your processor), like you see in Figure 1, and include that in your e-mail. Another way to get these screenshots is to hit the F5 key, while you are on those two tabs, which will save a .bmp file in the same directory as CPU-Z.

I am mainly looking for results for bare-metal, non-virtualized machines right now. If possible, make sure the Windows High Performance power plan is enabled, and that your machine is plugged in (if it is a laptop or tablet). Ideally, you would do this while your machine is relatively idle, so that all of the processing power is available for the test.

If you run this on a server, please don’t do it while it is in Production!

 


Figure 1: CPU-Z CPU Tab

 

Make sure to only click the Bench CPU button, not the Stress CPU button!


Figure 2: CPU-Z Bench Tab

Once you are done, simply send me your screenshots by e-mail. Please don’t try to return any results by comments on this blog post.

If you would like to do this for multiple machines, that would be great!  Thanks!

SQL Server Diagnostic Information Queries Detailed, Day 18

For Day 18 of this series, we start out with Query #41, which is Memory Clerk Usage. This query retrieves information from the sys.dm_os_memory_clerks dynamic management view about total memory usage by your active memory clerks. Query #41 is shown in Figure 1.

   1: -- Memory Clerk Usage for instance  (Query 41) (Memory Clerk Usage)

   2: -- Look for high value for CACHESTORE_SQLCP (Ad-hoc query plans)

   3: SELECT TOP(10) mc.[type] AS [Memory Clerk Type], 

   4:        CAST((SUM(mc.pages_kb)/1024.0) AS DECIMAL (15,2)) AS [Memory Usage (MB)] 

   5: FROM sys.dm_os_memory_clerks AS mc WITH (NOLOCK)

   6: GROUP BY mc.[type]  

   7: ORDER BY SUM(mc.pages_kb) DESC OPTION (RECOMPILE);

   8:  

   9: -- MEMORYCLERK_SQLBUFFERPOOL was new for SQL Server 2012. It should be your highest consumer of memory

  10:  

  11: -- CACHESTORE_SQLCP  SQL Plans         

  12: -- These are cached SQL statements or batches that aren't in stored procedures, functions and triggers

  13: -- Watch out for high values for CACHESTORE_SQLCP

  14:  

  15: -- CACHESTORE_OBJCP  Object Plans      

  16: -- These are compiled plans for stored procedures, functions and triggers

Figure 1: Query #41 Memory Clerk Usage

This query shows you which memory clerks are using the most memory on your instance. With SQL Server 2012 or newer, your top memory clerk by memory usage should be MEMORYCLERK_SQLBUFFERPOOL, meaning memory usage by the SQL Server Buffer Pool. It is very common to see a high value for the CACHESTORE_SQLCP memory clerk, indicating that you have multiple GB of cached ad hoc or prepared query plans in the plan cache. If you see that, then you should look at the next query more closely, for several things you can do to help mitigate this issue.

 

Query #42 is Ad hoc Queries. This query retrieves information from the sys.dm_exec_cached_plans dynamic management view and the sys.dm_exec_sql_text dynamic management function about the single-use ad hoc and prepared query plans. Query #42 is shown in Figure 2.

   1: -- Find single-use, ad-hoc and prepared queries that are bloating the plan cache  (Query 42) (Ad hoc Queries)

   2: SELECT TOP(50) t.[text] AS [QueryText], cp.cacheobjtype, cp.objtype, cp.size_in_bytes/1024 AS [Plan Size in KB]

   3: FROM sys.dm_exec_cached_plans AS cp WITH (NOLOCK)

   4: CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t 

   5: WHERE cp.cacheobjtype = N'Compiled Plan' 

   6: AND cp.objtype IN (N'Adhoc', N'Prepared') 

   7: AND cp.usecounts = 1

   8: ORDER BY cp.size_in_bytes DESC OPTION (RECOMPILE);

   9:  

  10: -- Gives you the text, type and size of single-use ad-hoc and prepared queries that waste space in the plan cache

  11: -- Enabling 'optimize for ad hoc workloads' for the instance can help (SQL Server 2008 and above only)

  12: -- Running DBCC FREESYSTEMCACHE ('SQL Plans') periodically may be required to better control this

  13: -- Enabling forced parameterization for the database can help, but test first!

  14:  

  15: -- Plan cache, adhoc workloads and clearing the single-use plan cache bloat

  16: -- http://www.sqlskills.com/blogs/kimberly/plan-cache-adhoc-workloads-and-clearing-the-single-use-plan-cache-bloat/

Figure 2: Query #42 Ad hoc Queries

This query will show you which single-use ad hoc or prepared query plans are using the most space in the plan cache. Once you know who the culprits are, you can start investigating them more closely. Perhaps these queries can be converted to stored procedures or parameterized SQL. At the very least, I think you should enable “optimize for ad hoc workloads” at the instance level pretty much as a default setting. On top of this, it is usually a good idea to periodically flush that particular cache, using the DBCC FREESYSTEMCACHE ('SQL Plans'); command.
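If you do decide to enable that setting, a minimal sketch looks like this (assuming you have permission to change instance-level configuration; the setting only affects plans cached after it is enabled):

```sql
-- Hedged sketch: enable 'optimize for ad hoc workloads' at the instance level
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Periodically clear single-use plan cache bloat
-- (causes recompiles for affected statements, so prefer a quiet period)
DBCC FREESYSTEMCACHE ('SQL Plans');
```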

 

Query #43 is Top Logical Reads Queries. This query retrieves information from the sys.dm_exec_query_stats dynamic management view, the sys.dm_exec_sql_text dynamic management function and the sys.dm_exec_query_plan dynamic management function about the cached query plans that have the highest total logical reads. Query #43 is shown in Figure 3.

   1: -- Get top total logical reads queries for entire instance (Query 43) (Top Logical Reads Queries)

   2: SELECT TOP(50) DB_NAME(t.[dbid]) AS [Database Name], LEFT(t.[text], 50) AS [Short Query Text],

   3: qs.total_logical_reads AS [Total Logical Reads],

   4: qs.min_logical_reads AS [Min Logical Reads],

   5: qs.total_logical_reads/qs.execution_count AS [Avg Logical Reads],

   6: qs.max_logical_reads AS [Max Logical Reads],   

   7: qs.min_worker_time AS [Min Worker Time],

   8: qs.total_worker_time/qs.execution_count AS [Avg Worker Time], 

   9: qs.max_worker_time AS [Max Worker Time], 

  10: qs.min_elapsed_time AS [Min Elapsed Time], 

  11: qs.total_elapsed_time/qs.execution_count AS [Avg Elapsed Time], 

  12: qs.max_elapsed_time AS [Max Elapsed Time],

  13: qs.execution_count AS [Execution Count], qs.creation_time AS [Creation Time]

  14: --,t.[text] AS [Complete Query Text], qp.query_plan AS [Query Plan] -- uncomment these columns if not copying results to Excel

  15: FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)

  16: CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t 

  17: CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp 

  18: ORDER BY qs.total_logical_reads DESC OPTION (RECOMPILE);

  19:  

  20:  

  21: -- Helps you find the most expensive queries from a memory perspective across the entire instance

  22: -- Can also help track down parameter sniffing issues

Figure 3: Query #43 Top Logical Reads Queries

Having logical reads means that you are finding the data you need to satisfy a query in the SQL Server Buffer Pool rather than having to go out to the storage subsystem, which is a good thing. Even so, queries with high numbers of logical reads create extra internal memory pressure on your system. They also indirectly create read I/O pressure, since the data in the buffer pool initially has to be read from the storage subsystem. If you are seeing signs of memory pressure, then knowing which cached queries (across the entire instance) have the highest number of total logical reads can help you understand which queries are causing the most memory pressure.

Once you understand this, then you can start looking at individual queries in more detail. Perhaps there is a missing index that is causing a clustered index scan that is causing high numbers of logical reads in a query. Perhaps there is an implicit conversion in a JOIN or in a WHERE clause that is causing SQL Server to ignore a useful index. Maybe someone is pulling back more columns than they need for a query. There are lots of possibilities here.
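As one hypothetical example of the implicit-conversion problem (the table and column names here are invented for illustration), comparing a varchar column to an nvarchar literal forces a conversion on the column side, which can prevent SQL Server from seeking on an otherwise useful index:

```sql
-- Hypothetical example: dbo.Customers.LastName is varchar(50) and indexed
-- The N'' literal is nvarchar, so the column is implicitly converted,
-- which typically turns an index seek into a scan
SELECT CustomerID
FROM dbo.Customers
WHERE LastName = N'Smith';

-- Matching the literal's data type to the column avoids the conversion
SELECT CustomerID
FROM dbo.Customers
WHERE LastName = 'Smith';
```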

These three Pluralsight Courses go into even more detail about how to run these queries and interpret the results:

SQL Server 2014 DMV Diagnostic Queries – Part 1

SQL Server 2014 DMV Diagnostic Queries – Part 2

SQL Server 2014 DMV Diagnostic Queries – Part 3

SQL Server Diagnostic Information Queries Detailed, Day 17

For Day 17 of this series, we start out with Query #39, which is PLE by NUMA Node. This query retrieves information from the sys.dm_os_performance_counters dynamic management view about your page life expectancy (PLE) by NUMA node. Query #39 is shown in Figure 1.

   1: -- Page Life Expectancy (PLE) value for each NUMA node in current instance  (Query 39) (PLE by NUMA Node)

   2: SELECT @@SERVERNAME AS [Server Name], [object_name], instance_name, cntr_value AS [Page Life Expectancy]

   3: FROM sys.dm_os_performance_counters WITH (NOLOCK)

   4: WHERE [object_name] LIKE N'%Buffer Node%' -- Handles named instances

   5: AND counter_name = N'Page life expectancy' OPTION (RECOMPILE);

   6:  

   7: -- PLE is a good measurement of memory pressure

   8: -- Higher PLE is better. Watch the trend over time, not the absolute value

   9: -- This will only return one row for non-NUMA systems

  10:  

  11: -- Page Life Expectancy isn’t what you think…

  12: -- http://www.sqlskills.com/blogs/paul/page-life-expectancy-isnt-what-you-think/

Figure 1: Query #39 PLE by NUMA Node

I think that page life expectancy (PLE) is probably one of the best ways to gauge whether you are under internal memory pressure, as long as you think about it correctly. What you should do is monitor your PLE value ranges over time so that you know what your typical minimum, average, and maximum PLE values are at different times and on different days of the week. They will usually vary quite a bit according to your workload.

The ancient guidance that a PLE measurement of 300 or higher is good is really not relevant for modern database servers, which have much higher amounts of physical RAM than was typical 10-12 years ago. Basically, higher PLE values are always better. You want to watch the ranges and trends over time, rather than focus on a single measurement.
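One hedged sketch for trending PLE over time (the logging table here is hypothetical) is to capture the counter value periodically, for example from a SQL Server Agent job, and then analyze the history:

```sql
-- Hypothetical logging table for trending PLE over time
CREATE TABLE dbo.PLE_History (
    capture_time datetime2 NOT NULL DEFAULT SYSDATETIME(),
    [object_name] nvarchar(128) NOT NULL,
    ple_seconds bigint NOT NULL
);

-- Run this periodically (for example, every five minutes from an Agent job)
INSERT INTO dbo.PLE_History ([object_name], ple_seconds)
SELECT [object_name], cntr_value
FROM sys.dm_os_performance_counters WITH (NOLOCK)
WHERE [object_name] LIKE N'%Buffer Node%'
AND counter_name = N'Page life expectancy';
```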

 

 

Query #40 is Memory Grants Pending. This query retrieves information from the sys.dm_os_performance_counters dynamic management view about the current value of the Memory Grants Pending performance counter. Query #40 is shown in Figure 2.

   1: -- Memory Grants Pending value for current instance  (Query 40) (Memory Grants Pending)

   2: SELECT @@SERVERNAME AS [Server Name], [object_name], cntr_value AS [Memory Grants Pending]                                                                                                       

   3: FROM sys.dm_os_performance_counters WITH (NOLOCK)

   4: WHERE [object_name] LIKE N'%Memory Manager%' -- Handles named instances

   5: AND counter_name = N'Memory Grants Pending' OPTION (RECOMPILE);

   6:  

   7: -- Run multiple times, and run periodically if you suspect you are under memory pressure

   8: -- Memory Grants Pending above zero for a sustained period is a very strong indicator of internal memory pressure

Figure 2: Query #40 Memory Grants Pending

This query is another way to determine whether you are under severe internal memory pressure. The value returned by this query can change from second to second, so you will want to run it multiple times when you suspect you are under memory pressure. Any sustained value above zero is not a good sign. In fact, it is a very bad sign, showing that you are under pretty extreme memory pressure.
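If this counter stays above zero, one way to see which queries are actually waiting is to look at the sys.dm_exec_query_memory_grants DMV, where grant_time is NULL while a request is still pending. A hedged sketch:

```sql
-- Hedged sketch: sessions currently waiting on a workspace memory grant
SELECT mg.session_id, mg.requested_memory_kb, mg.ideal_memory_kb,
       mg.wait_time_ms, t.[text] AS [Query Text]
FROM sys.dm_exec_query_memory_grants AS mg WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
WHERE mg.grant_time IS NULL OPTION (RECOMPILE);
```

Queries with very large requested_memory_kb values (often from big sorts or hash operations) are good candidates for closer tuning when grants are backing up.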