What Events are in a Trace?

Yes…I’m writing a post about SQL Trace…specifically events in a trace. There are still lots of folks that use Profiler and run server-side traces, and when I work with those customers I like to understand how the traces are configured. I want to see what events are in a trace, but I also want to see the filters and if the traces are writing to a file or rowset provider. I wrote this query ages ago and went looking for it today, which means I really should write a post so it’s easier to find 🙂

Where to start

In order to determine the events in the trace, you first need to know the ID for the trace, which is automatically generated when you create the trace.  You can find the ID in sys.traces:

SELECT *
FROM sys.traces;
GO

If I run this on my local instance, I get the output below:

sys.traces output, part I

sys.traces output, part II

What to review

I broke the output into two images because there are a few things to point out.  The trace with id = 1 almost always represents the default trace.  I have yet to find a system where it is not the default trace, but it’s possible (e.g. someone disables the default trace, or manages to get another trace to start up before the default trace).  If it’s writing to the LOG folder where SQL Server is installed, then I’m confident it’s the default trace and I’m not worried about it.
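
If you want to confirm whether the default trace is even enabled on an instance, sys.configurations has the setting (a quick sanity check):

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'default trace enabled';
GO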

The second trace, with id = 2, is one I created as a server-side trace. Notice that for the trace with id = 3, the path is NULL.  This indicates that the trace is not writing to a file, but to a rowset provider (this is why is_rowset = 1 for that row).  The rowset provider is typically, but not always, the Profiler UI.  When I see this for a client, I ask if they use a third-party monitoring tool (as this is how traces from third-party tools can appear).  Even if the customer has a monitoring tool, I want to see what events are being collected, and the filter(s).
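
If you just want to find the rowset traces on an instance, you can filter sys.traces directly (a minimal sketch):

SELECT id, status, start_time, reader_spid
FROM sys.traces
WHERE is_rowset = 1;
GO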

Some other items of interest in the output (you can pull just these with the query after this list):

  • start_time (when the trace started)
  • last_event_time (when the last event was captured by the trace)
  • event_count (number of events captured)
  • dropped_event_count (number of events not captured)
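
Here is that trimmed-down query; it’s the same sys.traces catalog view, just limited to the timing and event counters:

SELECT id, start_time, last_event_time, event_count, dropped_event_count
FROM sys.traces;
GO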

Notice that for the traces which write to a file, dropped_event_count is NULL.  When writing to a file, SQL Trace guarantees that it will not lose events.  This might sound great – you don’t want to miss any data, right?  But…that also means that if the volume of events captured exceeds SQL Trace’s ability to write those events out to the file, then the activities in SQL Server that generate those events might have to wait.  That is not a great thing.  Note that with the rowset provider, you can lose data.  If there is a 20-second stall where SQL Trace cannot push events to the provider, it will start dropping them.

What events are in the trace

The third trace is the one in which we are interested.  It is writing to a rowset provider, so we want to determine the events in the trace and who is running it.  The query below lists the events and columns that are being captured, along with any filters that exist:

DECLARE @traceID INT = 3;  -- the ID of the trace of interest, from sys.traces

SELECT
     e.name AS event_name,
     c.name AS column_name,
     CASE
          WHEN f.logical_operator = 0 THEN 'AND'
          ELSE 'OR'
     END AS logical_operator,
     CASE
          WHEN f.comparison_operator = 0 THEN 'EQUAL'
          WHEN f.comparison_operator = 1 THEN 'NOT EQUAL'
          WHEN f.comparison_operator = 2 THEN 'GREATER THAN'
          WHEN f.comparison_operator = 3 THEN 'LESS THAN'
          WHEN f.comparison_operator = 4 THEN 'GREATER THAN OR EQUAL'
          WHEN f.comparison_operator = 5 THEN 'LESS THAN OR EQUAL'
          WHEN f.comparison_operator = 6 THEN 'LIKE'
          WHEN f.comparison_operator = 7 THEN 'NOT LIKE'
     END AS comparison_operator,
     f.value
FROM sys.fn_trace_geteventinfo(@traceID) t
JOIN sys.trace_events e
     ON t.eventid = e.trace_event_id
JOIN sys.trace_columns c
     ON t.columnid = c.trace_column_id
LEFT OUTER JOIN sys.fn_trace_getfilterinfo(@traceID) f
     ON c.trace_column_id = f.columnid
ORDER BY e.name, c.name;
GO

Here is a subset of the output:

Events, columns, and filters for a trace

The good news is that there is a filter applied for the SP:StmtCompleted and SQL:StmtCompleted events.  The not-so-good news is that the Showplan XML event is being captured.  This event adds tremendous overhead to a system and is not recommended for a production environment.  Now I really want to know who is running this trace.

If you remember, back in the sys.traces output we had reader_spid, with a value of 78 for id = 3.  We can take that and query sys.dm_exec_sessions to get more information:

SELECT host_name, program_name, login_name
FROM sys.dm_exec_sessions
WHERE session_id = 78;
GO

Output from sys.dm_exec_sessions
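
As an aside, rather than hard-coding the session ID, you can join sys.traces to sys.dm_exec_sessions and resolve the reader for every rowset trace on the instance at once (a sketch):

SELECT t.id AS trace_id, s.host_name, s.program_name, s.login_name
FROM sys.traces t
JOIN sys.dm_exec_sessions s
     ON t.reader_spid = s.session_id
WHERE t.is_rowset = 1;
GO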

I now know the machine from which the rowset provider (in this case, Profiler) is running, and I know who is running it.  I can also look through the ERRORLOG to see who last started a trace with that ID, but I’m lazy and the query is easier.  In the ERRORLOG you will see a message:

SQL Trace ID 3 was started by login "HEDWIG\Erin"

A similar message is added when the trace is stopped.
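
If you’d rather mine the ERRORLOG from T-SQL than scroll through it, the undocumented xp_readerrorlog procedure can filter for these messages (a sketch; the parameters shown are the log file number, the log type where 1 = SQL Server, and up to two search strings):

EXEC sys.xp_readerrorlog 0, 1, N'SQL Trace ID', N'started';
GO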

Summary

If SQL Trace is used in production environments, it is important to be able to identify not just what events are captured in a trace, but also the filters and, to a lesser extent, the columns.  The query above is the quickest way to determine the components of a trace, as they are not available in any UI.  The exception is Profiler, and even then you can only see the definition of a trace you started yourself.

Database Compatibility Level and Query Store

A question I’ve gotten a few times when teaching relates to database compatibility level and Query Store. I was talking to a client yesterday about post-upgrade plans and implementing Query Store, and the topic came up again. They wanted to know what compatibility level the database needed in order to use Query Store.

The quick answer: it doesn’t matter.

Query Store is an engine feature that was introduced in SQL Server 2016 (and available in Azure SQL Database since Q4 of 2015) and continues to evolve with each release. It is a feature that has to be enabled at the database level, using ALTER DATABASE SET, and it is compatibility level agnostic.

However, Query Store does track both engine version and compatibility level at the plan level in sys.query_store_plan. This allows you to use Query Store data for comparing plan performance for a query before and after upgrades, both at the engine level and database compatibility level.
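
For a quick look at those two columns, you can query the catalog view directly (we’ll use a fuller query later in the post):

SELECT [plan_id], [query_id], [engine_version], [compatibility_level]
FROM [sys].[query_store_plan];
GO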

Query Store Demo Setup

I have a copy of WideWorldImporters restored on a SQL Server 2019 instance, and I’ll mention that it’s not the standard copy you get from GitHub; it’s one I’ve run the DataLoadSimulation.DailyProcessToCreateHistory stored procedure against to add more data and introduce some skew. If you recreate this against a copy of WideWorldImporters that you have, you may not see the same results.

The first step after restoring is to enable Query Store, and then clear out any old QS data. This is something I do only for demos; I don’t recommend it for a production database unless you’re ok with removing all Query Store data.

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE = ON;
GO
ALTER DATABASE [WideWorldImporters] SET QUERY_STORE (
	OPERATION_MODE = READ_WRITE, INTERVAL_LENGTH_MINUTES = 10
	);
GO

ALTER DATABASE [WideWorldImporters] SET QUERY_STORE CLEAR;
GO
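
To verify that Query Store is in the expected state after those commands, sys.database_query_store_options shows the actual state and interval (a quick check, run in the context of WideWorldImporters):

USE [WideWorldImporters];
GO
SELECT [actual_state_desc], [readonly_reason], [interval_length_minutes]
FROM [sys].[database_query_store_options];
GO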

Next, set the compatibility level to 110, and set the database to use the legacy Cardinality Estimator, just as if we had upgraded this database from SQL Server 2012 to SQL Server 2019.

USE [master];
GO
ALTER DATABASE [WideWorldImporters] 
	SET COMPATIBILITY_LEVEL = 110;
GO

USE [WideWorldImporters];
GO
ALTER DATABASE SCOPED CONFIGURATION 
	SET LEGACY_CARDINALITY_ESTIMATION = ON;
GO
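
A quick check confirms both settings took effect, using sys.databases for the compatibility level and sys.database_scoped_configurations for the Cardinality Estimator setting:

SELECT [name], [compatibility_level]
FROM [sys].[databases]
WHERE [name] = N'WideWorldImporters';

SELECT [name], [value]
FROM [sys].[database_scoped_configurations]
WHERE [name] = N'LEGACY_CARDINALITY_ESTIMATION';
GO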

Compatibility Level 110

We have a very simple query that we’ll execute 10 times against WideWorldImporters, and display the actual execution plan.

USE [WideWorldImporters];
GO

SELECT 
	[ol].[StockItemID], 
	[ol].[Description], 
	[ol].[UnitPrice],
	[o].[CustomerID], 
	[o].[SalespersonPersonID],
	[o].[OrderDate]
FROM [Sales].[OrderLines] [ol]
JOIN [Sales].[Orders] [o]
	ON [ol].[OrderID] = [o].[OrderID]
WHERE [ol].[UnitPrice] = 240.00
AND [o].[OrderDate] = '2016-04-16';
GO 10

Execution Plan with Compatibility Level = 110

The plan has two nested loops, with a clustered index scan against the Orders table, and an index seek against the FK_Sales_OrderLines_OrderID nonclustered index.  Within Query Store, the sys.query_store_plan table shows a compatibility_level of 110, and an engine_version of 15.0.1900.25.

Output from sys.query_store_plan

Compatibility Level 150

Without making any changes to the Query Store configuration, change the compatibility level to 150. The Cardinality Estimator version stays set to Legacy.

USE [master];
GO
ALTER DATABASE [WideWorldImporters] 
	SET COMPATIBILITY_LEVEL = 150;
GO

Next, re-run the same query against Orders and OrderLines and note the plan change:

Execution Plan with Compatibility Level = 150

We still have the clustered index scan against Orders, but it feeds into a Hash Match that uses a columnstore index scan against OrderLines to probe for the matching rows – it’s a different plan.  We see this in Query Store as a different plan_id (4), with a compatibility_level of 150.

SELECT 
    [qst].[query_text_id],
	[qsq].[query_id],  
	[qst].[query_sql_text], 
	[rs].[count_executions],
	[qsp].[plan_id], 
	[qsp].[compatibility_level],
	[qsp].[engine_version]
FROM [sys].[query_store_query] [qsq] 
JOIN [sys].[query_store_query_text] [qst]
	ON [qsq].[query_text_id] = [qst].[query_text_id]
JOIN [sys].[query_store_plan] [qsp] 
	ON [qsq].[query_id] = [qsp].[query_id]
JOIN [sys].[query_store_runtime_stats] [rs] 
	ON [qsp].[plan_id] = [rs].[plan_id] 
WHERE [qst].[query_text_id] = 1; 
GO

Output from sys.query_store_plan after change to compatibility level 150

Leveraging Query Store

One of the many ways you can take advantage of the data in Query Store, beyond just identifying problematic queries, is to use it for A/B testing.  This is entirely the purpose of the Query Tuning Assistant, which will compare plan performance for you, as well as stabilize query performance.  If you are unable to run a representative workload against your database, or just want to investigate specific, critical queries, you have all the data at your disposal in Query Store.  You simply need to be running SQL Server 2016 or higher, or Azure SQL Database, and have Query Store enabled, with the compatibility level set to any supported value (100 or higher).
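
As a simple sketch of that A/B comparison using the plans generated above, you can aggregate the runtime stats for each plan side by side; note that query_id = 1 is an assumption based on the earlier output, so substitute the query_id from your own instance:

SELECT
	[qsp].[plan_id],
	[qsp].[compatibility_level],
	SUM([rs].[count_executions]) AS [executions],
	AVG([rs].[avg_duration]) AS [avg_duration_us],
	AVG([rs].[avg_logical_io_reads]) AS [avg_logical_reads]
FROM [sys].[query_store_plan] [qsp]
JOIN [sys].[query_store_runtime_stats] [rs]
	ON [qsp].[plan_id] = [rs].[plan_id]
WHERE [qsp].[query_id] = 1 -- assumed from the earlier output; adjust for your instance
GROUP BY [qsp].[plan_id], [qsp].[compatibility_level];
GO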