What Events are in a Trace?

Yes…I’m writing a post about SQL Trace…specifically events in a trace. There are still lots of folks that use Profiler and run server-side traces, and when I work with those customers I like to understand how the traces are configured. I want to see what events are in a trace, but I also want to see the filters and if the traces are writing to a file or rowset provider. I wrote this query ages ago and went looking for it today, which means I really should write a post so it’s easier to find 🙂

Where to start

To determine the events in a trace, you first need to know the trace's ID, which is generated automatically when you create the trace. You can find the ID in sys.traces:

SELECT *
FROM sys.traces;

If I run this on my local instance, I get the output below:

sys.traces output, part I

sys.traces output, part II

What to review

I broke the output into two images because there are a few things to point out.  The trace with id = 1 is, almost always, the default trace.  I have yet to find a system where it is not, but it's possible (e.g. someone disables the default trace, or manages to get another trace to start up before the default trace does).  If it's writing to the LOG folder where SQL Server is installed, then I'm confident it's the default trace and I'm not worried about it.

The second trace, with id = 2, is one I created as a server-side trace.  Notice that for the trace with id = 3, the path is NULL.  This indicates that the trace is not writing to a file but to a rowset provider (which is why is_rowset = 1 for that row).  The rowset provider is typically, but not always, the Profiler UI.  When I see this for a client, I ask if they use a third-party monitoring tool, as this is how traces from third-party tools can appear.  Even if the customer has a monitoring tool, I want to see what events are being collected, and the filter(s).
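When I'm reviewing an unfamiliar instance, I'll often start by surfacing just the rowset traces.  A minimal sketch against the same sys.traces columns shown above:

-- List only traces writing to a rowset provider (Profiler, a monitoring
-- tool, etc.); reader_spid identifies the session consuming the events
SELECT id, status, start_time, last_event_time, reader_spid
FROM sys.traces
WHERE is_rowset = 1;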

Some other items of interest in the output:

  • start_time (when the trace started)
  • last_event_time (when the last event was captured by the trace)
  • event_count (number of events captured)
  • dropped_event_count (number of events not captured)

Notice that for the traces which write to a file, dropped_event_count is NULL.  When writing to a file, SQL Trace guarantees that it will not lose events.  This might sound great – you don't want to miss any data, right?  But that also means that if the volume of events captured exceeds SQL Trace's ability to write those events out to the file, the activities in SQL Server that generate those events may have to wait.  That is not a great thing.  With the rowset provider, on the other hand, you can lose data: if there is a 20-second stall where SQL Trace cannot push the events to the provider, it will start dropping events.
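To see whether an active rowset trace is losing data, you can check dropped_event_count directly.  A minimal sketch:

-- dropped_event_count is NULL for file traces; it only grows for
-- rowset traces whose consumer cannot keep up (status = 1 means running)
SELECT id, is_rowset, event_count, dropped_event_count
FROM sys.traces
WHERE status = 1;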

What events are in the trace

The third trace is the one in which we are interested.  It is writing to a rowset provider, so we want to determine the events in the trace and who is running it.  The query below lists the events and columns that are being captured, along with any filters that exist:

DECLARE @traceID INT = 3;

SELECT
     e.name AS event_name,
     c.name AS column_name,
     CASE
          WHEN f.logical_operator = 0 THEN 'AND'
          ELSE 'OR'
     END logical_operator,
     CASE
          WHEN f.comparison_operator = 0 THEN 'EQUAL'
          WHEN f.comparison_operator = 1 THEN 'NOT EQUAL'
          WHEN f.comparison_operator = 2 THEN 'GREATER THAN'
          WHEN f.comparison_operator = 3 THEN 'LESS THAN'
          WHEN f.comparison_operator = 4 THEN 'GREATER THAN OR EQUAL'
          WHEN f.comparison_operator = 5 THEN 'LESS THAN OR EQUAL'
          WHEN f.comparison_operator = 6 THEN 'LIKE'
          WHEN f.comparison_operator = 7 THEN 'NOT LIKE'
     END comparison_operator,
     f.value
FROM sys.fn_trace_geteventinfo(@traceID) t
JOIN sys.trace_events e
     ON t.eventid = e.trace_event_id
JOIN sys.trace_columns c
     ON t.columnid = c.trace_column_id
LEFT OUTER JOIN sys.fn_trace_getfilterinfo(@traceID) f
     ON c.trace_column_id = f.columnid
ORDER BY e.name, c.name;

Here is a subset of the output:

Events, columns, and filters for a trace

The good news is that there is a filter applied for the SP:StmtCompleted and SQL:StmtCompleted events.  The not-so-good news is that the Showplan XML event is being captured.  This event adds tremendous overhead to a system and is not recommended for a production environment.  Now I really want to know who is running this trace.

If you remember, back in the sys.traces output we had reader_spid, with a value of 78 for id = 3.  We can take that and query sys.dm_exec_sessions to get more information:

SELECT host_name, program_name, login_name
FROM sys.dm_exec_sessions
WHERE session_id = 78;

Output from sys.dm_exec_sessions

I now know the machine from which the rowset provider (in this case, Profiler) is running, and I know who is running it.  I can also look through the ERRORLOG to see who last started a trace with that ID, but I’m lazy and the query is easier.  In the ERRORLOG you will see a message:

SQL Trace ID 3 was started by login "HEDWIG\Erin"

A similar message is added when the trace is stopped.
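You can also tie the two queries together.  A quick sketch that joins sys.traces to sys.dm_exec_sessions, so every rowset trace and its consumer show up in one result set:

-- Find every rowset trace and the session reading from it
SELECT t.id, t.start_time, s.host_name, s.program_name, s.login_name
FROM sys.traces t
JOIN sys.dm_exec_sessions s
     ON t.reader_spid = s.session_id
WHERE t.is_rowset = 1;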


If SQL Trace is used in production environments, it is important to be able to identify not just the events captured in a trace, but also the filters and, to a lesser extent, the columns.  The query above is the quickest way to determine the components of a trace, as they are not available in any UI.  The exception is Profiler itself, and there you can only see the definition of a trace you started.
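And if you decide a trace like this should not be running at all (for example, one capturing Showplan XML against production), you can stop it with sp_trace_setstatus, using the ID from sys.traces.  A sketch, assuming trace 3 is the offender:

-- Stop the trace (status 0), then close it and delete its definition (status 2)
EXEC sp_trace_setstatus @traceid = 3, @status = 0;
EXEC sp_trace_setstatus @traceid = 3, @status = 2;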

Filtering Events in Trace/Profiler and Extended Events

It seems very wrong to write a post that talks about Trace, after all I've done to advocate that everyone start using Extended Events (XE). However, I know there are a lot of you who still use Trace because you're running SQL Server 2008 R2 or earlier, so you all get a free pass. Until you upgrade. If you're running SQL Server 2012 or higher, I strongly recommend that you use XE. But that's not the main point of this post. What I really want to do is step through filtering a .trc or .xel file to remove selected events.


Now, if you’ve worked with Trace for a long time, you may be wondering why you would ever filter events because, let’s be honest, you might not know you can do that (I didn’t for ages).  You can! The Profiler UI isn’t where you typically perform data analysis, but one reason you might filter out events is if you’re using Distributed Replay and you need to remove events to avoid generating errors during the replay. To do this, open the .trc file in the Profiler UI, then select File | Properties… Within the Trace File Properties window, go to the Events Selection tab, then select Column Filters… Within the Edit Filter window, you can choose a column (or multiple columns) on which to filter your data. If I want to remove all events before a specific time, I would edit the EndTime:

EndTime filter in Trace

This removes all events before 11:31PM on November 12, 2015. After you have removed the events, you can save the remaining data as a new .trc file through File | Save As | Trace File…
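If you prefer to stay in Management Studio, you can apply the same EndTime filter in T-SQL with sys.fn_trace_gettable, which reads a .trc file as a rowset.  A sketch, with a hypothetical file path, assuming the trace captured the EndTime column:

-- Read the .trc file and keep only events at or after the cutoff;
-- DEFAULT tells the function to read any rollover files as well
SELECT *
FROM sys.fn_trace_gettable(N'C:\Temp\MyTrace.trc', DEFAULT)
WHERE EndTime >= '2015-11-12 23:31:00';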

Extended Events

Filtering events from an Extended Events file is even easier. Open the .xel file within Management Studio, then select Extended Events | Filters (you can also select the Filters icon in the Extended Events toolbar). Within the Filters window, you can choose to filter on time, and/or any of the other fields captured for the events:

Date and logical_reads filter in Extended Events

Once you select Apply, all events before 11:31 PM will be removed, as well as those with fewer than 1000 logical_reads. The remaining events can again be saved to a new file (Extended Events | Export to | XEL File…), or you can just run analysis against the filtered data. You can remove the filter at any time by going back to the Filters window and selecting Clear All.
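There is a T-SQL route here too: sys.fn_xe_file_target_read_file reads a .xel file and returns each event as XML, so the same time and logical_reads predicates can be applied with XQuery.  A sketch, with a hypothetical file path, assuming the events captured include a logical_reads field (note that XE timestamps are stored in UTC):

-- Read the .xel file(s), shred the event XML, and apply the same two filters
SELECT x.event_xml.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       x.event_xml.value('(event/data[@name="logical_reads"]/value)[1]', 'bigint') AS logical_reads
FROM (
     SELECT CAST(event_data AS XML) AS event_xml
     FROM sys.fn_xe_file_target_read_file(N'C:\Temp\MyTrace*.xel', NULL, NULL, NULL)
) AS x
WHERE x.event_xml.value('(event/@timestamp)[1]', 'datetime2') >= '2015-11-12 23:31:00'
  AND x.event_xml.value('(event/data[@name="logical_reads"]/value)[1]', 'bigint') >= 1000;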


Hopefully this helps if you ever need to remove events from a .trc or .xel file.  Note that I always save the filtered data as a new file – I prefer to keep the original just in case I need all the events for some reason.