#TSQL2sday: My Life Hacks

 

T-SQL Tuesday

Jess Pomfret (newly awarded MVP!) is hosting this month’s T-SQL Tuesday, and asked us to share our favorite life hacks – something that makes our day easier.  I have two that I consider indispensable.

The first is Toggl, a free app that Kendra Little introduced me to years ago.  As a consultant, I have to track my time so we can accurately bill customers.  Initially I tried to write it down on paper, but I ended up with MANY pieces of paper scattered across my desk.  Organizing those at the end of the week was not fun.  Then I tried Excel.  Nope, too much work.  Enter Toggl:

Tracking Time in Toggl

It’s extremely easy to use, it synchronizes with my account so the information is available online if I don’t have my laptop for some reason and need to know the time spent on a project, and as I said, it’s free.  There is a paid version with a lot of cool features that I don’t even use or know about.  It just works.

My second hack is movies.  If I’m not on a call or dialed in to a customer, I have a movie playing in the background.  It’s always a movie I know well, and a movie I really like.  A lot.  I mean, I’ve listened to some of these hundreds of times.

A few movies…

When it’s completely quiet, I find it harder to focus.  I need something in the background to keep some part of my brain just a little bit distracted, which somehow makes everything else clear.  I just play it on my phone, no TV, no fancy speakers.  There is probably science behind it – sometimes music works, but mostly you can find me listening to any Marvel movie, Apollo 13, The Martian, Miracle, Top Gun, A Few Good Men, or Band of Brothers (not a movie, but close enough).

While typing, I thought of a third hack…my standing desk mat.  I have a “plain” one, but my husband got me this one a year or so ago and I really like it because it has different shapes and textures for my feet.  Sometimes it’s the little things.

Are Changes to Query Store Logged?

The Query Store feature is a bit unique in that its status can change without user intervention, which means it is important to understand how changes to Query Store are logged.  A couple weeks ago John Deardurff posted a question on Twitter asking specifically whether the SQL Server ERRORLOG is written to when the OPERATION_MODE changes to READ_ONLY because MAX_STORAGE_SIZE_MB is exceeded.  I had never tested to confirm, but I know there is an event in Extended Events that fires when the limit is reached.  I also know that when a user makes a change to a Query Store setting, it is logged in the ERRORLOG.

User Changes

With a copy of WideWorldImporters restored in my SQL Server 2019 VM, I will create an Extended Events session with a few events I think will be relevant:

CREATE EVENT SESSION [XEStatusChange]
ON SERVER
ADD EVENT qds.query_store_db_settings_changed,
ADD EVENT qds.query_store_disk_size_info,
ADD EVENT qds.query_store_disk_size_over_limit
ADD TARGET package0.event_file(SET filename=N'C:\temp\XEStatus',max_file_size=(256))
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF);
GO
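
One note: with STARTUP_STATE=OFF, CREATE EVENT SESSION only defines the session; it has to be started before it will capture anything:

ALTER EVENT SESSION [XEStatusChange]
ON SERVER
STATE = START;
GO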

 

Next, we will change MAX_STORAGE_SIZE_MB to just 5 MB.  Note that this is an extremely low value that I do not recommend; it is only this small so we can hit the limit quickly for this demo.

USE [master];
GO
ALTER DATABASE [WideWorldImporters]
SET QUERY_STORE (
OPERATION_MODE = READ_WRITE,
MAX_STORAGE_SIZE_MB = 5
);
GO

 

If we look at the output from Extended Events, we can see that the change to Query Store is logged:

Extended Events output after Query Store MAX_STORAGE_SIZE changed to 5MB via TSQL

If we check the ERRORLOG, information about the change exists there as well:

ERRORLOG entry showing Query Store setting change

Finally, a quick peek at the Query Store pane in SSMS confirms the settings:

Query Store Settings

Another quick note: these are not the values I would recommend for a production system.  For more information on settings, check out my Query Store Settings post.
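
If you prefer T-SQL to the UI, the same settings are exposed in sys.database_query_store_options; a quick check from the context of the user database:

USE [WideWorldImporters];
GO
SELECT
     actual_state_desc,
     desired_state_desc,
     current_storage_size_mb,
     max_storage_size_mb
FROM sys.database_query_store_options;
GO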

System Changes

Now we will explore what gets logged when the OPERATION_MODE setting changes from READ_WRITE to READ_ONLY.  We will induce this by running a very ad hoc workload that generates thousands of queries with different literal values.  The code exists in the post Examining the Performance Impact of an Ad Hoc Workload, and we will run usp_RandomSelects from multiple threads with the Extended Events session still running.

Within a few minutes, the following events show up in the Live Data View:

Extended Events output after MAX_STORAGE_SIZE is exceeded

If we check the ERRORLOG, there are no new entries related to Query Store settings.
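
If the Live Data View is closed, the same events can be read back from the event_file target after the fact.  A quick sketch, pointed at the file path from the session definition above:

SELECT
     CAST(event_data AS XML).value('(event/@name)[1]', 'nvarchar(128)') AS event_name,
     CAST(event_data AS XML).value('(event/@timestamp)[1]', 'datetime2') AS event_time
FROM sys.fn_xe_file_target_read_file(N'C:\temp\XEStatus*.xel', NULL, NULL, NULL)
ORDER BY event_time;
GO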

Summary

As you can see, changes to Query Store are logged differently, depending on the type of change.  Any Query Store option changed by a user with the ALTER DATABASE command is captured in the ERRORLOG; the query_store_db_settings_changed event also fires.  When OPERATION_MODE is changed by the system because the storage allocated to Query Store is exceeded, the change is not captured in the ERRORLOG, but it is captured by the query_store_disk_size_over_limit event.  In addition, the query_store_disk_size_info event, which fires on a regular basis, will also report the max and current size values.  If you want to know when Query Store changes to a READ_ONLY state, you will need to set up an Extended Events session to capture it.  To be more proactive, I recommend setting up an Agent job that runs on a regular basis to monitor the space used and sends an email if it exceeds a certain percentage.
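
As a starting point for that Agent job, here is a minimal sketch; the 80 percent threshold, the Database Mail profile, and the recipient are placeholders you would adjust for your environment:

USE [WideWorldImporters];
GO
DECLARE @pct DECIMAL(5,2);

-- Percentage of the configured Query Store space currently in use
SELECT @pct = 100.0 * current_storage_size_mb / max_storage_size_mb
FROM sys.database_query_store_options;

IF @pct >= 80.0  -- placeholder threshold
BEGIN
     EXEC msdb.dbo.sp_send_dbmail
          @profile_name = N'DBA Mail',  -- placeholder Database Mail profile
          @recipients = N'dba@example.com',  -- placeholder recipient
          @subject = N'Query Store space alert',
          @body = N'Query Store is over 80 percent of MAX_STORAGE_SIZE_MB.';
END
GO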

What Events are in a Trace?

Yes…I’m writing a post about SQL Trace…specifically events in a trace. There are still lots of folks who use Profiler and run server-side traces, and when I work with those customers I like to understand how the traces are configured. I want to see what events are in a trace, but I also want to see the filters, and whether the trace is writing to a file or a rowset provider. I wrote this query ages ago and went looking for it today, which means I really should write a post so it’s easier to find 🙂

Where to start

In order to determine the events in the trace, you first need to know the ID for the trace, which is automatically generated when you create the trace.  You can find the ID in sys.traces:

SELECT *
FROM sys.traces;
GO

If I run this on my local instance, I get the output below:

sys.traces output, part I

sys.traces output, part II

What to review

I broke the output into two images because there are a few things to point out.  The trace with id = 1 almost always represents the default trace.  I have yet to find a system where it is not, but it’s possible (e.g. someone disables the default trace, or manages to get another trace to start up before the default trace).  If it’s writing to the LOG folder where SQL Server is installed, then I’m confident it’s the default trace and I’m not worried about it.

The second trace, with id = 2, is one I created as a server-side trace. Notice that for the trace with id = 3, the path is NULL.  This indicates that the trace is not writing to a file, but to a rowset provider (this is why is_rowset = 1 for that row).  The rowset provider is typically, but not always, the Profiler UI.  When I see this for a client, I ask if they use a third-party monitoring tool (as this is how traces from third-party tools can appear).  Even if the customer has a monitoring tool, I want to see what events are being collected, and the filter(s).

Some other items of interest in the output:

  • start_time (when the trace started)
  • last_event_time (when the last event was captured by the trace)
  • event_count (number of events captured)
  • dropped_event_count (number of events not captured)

Notice that for the traces which write to a file, dropped_event_count is NULL.  When writing to a file, SQL Trace guarantees that it will not lose events.  This might sound great – you don’t want to miss any data, right?  But…that also means that if the volume of events captured exceeds SQL Trace’s ability to write those events out to the file, then the activities in SQL Server that generate those events might have to wait.  That is not a great thing.  Note that with the rowset provider, you can lose data.  If there is a 20 second stall where SQL Trace cannot push the events to the provider, it will start dropping events.
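
To put a number on it, here is a quick sketch against sys.traces that estimates the percentage of events dropped for rowset-based traces (this assumes dropped events are counted separately from event_count):

SELECT
     id,
     event_count,
     dropped_event_count,
     CAST(100.0 * dropped_event_count /
          NULLIF(event_count + dropped_event_count, 0) AS DECIMAL(5,2)) AS pct_dropped
FROM sys.traces
WHERE is_rowset = 1;
GO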

What events are in the trace

The third trace is the one in which we are interested.  It is writing to a rowset provider, so we want to determine the events in the trace and who is running it.  The query below lists the events and columns that are being captured, along with any filters that exist:

DECLARE @traceID INT = 3;  -- the ID of the trace of interest, from sys.traces

SELECT
     e.name,
     c.name,
     CASE
          WHEN f.logical_operator = 0 THEN 'AND'
          ELSE 'OR'
     END logical_operator,
     CASE
          WHEN f.comparison_operator = 0 THEN 'EQUAL'
          WHEN f.comparison_operator = 1 THEN 'NOT EQUAL'
          WHEN f.comparison_operator = 2 THEN 'GREATER THAN'
          WHEN f.comparison_operator = 3 THEN 'LESS THAN'
          WHEN f.comparison_operator = 4 THEN 'GREATER THAN OR EQUAL'
          WHEN f.comparison_operator = 5 THEN 'LESS THAN OR EQUAL'
          WHEN f.comparison_operator = 6 THEN 'LIKE'
          WHEN f.comparison_operator = 7 THEN 'NOT LIKE'
     END comparison_operator,
     f.value
FROM sys.fn_trace_geteventinfo(@traceID) t
JOIN sys.trace_events e
     ON t.eventid = e.trace_event_id
JOIN sys.trace_columns c
     ON t.columnid = c.trace_column_id
LEFT OUTER JOIN sys.fn_trace_getfilterinfo(@traceID) f
     ON c.trace_column_id = f.columnid
ORDER BY e.name, c.name;
GO

Here is a subset of the output:

Events, columns, and filters for a trace

The good news is that there is a filter applied for the SP:StmtCompleted and SQL:StmtCompleted events.  The not-so-good news is that the Showplan XML event is being captured.  This event adds tremendous overhead to a system and is not recommended for a production environment.  Now I really want to know who is running this trace.

If you remember, back in the sys.traces output we had reader_spid, with a value of 78 for id = 3.  We can take that and query sys.dm_exec_sessions to get more information:

SELECT host_name, program_name, login_name
FROM sys.dm_exec_sessions
WHERE session_id = 78;
GO

Output from sys.dm_exec_sessions

I now know the machine from which the rowset provider (in this case, Profiler) is running, and I know who is running it.  I can also look through the ERRORLOG to see who last started a trace with that ID, but I’m lazy and the query is easier.  In the ERRORLOG you will see a message:

SQL Trace ID 3 was started by login "HEDWIG\Erin"

A similar message is added when the trace is stopped.
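
If you want to search the ERRORLOG from T-SQL instead of scrolling through it, the undocumented (but long-standing) xp_readerrorlog accepts search strings:

-- 0 = current log, 1 = SQL Server error log; both strings must match
EXEC sys.xp_readerrorlog 0, 1, N'SQL Trace ID', N'started';
GO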

Summary

If SQL Trace is used in production environments, it is important to be able to identify not just what events are captured in a trace, but also the filters and, to a lesser extent, the columns.  The query above is the quickest way to determine the components of a trace, as they are not available in any UI.  The exception is Profiler, where you can only see the definition of a trace you started yourself.
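
And if you decide a trace like this should not be running (the Showplan XML capture above is a good example), sp_trace_setstatus will stop it server-side; a sketch, assuming you have confirmed with whoever owns the reader session first:

EXEC sp_trace_setstatus @traceid = 3, @status = 0;  -- 0 = stop the trace
EXEC sp_trace_setstatus @traceid = 3, @status = 2;  -- 2 = close the trace and delete its definition
GO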