Using relog: Creating a smaller file

Today I had to remember the steps for using relog. If you’ve never heard of relog, stay with me. It’s a tool for merging PerfMon files, and also for extracting smaller data sets from PerfMon. I still love PerfMon, friends, I admit it. It’s been a staple for understanding performance on a Windows Server since I started in IT. But I haven’t used relog in forever, so I had completely forgotten the commands I needed.

The problem

I provided a PerfMon template to a customer a few weeks ago and asked them to start capturing data.  I requested that they roll over the file daily, but that request got lost in the shuffle.  They manually stopped and restarted the data collector intermittently, so when they sent me the file to review data from an issue they had over the weekend, it was almost 5GB in size.  Oof.  I loathe opening large PerfMon files in the UI because it takes a long time to load.  This is where relog came in, because I only needed to look at about an hour’s worth of data.

Finding the command I needed

I did a quick Google search looking for the syntax, and landed on a post from my friend Andy Galbraith, Handling Perfmon Logs with Relog.  It’s a useful post if you want to merge multiple files, and I recommend it if you need that.  I needed to separate out my data.

My very first SQL Server presentation was on the topic of baselines, and I used to demo relog.  I had to have the code somewhere, right?  Nope.  I did find my old presentation and wow…that needs updating.  Then I searched my old blog and found a post where I had the syntax.  Yay, past me!

However, no one else would ever find that post, so I thought I’d share the syntax here.  Also, if you go to the official relog documentation, the syntax is there, but it’s not quite correct.

Using relog to extract data based on a date range

The parameters are -b for begin time and -e for end time.  And for both parameters the documentation states:

Date syntax for -b and -e

Do you see a space between the year and the hour?  Me neither.  This syntax fails:

relog SQLskills_DataCollector.blg -b 3/21/202019:00:00 -e 3/21/202020:00:00 -o SQLskills_March21.blg

relog error with no space between date and time

But if I add a space between the date and the time:

relog SQLskills_DataCollector.blg -b 3/21/2020 19:00:00 -e 3/21/2020 20:00:00 -o SQLskills_March21.blg

It works:

correct relog command

And with that, I’m off to analyze some data!

Baselines for SQL Server and Azure SQL Database

Last week I got an email from a community member who had read this older article of mine on baselining, and asked if there were any updates related to SQL Server 2016, SQL Server 2017, or vNext (SQL Server 2019). It was a really good question. I haven’t visited that article in a while and so I took the time to re-read it. I’m rather proud to say that what I said then still holds up today.

The fundamentals of baselining are the same as they were back in 2012 when that article was first published. What is different about today? First, there are a lot more metrics in the current release of SQL Server that you can baseline (e.g. more events in Extended Events, new DMVs, new PerfMon counters, sp_server_diagnostics_component_results). Second, options for capturing baselines have changed. In the article I mostly talked about rolling your own scripts for baselining. If you’re looking to establish baselines for your servers, you still have the option to develop your own scripts, but you also can use a third-party tool, and if you’re running SQL Server 2016+ or Azure SQL Database, you can use Query Store.
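
If you want to try Query Store as part of a baselining strategy, enabling it is a single ALTER DATABASE statement.  Here’s a minimal sketch; the database name and option values below are just placeholders, so adjust them for your workload and version:

ALTER DATABASE [YourDatabase]              -- placeholder database name
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,          -- actively capture and persist data
     MAX_STORAGE_SIZE_MB = 1024,           -- cap on Query Store space within the database
     QUERY_CAPTURE_MODE = AUTO,            -- skip insignificant ad hoc queries
     INTERVAL_LENGTH_MINUTES = 60);        -- aggregation window for runtime stats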

As much as I love Query Store, I admit that it is not all-encompassing in terms of baselining a server. It does not replace a third-party tool, nor does it fully replace rolling your own scripts. Query Store captures metrics specific to query execution, and if you’re not familiar with this feature, feel free to check out my posts about it.

Consider this core question: What should we baseline in our SQL Server environment? If you have a third-party tool, the data captured is determined by the application, and some of them allow you to customize and capture additional metrics. But if you roll your own scripts, there are some fundamental things that I think you should capture, such as instance configuration, file space and usage information, and wait statistics.
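
As a rough illustration of those fundamentals, here’s a minimal sketch of the kind of queries you could start with.  These are not my finished scripts, just examples against the standard catalog views and DMVs:

-- Instance configuration
SELECT name, value_in_use
FROM sys.configurations
ORDER BY name;

-- File size per database (size is reported in 8KB pages)
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       type_desc,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
ORDER BY database_name, type_desc;

-- Wait statistics accumulated since the last restart (or since the stats were cleared)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;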

Beyond that, it really goes back to the question: what problem are you trying to solve? If you are looking at implementing In-Memory OLTP, then you want to capture information related to query execution times and frequency, locking, latching, and memory use. After you implement In-Memory OLTP, you look at those exact same metrics and compare the data. If you’re looking at using Columnstore indexes, you need to look at query performance as it stands right now (duration, I/O, CPU) and capture how it changes after you’ve added one or more Columnstore indexes. But to be really thorough, you should also look at index usage for the involved tables, as well as query performance for other queries against those tables, to see if and/or how performance changes after you’ve added the index. Very few things in SQL Server work truly in isolation; they’re all interacting with each other in some way…which is why baselining can be a little bit overwhelming, and why I recommend that you start small.
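
For the index usage piece mentioned above, a query along these lines (the table name is just a placeholder), captured before and after you add the Columnstore index, gives you something concrete to compare:

-- Usage counts for all indexes on one table in the current database
SELECT OBJECT_NAME(ius.object_id) AS table_name,
       i.name AS index_name,
       ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
  AND ius.object_id = OBJECT_ID(N'dbo.YourTable');   -- placeholder table name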

Back to the original question: is there anything new to consider with SQL Server 2016 and higher? While third-party tools continue to improve and more metrics are available as new features are added and SQL Server continues to evolve, the only thing “really new” is the addition of Query Store and its ability to capture query performance metrics natively within SQL Server. Hopefully this helps as you either look at different third-party tools that you may want to purchase, or you look at rolling your own set of scripts.  If you’re interested in writing your own scripts, I have a set of references that might be of use here.

Lastly, you’ll note that I haven’t said much about Azure SQL Database, and that’s because it’s an entirely different beast.  If you have one or more Azure SQL Databases, then you may know that within the Portal there are multiple options for looking at system performance, including Intelligent Insights and Query Performance Insight.  Theoretically, you could still roll your own scripts in Azure SQL DB, but I would first explore what Microsoft provides to see if it meets your needs.  Have fun!

Collection of Baseline Scripts

The topic of baselines in SQL Server is one that I’ve had an interest in for a long time.  In fact, the very first session I ever gave back in 2011 was on baselines.  I still believe they are incredibly important, and most of the data I capture is still the same, but I have tweaked a couple of things over the years.  I’m in the process of creating a set of baseline scripts that folks can use to automate the capture of this information, in the event that they do not have/cannot afford a third-party monitoring tool (note: a monitoring tool such as SQL Sentry’s Performance Advisor can make life WAY easier, but I know that not everyone can justify the need to management).  For now, I’m starting with links to all relevant posts, and then I’ll update this post once I have everything finalized.

These scripts are just a starting point for what to monitor.  One thing I like to point out in our IEPTO2: Performance Tuning and Optimization course is that there is A LOT of data you can capture related to your SQL Server environment.  Your options include Performance Monitor (using Custom Data Collectors), queries captured via Extended Events or Trace (depending on version), and any data from the DMVs or system views within SQL Server. You have to decide what to capture based on

1) What problem you might be trying to solve in your environment, and

2) What information is most important for you to have.  Start simple, and then work your way up.

Figure out the one or two most critical things to capture, and start there, and then add on.
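
If you do roll your own, one way to start small is to snapshot a single DMV into a table on a schedule.  This is just a sketch with a made-up table name, using wait statistics as the example:

-- One-time setup: a table to hold the snapshots (hypothetical name)
CREATE TABLE dbo.WaitStatsBaseline
(
    capture_time        DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    wait_type           NVARCHAR(60) NOT NULL,
    waiting_tasks_count BIGINT       NOT NULL,
    wait_time_ms        BIGINT       NOT NULL,
    signal_wait_time_ms BIGINT       NOT NULL
);

-- Run on a schedule (e.g. a SQL Agent job) to build history over time
INSERT INTO dbo.WaitStatsBaseline
    (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats;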

If you find there’s something missing from my scripts, let me know and I’ll try to get it added!

Monitoring in general

Configuration Information

Disk and I/O

Maintenance

Errors

Performance

Pluralsight