Regularly when working with clients, I implement a basic baseline package to collect key metrics. At a minimum I like to capture CPU utilization, page life expectancy (PLE), disk I/O, wait statistics, and a database inventory. This allows me to track and trend what normal CPU utilization looks like, and collecting PLE lets me see whether the server is under memory pressure during the day. Disk I/O metrics are good to have so you know what normal latency looks like at certain intervals of the day. Capturing wait statistics at regular intervals is a must for tracking down any perceived performance issues (please tell me where it hurts!).
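All of these metrics can be pulled from SQL Server's dynamic management views. As a rough sketch, not the exact queries in my stored procedures, the collection boils down to snapshots like these (note that wait stats and file stats are cumulative since the last restart, so trending requires capturing them at intervals and diffing):

```sql
-- Page life expectancy from the Buffer Manager performance counters
SELECT counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';

-- Top waits since the last restart (filter benign waits as you see fit)
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

-- Per-file I/O latency: cumulative stall time divided by I/O count
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
```

A scheduled job that inserts results like these into dated tables is all a baseline really needs.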
A quick internet search will turn up several packages that others have created and blogged about. All the ones I’ve found collect some of the same metrics I collect, but not quite the way I like to look at them. My process is simple, the way I like it. I manually create a database called SQLskillsMonitor so that I can place it where it should reside. I then run my PopulateDB script to create the tables and stored procedures. Next I run the CreateJobs script to create the SQL Server Agent jobs; each job is prefixed with SQLskills-Monitoring-… I also include a purge job that you can modify to delete any data older than a specified time frame; I have it defaulted to 90 days. That is it. No fuss and no complicated requirements.
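The core of the purge job is just a dated DELETE against each collection table. A minimal sketch, using a hypothetical dbo.WaitStats table with a CaptureDate column (the actual table and column names in my scripts may differ):

```sql
USE SQLskillsMonitor;
GO
-- Retention window; adjust to taste (the job defaults to 90 days).
-- dbo.WaitStats and CaptureDate are illustrative names, not the real schema.
DECLARE @RetentionDays int = 90;

DELETE FROM dbo.WaitStats
WHERE CaptureDate < DATEADD(DAY, -@RetentionDays, SYSDATETIME());
```

Repeat a statement like this per collection table inside the Agent job step, and the database stays at a predictable size.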
You are welcome to customize the process any way you like. You can change the DB name, add additional tables and stored procedures to collect anything else you find useful. I’ve included the key things I like to track for performance baselines as well as baselines for upgrades and migrations. If you do add additional metrics, come back and comment on this post and share what you’ve done so others can benefit too.
This should in no way replace or eliminate the need for a full third-party monitoring solution like SentryOne’s SQL Sentry, but in a pinch, when my clients don’t have anything else in place, this helps.