Tell me how you use Profiler and Trace

Back in April I wrote a post asking why people tend to avoid Extended Events. Many of you provided feedback, which I greatly appreciated. A common theme in the responses was time: many people found that creating an event session in Extended Events took longer than creating a comparable one in Trace. OK, I get that…so as a follow-up, I’m interested in knowing how you typically use Profiler and Trace. If you would be willing to share that information, I would be extremely grateful. At a minimum, I’d like to know what events you add, what columns you select for those events, what filters you add, and then how you view the data (e.g., do you run Profiler and watch the data live, or do you script it out as a server-side trace, write it to a file, and then view it later for analysis?). If it’s easiest to post the trace definition (i.e., export it from the Profiler UI), that works too. Again, any information you can share in the comments (or via email, if you prefer) would be greatly appreciated. My goal is to create a library of event session definitions for these “typical” traces and share them publicly. Stay tuned for another post once I (hopefully!) start getting some information. Thanks again!

18 thoughts on “Tell me how you use Profiler and Trace”

  1. I normally use Profiler to capture long-running queries, or to capture a particular query so I can get the actual sproc call and run it myself to get the execution plan and look for tuning areas. Our application is stored procedure driven, so we rely on highly optimized procedures to maintain customer satisfaction.

    I normally use the following to capture the information for finding slow-running queries: RPC:Starting and RPC:Completed. I then add a filter for anything over 5,000 ms. Once I have those, I look at execution plans on multiple clients, compare, and then make my suggestions to Development.

    I also use it for capturing the blocked process report, deadlock chains, and the like.

    1. Thanks Jim! To confirm, your filter is duration greater than 5 seconds? Also, are you selecting all columns for the RPC:Starting and RPC:Completed events, or just some of them? And do you look at the data in the Profiler UI, or do you write it to a file and then look at it?

      Thanks again!

      Erin
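      A minimal Extended Events sketch of the long-running-query trace Jim describes above (the session name, file path, and action list are placeholders, and I’m assuming the filter is 5 seconds). Note that XE duration predicates are in microseconds, so 5 seconds is 5000000; rpc_starting is omitted here because rpc_completed already carries the statement text:

      CREATE EVENT SESSION [LongRunningRPC] ON SERVER
      ADD EVENT sqlserver.rpc_completed
      (
          ACTION (sqlserver.client_app_name, sqlserver.database_name, sqlserver.sql_text)
          WHERE duration > 5000000  -- 5 seconds, expressed in microseconds
      )
      ADD TARGET package0.event_file
      (SET filename = N'C:\XE\LongRunningRPC.xel', max_file_size = (512));  -- 512MB
      GO
      ALTER EVENT SESSION [LongRunningRPC] ON SERVER STATE = START;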

  2. We have lots of existing Profiler templates for different types of troubleshooting and/or detailed monitoring/auditing requirements (SQL Audit still has critical gaps) that we use as needed. XEvents is definitely the future, but right now the documentation is spotty (great for some events/components, crap for others), plus setting it up and consuming the output takes a lot more time and effort. Also, we use replay quite a bit, especially for upgrade testing (the current 2016 upgrade assistant looks like some community college freshman project). Microsoft just likes to start new things and not finish them for years, if ever (Distributed Replay? Policy-Based Management? Audit?)

    1. Angie – what documentation are you looking for specific to Extended Events? Maybe I can help. Also, what part of setup/consuming takes more time? Again, if you can provide one or two typical/example traces that you set up, perhaps I can help make it easier to use Extended Events.

      Also, I agree that there are some features Microsoft hasn’t finished. But Extended Events isn’t one of them – yes, Distributed Replay can’t process .xel files yet, but it can still process .trc files, and they are working on getting it to use .xel files.

      Thanks for the feedback!

      Erin

  3. Currently I have a process that is using XE to collect logins on database servers. It’s working OK. It took quite a bit to put together and understand. It outputs to a file, and I have another process that runs on a schedule to open the files and import the most recent data into a table, so that I can keep a history of this data and keep the files smaller, and thus quicker to fill and read. I noticed that when I had a lot of events collected, reading from the file was costly in time.

    I realized after having this automated for a few days that I did something wrong and was missing an event. So I am starting another XE session to collect those other events, and am asking myself the same questions: How do I archive this data? Why does it seem so slow to query the files when they fill up? (I know the obvious answers: it’s XML, lots of data, etc.)

    So I’m giving it a go. Granted, with Profiler it would be easier to configure and get data collecting. However, it doesn’t automate storing the historical data, so XE gives me more freedom to retain and use the data later, without human interaction via the Profiler UI. So it’s a win with compromise. And I’m getting better at it. So that’s a win.

    1. So TJay, you’ve moved on to XE completely? How large are the files you’re generating with XE? We usually recommend setting the max file size at 512MB, as any time you open a file with more than 1 million rows, it has to cache some rows out to disk (which is probably why it’s slow to read the data). Please let me know if there’s something with which I can help!

      1. I have the following settings in my asynchronous_file_target:
        max_file_size = 5,
        max_rollover_files = 10
        and when I have a session running on one of my boxes watching sqlserver.sql_batch_completed, sqlserver.rpc_completed, and sqlserver.existing_connection to capture who’s doing what, I am generating 5-10 files a minute. I have another process running every 2 minutes to collect the data, and am using datetime values to limit what is retrieved and deposited into the history table.
        When I was watching logins only, it collected a lot less data, so the file sizes are smaller, and the process to archive the data runs only about every 15 minutes.

        I haven’t gone completely over to the light side (using Extended Events), but am taking needs I have and trying my hand at XE for specific ones. If it’s something I need quickly, I often still use Profiler. But to code a process and automate it, I’ll try XE.

        Good to know the standards that you use; I was guessing. I had my file sizes really small. I may make them bigger and see how they act. It still seems slow to read the data. Do you read and store the data, or just store the data within the files? How do you consume what was processed by XE?
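        A sketch of the session TJay describes, built from the settings above (the file paths are placeholders, and asynchronous_file_target is the SQL Server 2008 name for the file target; in 2012 and later it is package0.event_file):

        CREATE EVENT SESSION [WhoIsDoingWhat] ON SERVER
        ADD EVENT sqlserver.existing_connection,
        ADD EVENT sqlserver.rpc_completed,
        ADD EVENT sqlserver.sql_batch_completed
        ADD TARGET package0.asynchronous_file_target
        (SET filename = N'C:\XE\WhoIsDoingWhat.xel',
             metadatafile = N'C:\XE\WhoIsDoingWhat.xem',  -- this older target pairs a .xem with the .xel
             max_file_size = 5,          -- specified in MB, so these files roll over quickly
             max_rollover_files = 10);   -- once 10 files exist, the 11th overwrites the 1st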

        1. TJay-
          If your file sizes are 5, is that 5MB? That seems pretty small to me. Also, if you only specify 10 rollover files, then once you have 10 files, the 11th will overwrite the first. You could be missing data (especially if you generate 5-10 files a minute and are collecting them every 2 minutes). I would increase the file size a bit.

          I read the data in the UI, and I just leave it in the files; I don’t push it to a table. But I’m not storing it long term. If you are, then it would make sense, to me, to push the data to a table (in a separate database) and then back it up regularly.

          E
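          For the table approach, a scheduled import typically reads the files with sys.fn_xe_file_target_read_file. Here is a minimal sketch with made-up table, path, and variable names; the payload comes back as XML to shred, which is a big part of why reading full files feels slow:

          DECLARE @LastCollected datetime2 = '2016-08-01T00:00:00';  -- placeholder high-water mark

          INSERT INTO HistoryDB.dbo.XEventHistory (event_name, event_time, event_xml)
          SELECT
              xef.object_name,                                    -- event name
              x.xed.value('(event/@timestamp)[1]', 'datetime2'),  -- event timestamp (UTC)
              x.xed                                               -- full payload, for shredding later
          FROM sys.fn_xe_file_target_read_file(N'C:\XE\Logins*.xel', NULL, NULL, NULL) AS xef
          CROSS APPLY (SELECT CAST(xef.event_data AS XML)) AS x(xed)
          WHERE x.xed.value('(event/@timestamp)[1]', 'datetime2') > @LastCollected;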

  4. Hey Erin

    So I use server-side traces, which I like to run from Agent, where I have some variables that let me say “let’s run this trace for 10 minutes only” and then it automatically stops. Anyway, I capture:

    SP:Starting, SP:Completed, SP:StmtStarting, SP:StmtCompleted, Audit Login, Audit Logout

    In terms of columns, pretty much most of them.

    I normally put one filter on – it targets the DBID that I am interested in.

    Output to TRC file.

    P.S. Let me know if you want some help with the library!
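    An approximate XE translation of this server-side trace, as a sketch (the database_id value, session name, and file path are placeholders): module_start/module_end correspond to SP:Starting/SP:Completed, sp_statement_starting/sp_statement_completed to the statement events, and login/logout to Audit Login/Audit Logout. The WAITFOR at the end mirrors the “run for 10 minutes, then stop” Agent pattern:

    CREATE EVENT SESSION [SPTraceDB5] ON SERVER
    ADD EVENT sqlserver.module_start            (WHERE sqlserver.database_id = 5),
    ADD EVENT sqlserver.module_end              (WHERE sqlserver.database_id = 5),
    ADD EVENT sqlserver.sp_statement_starting   (WHERE sqlserver.database_id = 5),
    ADD EVENT sqlserver.sp_statement_completed  (WHERE sqlserver.database_id = 5),
    ADD EVENT sqlserver.login,
    ADD EVENT sqlserver.logout
    ADD TARGET package0.event_file (SET filename = N'C:\XE\SPTraceDB5.xel');
    GO
    ALTER EVENT SESSION [SPTraceDB5] ON SERVER STATE = START;
    WAITFOR DELAY '00:10:00';  -- run for 10 minutes only
    ALTER EVENT SESSION [SPTraceDB5] ON SERVER STATE = STOP;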

  5. My team can’t move to extended events fast enough. We want to use it for everything from auditing to change management to verification. We are doing some of that on the 2008 R2 servers we are on but are looking for more with the 2014 version & up.

    1. Tim – that’s great to hear! I hope that you’ve been able to find information and examples to make that transition smooth! Note that you won’t want to use Extended Events for auditing – the auditing functionality was moved to the SQL Server Audit feature, so I definitely recommend using that instead of trying to use a variety of events in XE to audit your environment. Good luck and let us know if you have questions!

      Erin
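      Since auditing comes up a lot, here is a minimal SQL Server Audit sketch for capturing successful logins (the audit name, file path, and action group are illustrative, not a specific recommendation):

      USE master;
      GO
      CREATE SERVER AUDIT [LoginAudit]
      TO FILE (FILEPATH = N'C:\Audits\');
      GO
      CREATE SERVER AUDIT SPECIFICATION [LoginAuditSpec]
      FOR SERVER AUDIT [LoginAudit]
      ADD (SUCCESSFUL_LOGIN_GROUP)
      WITH (STATE = ON);
      GO
      ALTER SERVER AUDIT [LoginAudit] WITH (STATE = ON);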

  6. Honestly, I use Profiler primarily to reverse-engineer third-party vendor applications. Other times it’s simply to verify that a user is connected and running against the server when troubleshooting a larger issue.

  7. Hi,
    I often use Profiler to identify queries being run by applications that we don’t have the source code for, and therefore struggle to debug when we hit errors.

    I plan to use XE, but when we have a problem, we need an answer ASAP, so it’s difficult to justify the learning curve. I keep waiting for a quiet moment to properly start playing, but as is usually the case, that quiet moment never arrives.

    1. Hey Steve-

      Without listing the events and columns you capture (and any predicates), I can’t create the event session code for you (feel free to reply with that information). But, if you want to do that yourself, you can use Jonathan’s stored procedure: https://www.sqlskills.com/blogs/jonathan/converting-sql-trace-to-extended-events-in-sql-server-2012/

      The next time you run a trace, note the trace ID, and before you stop it, run the SP to have it generate the XE T-SQL for you:

      EXECUTE sp_SQLskills_ConvertTraceToExtendedEvents
          @TraceID = ,         -- fill in the trace ID here
          @SessionName = '',   -- name for the new event session
          @PrintOutput = 1,    -- print the CREATE EVENT SESSION script
          @Execute = 0;        -- generate the script only; don't create the session

      This will give you the syntax that you can then use to create the XE session the next time you want to try it out.
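      If you’re not sure of the trace ID, sys.traces lists every trace defined on the server; a quick check:

      SELECT id, path, start_time
      FROM sys.traces
      WHERE status = 1;  -- 1 = running, 0 = stopped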

      Hope that helps!
