SSMU Web Seminar: Performance Tuning Series
Part III: Optimizing Procedure Performance (Part I)

Kimberly L. Tripp

 

Q & A

 

Q: I've set up Profiler traces before and would like to set up a trace to start and stop at a specific time. I don't see start time as an option in Profiler. How do I go about setting up a job to perform a trace during a specific time window?

The steps for automating traces lie somewhat "outside" of (or are used in addition to) the Profiler tool, and because of this they are probably hard to find! The key thing to remember is that a trace can be scripted. The option to script a trace is under the File menu in Profiler: choose File, Script Trace, For SQL Server 2000 (or For SQL Server 7.0 if needed; the two generate very different trace scripts). When the script is run, the trace is created and assigned a TraceID. Once you have the TraceID, you can use the sp_trace_setstatus stored procedure to start and stop the trace. Additionally, you can use SQL Agent to automate the starting and stopping of a trace, and programmatically you can use WAITFOR DELAY to put periodic starts and stops into an automated job. There are lots of topics to review here, and a great place to start is the webcast. Once through the webcast, check out the two Books Online topics below. Together these give you all of the pieces for automating Profiler traces!

 

Make sure to watch the SQL Server Webcast titled “What's New and How to Effectively Use Profiler” at http://support.microsoft.com/servicedesks/Webcasts/WC111400/wcblurb111400.asp

 

Check out these topics in the SQL Server Books Online:

- Starting, Pausing, and Stopping Traces mk:@MSITStore:C:\Program%20Files\Microsoft%20SQL%20Server\80\Tools\Books\adminsql.chm::/ad_mon_perf_0jaq.htm

- sp_trace_setstatus mk:@MSITStore:C:\Program%20Files\Microsoft%20SQL%20Server\80\Tools\Books\tsqlref.chm::/ts_sp_ta-tz_0tnt.htm
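As a quick sketch of how the pieces above fit together (the trace ID value and the one-hour delay are assumptions for illustration; the status codes are from the sp_trace_setstatus topic in Books Online):

```sql
-- Assume the scripted trace returned TraceID = 2 when it was created.
-- sp_trace_setstatus takes the trace ID and a status:
--   0 = stop the trace, 1 = start it, 2 = close it and delete its definition
DECLARE @TraceID int
SET @TraceID = 2                      -- hypothetical ID from the scripted trace

EXEC sp_trace_setstatus @TraceID, 1   -- start the trace

WAITFOR DELAY '01:00:00'              -- let it run for one hour

EXEC sp_trace_setstatus @TraceID, 0   -- stop the trace
EXEC sp_trace_setstatus @TraceID, 2   -- close it and remove the definition
```

In a SQL Agent job you would more likely split the start and stop into two scheduled job steps rather than hold a connection open with WAITFOR DELAY.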

 

Q: When running the 3 step code set to clear cache and buffers, which database do you run it against (e.g. master, or each affected db)? 

 

First, the "3 step code" to which this question refers is:

CHECKPOINT            -- Executed within an active database, this
                      -- statement will force dirty pages (pages
                      -- which have changed since they were read
                      -- into cache) to be written (and therefore
                      -- "cleaned") to disk. The process of
                      -- "cleaning" the pages does not remove them
                      -- from cache but instead allows them to be
                      -- removed.

DBCC DROPCLEANBUFFERS -- This removes the "clean" buffers.

DBCC FREEPROCCACHE    -- This clears all plans from cache.

 

If you want to see what’s currently in cache use the system table syscacheobjects. A typical query could be:

 

SELECT *
FROM master.dbo.syscacheobjects
WHERE cacheobjtype = 'Executable Plan'

 

So the comments in the code above somewhat answer the question already! CHECKPOINT applies only to the database in which it is executed; the two DBCC statements apply server-wide. If you have numerous active databases, it is likely that you will want to execute CHECKPOINT in more than one of them. The rights to execute CHECKPOINT fall to the database owner or higher (meaning sysadmin). In general, the ability to execute many DBCC statements is reserved for system administrators; however, a server administrator can also execute DBCC FREEPROCCACHE.
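A minimal sketch of the full sequence across several databases (the database names here are hypothetical; since CHECKPOINT runs in the current database, the sketch switches context inside dynamic SQL):

```sql
-- Run CHECKPOINT in each active database of interest (names are examples);
-- USE inside EXEC() only changes context for that dynamic batch.
EXEC ('USE Sales;     CHECKPOINT')
EXEC ('USE Inventory; CHECKPOINT')

-- Then clear the caches server-wide.
DBCC DROPCLEANBUFFERS   -- removes the now-clean data pages from cache
DBCC FREEPROCCACHE      -- clears all plans from the procedure cache
```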

 

Q: So if you add an index to a table, should you run sp_recompile on that table?

Since sp_recompile is a one-time operation and very inexpensive (remember, it doesn't actually recompile anything; it just invalidates the cached plans), I would recommend it any time you've made optimization changes from which you think the procedures will benefit. There are some cases where SQL Server does this automatically behind the scenes; for example, when a table's schema changes, the procedures that reference it are automatically recompiled. Check out the sample script "Views&Procs with schema changes.sql" for an example of a procedure recompile due to a schema change. Adding indexes, however, does not force a recompile, so in that case running sp_recompile yourself is a good idea.
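For example, after adding an index to a (hypothetical) Orders table, invalidating the dependent plans is a single call:

```sql
-- Hypothetical table and index names, for illustration only.
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)

-- Invalidate the cached plans of all procedures and triggers that
-- reference the table; each gets a fresh plan on its next execution.
EXEC sp_recompile 'dbo.Orders'
```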

 

Q: Do you have any suggestions what to do about a lot of existing user procs with prefix sp_ ?

This is a tough one because it's hard to say how much work it's going to be. If you only have 10 or so procedures then it's not too bad... What might be best is to create a couple of scripts: one which is all of the DROP PROCEDURE statements (don't run it yet!) and another which includes all of the drops, the creates, and the permissions. (Although the drops are not strictly necessary, they help if you make a few mistakes; you can keep running the whole script, dropping and re-creating, until everything works.) Also, you really only want to do all of the search-and-replace work once! In this process I'd make sure you do a couple of things:

- Update the names to NOT use sp_; a simple change would be to replace every "sp_" prefix with "usp_" for user-defined stored procedure. Make sure to check for any system procedures that still require the sp_ name! Use a SELECT from master.dbo.sysobjects and msdb.dbo.sysobjects to build a list of names that you don't want to rename.

- Update all of the calls (where you EXEC the procedure) to be EXEC dbo.procname (if the procedures are owned by dbo).

- Add SET NOCOUNT ON at the beginning of each procedure (typically a good habit, and it causes few problems).

Then test, test, test!!!

IMPORTANT NOTE: Applications will need to know about these changes, users may need to be notified, and if there are any cross-database stored procedure calls you will need to ensure that they all still work properly. The testing is really the harder part.
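Putting the bullets above together, a renamed procedure might look like this (the procedure, parameter, and table names are all hypothetical):

```sql
-- Formerly sp_GetCustomer; renamed to drop the sp_ prefix, which makes
-- SQL Server look in master first and can hurt plan cache behavior.
CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerID int
AS
SET NOCOUNT ON        -- suppress "n rows affected" messages

SELECT *
FROM dbo.Customers    -- owner-qualified references help plan reuse
WHERE CustomerID = @CustomerID
GO

-- Callers are updated to owner-qualify the new name:
EXEC dbo.usp_GetCustomer @CustomerID = 42
```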

 

Q: Is it safe to say that when your stored procedure returns various row counts and has many parameters you should create it using “WITH RECOMPILE”?

As Anthony mentioned the first time around, I'll continue with my "nice PC answer" and say that "it depends." Additionally, I've always tried to minimize my use of the word always! :)

 

But in this case I would probably agree that this might be an OK way to go. At least for those procedures where performance varies significantly, and where the number of rows and the distribution of data vary, it might prove to be an easy and even reasonable choice. However, it is always (hmm!) best to check first whether the optimal plans actually vary; it could be that, even though the distribution varies, the plans do not change. Also, you may find that in reality the stored procedure is quite large but only part of the plan varies significantly. In that case, recompiling the entire procedure creates unnecessary overhead, and you would benefit more from isolating and modularizing the offending line(s) of code into their own procedure and having that one recompile on each execution (i.e., create the nested procedure WITH RECOMPILE).
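A sketch of that modularization (all names are hypothetical): only the small, parameter-sensitive query pays the recompile cost, while the large outer procedure keeps its cached plan.

```sql
-- The parameter-sensitive statement is isolated and always re-optimized.
CREATE PROCEDURE dbo.usp_OrdersByRegionCore
    @Region varchar(30)
WITH RECOMPILE        -- a fresh plan on every execution
AS
SELECT *
FROM dbo.Orders
WHERE Region = @Region
GO

-- The outer procedure's plan stays cached; only the nested call recompiles.
CREATE PROCEDURE dbo.usp_OrderReport
    @Region varchar(30)
AS
SET NOCOUNT ON
-- ... lots of stable, cacheable work here ...
EXEC dbo.usp_OrdersByRegionCore @Region
GO
```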

 

Thanks for the great questions everyone!!

 

See you in Part IV next week,

Kimberly

 

Kimberly L. Tripp

President, SYSolutions, Inc. www.SQLSkills.com

Principal Mentor, Solid Quality Learning www.SolidQualityLearning.com