Statement execution and why you should use stored procedures

Stored procedures aren’t always a favorite choice among developers. And, to add to the confusion, there are numerous ways that a client application can make a request to SQL Server:

  1. Submit an adhoc Transact-SQL statement
    • This statement can be parameterized and the parameterized statement can be deemed:
      • Safe: These are statements where the optimizer has evaluated the plan and feels that changes to the parameter values will not affect the execution plan. There are quite a few rules that a statement must meet for this to be the case. As a result, this is NOT something I really rely on and/or aim for (for optimization – I’d rather use stored procedures). You can monitor this through the Perfmon counter: SQLServer:SQL Statistics (Safe Auto-Params/sec). And, see the first sketch after this list for a way to watch these counters from T-SQL.
      • Unsafe: These are statements where the optimizer feels that subsequent values will affect the execution plan. For adhoc statements, many will be deemed unsafe. You can monitor this through the Perfmon counter: SQLServer:SQL Statistics (Unsafe Auto-Params/sec).
    • This statement might not end up being parameterized at all (this is not likely and the “failed auto-parameterizations” count should be low – you can monitor it through the Perfmon counter: SQLServer:SQL Statistics (Failed Auto-Params/sec)).
  2. Submit a statement via sp_executesql that parameterizes the statement through defined parameters – this creates a plan that is neither safe nor unsafe but is ALWAYS an exact match for subsequent executions (even when the parameter values change). This is where I generally have problems with it… (see the second sketch after this list).
  3. Execute a stored procedure which has one or more of the above in it and/or dynamic string execution (DSE) in it. This is where I think things start to get interesting and it’s really the reason why I started this series on optimizing procedural code.
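
If you want to keep an eye on those auto-parameterization counters without opening Perfmon, here’s a minimal sketch that reads them through sys.dm_os_performance_counters (the counter names are exactly the ones called out above; keep in mind these values are cumulative since the last restart, so compare two samples taken over an interval to get rates):

-- Read the auto-parameterization counters from T-SQL instead of Perfmon.
-- These values are cumulative (since SQL Server last started), so take
-- two samples over an interval to see the per-second rates.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Safe Auto-Params/sec',
                       N'Unsafe Auto-Params/sec',
                       N'Failed Auto-Params/sec');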
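
And here’s a second minimal sketch of the sp_executesql behavior described in #2. The table (dbo.Members) and its columns are hypothetical, but the plan-reuse behavior is the point:

-- Both executions below reuse ONE cached plan because the parameterized
-- statement text is an exact match -- even though the two values have
-- wildly different selectivity. This is exactly where the trouble starts.
EXEC sys.sp_executesql
    N'SELECT MemberNo, LastName FROM dbo.Members WHERE LastName LIKE @LastName;',
    N'@LastName varchar(30)',
    @LastName = 'Tripp%';

EXEC sys.sp_executesql
    N'SELECT MemberNo, LastName FROM dbo.Members WHERE LastName LIKE @LastName;',
    N'@LastName varchar(30)',
    @LastName = '%';  -- matches everything, but still gets the first plan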

First, and foremost, I believe that stored procedures are a SIGNIFICANTLY better way to develop high-end applications that absolutely must scale. Don’t get me wrong, I understand the benefits of some of the tools out there that can generate your SQL and allow more rapid development of your application. And, if the application isn’t trying to handle thousands of inserts/sec then this might work out really well. However, you still have to be careful. Many of these applications (that auto-generate SQL code) rely (very heavily in some cases) on sp_executesql and this can result in poor performance. Additionally, when there’s A LOT of adhoc SQL then you can end up with a lot of your cache going to single-use plans where quite a bit of it is wasted (NOTE: there’s a SQL Server 2008 configuration option [called “optimize for ad hoc workloads”] that you really need to know about IF you have a lot of adhoc statements. This can significantly reduce the amount of cache that’s wasted on plans that only execute once… I’ll do a post on that feature NEXT – a quick sketch follows this paragraph). Again, while there are quite a few benefits, the performance problems can become SEVERE. OK, I’ve definitely said this before but why am I rehashing this?
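
Until that post is up, here’s a minimal sketch of both turning the option on and checking how much of your plan cache is currently going to single-use adhoc plans:

-- Enable the SQL Server 2008 option (it's an advanced option):
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- How much cache is going to adhoc plans that have only executed once?
SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS single_use_mb
FROM sys.dm_exec_cached_plans
WHERE objtype = N'Adhoc'
  AND usecounts = 1;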

Because this is where we’re at – we’ve looked at some of the problems with DSE in my post titled: Little Bobby Tables, SQL Injection and EXECUTE AS. Then, in the second post, we started to look at EXEC and sp_executesql – how are they different? Finally, I started to show *statement-level* recompilation using OPTION (RECOMPILE) in the post titled: Using the OPTION (RECOMPILE) option for a statement. The real reason why I started this series is to show where, why and how stored procedures really shine (we’re getting close!).

Reasons to use stored procedures:

  • Centralization – Share common code between applications and only make your changes ONCE! And, there’s no possibility of different business rules being enforced in one application but not another, and/or of incorrect business rules being “out there” in your applications.
  • Simplification – Users should not need to know your schema. They shouldn’t care that you’ve denormalized something, and they should reap the benefits because YOU created an access method that is simple and that keeps working even if you end up changing it back and normalizing. The database schema should be isolated from the users…
  • Change control – You should be able to change your schema without breaking applications. Think about this one a bit – this is EXACTLY the tactic that SQL Server takes with its own internal tables. We access them through stored procedures, views (aka catalog views) and functions. If they need to change the underlying schema then we don’t need to worry – as long as they also make the sp, function or view return the same way it always has. This has complicated things over time. First, the SQL team has basically decided that (for backward compatibility) the sps will never handle new features – and, yes, this is HORRIBLY annoying (it’s the reason why I’ve rewritten sp_helpindex here [and, fyi, I have an even better/newer/shinier version coming soon]). And, they’ve had to add functions like DATABASEPROPERTYEX for the “extended” properties that were added around recovery models in SQL Server 2000. However, as long as the functionality of the interface stays the same then our applications don’t break (see the first sketch after this list). That’s incredibly important and probably one of my favorite reasons to create some separation between the schema and the users (and/or interface).
  • Security – This is HUGE. And, it requires an understanding of ownership chaining and permissions. The general idea is that when objects are all within the same schema (and therefore have the same owner), granting execution rights on a stored procedure means the caller does not need explicit permissions to the base objects (access is essentially allowed because the ownership chain is not broken). Another way to think about it is that you’re granting rights to execute a “process” – one where you have control (and ownership) over all of the objects involved. So, SQL Server only requires execute rights on the sp (see the second sketch after this list). However, this ONLY goes for explicit statements and not statements that are built dynamically. For those, the CALLER needs explicit permission OR the procedure needs to be created with EXECUTE AS. So, if you plan to use EXECUTE AS be sure to read my post titled: Little Bobby Tables, SQL Injection and EXECUTE AS.
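
To make the change-control point concrete, here’s a tiny sketch of the “stable interface” idea using the function and catalog view mentioned above:

-- The interface has stayed the same even though the internals changed:
SELECT DATABASEPROPERTYEX(N'master', N'Recovery') AS RecoveryModel;

-- And the catalog view that now fronts the internal tables:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'master';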
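
And here’s a minimal sketch of the ownership-chaining point (the table, procedure, and user are all hypothetical):

-- Everything is in the dbo schema, so the ownership chain is unbroken.
CREATE TABLE dbo.Payroll (EmployeeID int PRIMARY KEY, Salary money);
GO
CREATE PROCEDURE dbo.GetSalary (@EmployeeID int)
AS
    SELECT Salary FROM dbo.Payroll WHERE EmployeeID = @EmployeeID;
GO
-- The caller needs ONLY this -- no SELECT permission on dbo.Payroll --
-- because permissions on the base table aren't checked when the chain
-- from procedure to table has the same owner:
GRANT EXECUTE ON dbo.GetSalary TO SomeUser;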

But, more than anything else – you CAN use them for better performance. No, it’s not solely because they might have an already compiled (and therefore optimized) plan that’s available at runtime (yes, that CAN help) but compilation (or recompilation) is often a small portion of runtime compared to the execution of a bad plan. And, this is where optimizing the procedural code can help. The bad news is that you have to know what you’re doing. And, I could make some arguments that SQL Server could do some things better here. But, if the developer does know more about how SQL Server works then procedures can work consistently and effectively – providing better performance.
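
As a preview of where this series is heading, here’s a sketch of one such pattern – keep the stable statements on the procedure’s cached plan and pay recompilation cost only for the statement that’s sensitive to the parameter values (again, dbo.Members and its columns are hypothetical):

CREATE PROCEDURE dbo.GetMembersByLastName (@LastName varchar(30))
AS
    -- Stable statement: happily reuses the procedure's cached plan.
    SELECT COUNT(*) AS TotalMembers FROM dbo.Members;

    -- Volatile statement: @LastName might be 'Tripp%' or '%', so force a
    -- statement-level recompile to get a plan for the actual value supplied.
    SELECT MemberNo, LastName
    FROM dbo.Members
    WHERE LastName LIKE @LastName
    OPTION (RECOMPILE);
GO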

UPDATE: Check out the post Building High Performance Procedures for more tips / tricks!

Thanks for reading!!
kt
