SQLskills SQL101: Stored Procedures

Start with a CLEAN slate!

Sometimes going back-to-the-basics is the best thing to do to fully understand something. Some of us have been doing things in a certain way for so long that we’ve missed a very basic / important point. So… SQLskills is introducing a SQL101 series of blog posts to do just that – get back to the basics! These will be posts hitting some of the core / critical points around technologies that we often see used the WRONG way or where there are many misunderstandings that lead to serious problems. And, well, stored procedures are definitely something we see misused in SQL Server (resulting in poor performance).

What is a Stored Procedure?

Simply, it's a set of Transact-SQL commands that have been grouped together for reuse. It's a way of taking some process you do repetitively and "wrapping" it into a named unit – allowing parameters. Think of something like sales by customer where you pass in a customer number and it outputs their sales. By creating this "process" as a stored procedure you're also allowing SQL Server to store an executable plan in the plan cache, which can save compilation time on subsequent executions.
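
To make that concrete, here's a minimal sketch of a sales-by-customer procedure. The table and column names ([dbo].[Sales], [CustomerNo], [SalesAmount]) are hypothetical – they're just here to illustrate the shape of a parameterized procedure:

-- A minimal sketch; the [dbo].[Sales] table and its columns are hypothetical.
CREATE PROCEDURE [dbo].[SalesByCustomer]
    (@CustomerNo int)   -- the parameter: which customer to report on
AS
SELECT [s].[CustomerNo],
       SUM([s].[SalesAmount]) AS [TotalSales]
FROM [dbo].[Sales] AS [s]
WHERE [s].[CustomerNo] = @CustomerNo
GROUP BY [s].[CustomerNo];
GO

-- Execute it for a specific customer:
EXEC [dbo].[SalesByCustomer] @CustomerNo = 42;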

The Good News: Centralization

Having your code stored on the server can be fantastic for code reuse and manageability / maintainability… this can also allow the database developers to change schema without the application being aware of those changes. Or worse, requiring the application to change. Instead, the stored procedure can be changed to continue to behave the same way after the schema change as it did before the change. This is especially valuable when there are multiple applications accessing the database. And, there are other reasons but these are some of the most important IMO!

The Bad News: Performance

This executable plan that SQL Server comes up with might not be good for all executions (based on the parameters). Again, think of the case for sales by customer; some customers have very few purchases while other customers have many. By default, an executable plan is created only when the procedure is executed and there isn't already a plan in the cache, and that plan is optimized for the set of parameters used on that execution (there are statement options [hints] that can override this). The plan is then placed in the cache so that subsequent executions benefit from having a cached plan (saving compile time). But since that plan was optimized for the parameters first passed, subsequent executions passing different parameters may warrant a plan change (which SQL Server does NOT do by default). And, this becomes significantly more complex as you have more parameters and more variations in how it's executed.

More info: once you’ve read this post, get more insight into stored procedures by reading: Stored Procedure Execution with Parameters, Variables, and Literals

The Best News: Recompilation

Now that you’re aware of the fact that a stored procedure may be sensitive to the parameters passed, your database developers can dive deeper and learn options to handle this better. One way is through recompilation but here’s where we open a can of worms. There are quick / simple solutions that can help a procedure in the short-term and there are longer-term solutions that are even better for stored procedures that execute often and are used by a lot of users. Since this is only SQL101, I have a quick guide to what I would / wouldn’t use for recompilation as well as some resources to use to get more information.

Recompile the RIGHT Way

Recompilation is not always bad. But, you want to make sure that you recompile as little as possible; ideally, use a strategy that only recompiles the statement(s) that need it. As of SQL Server 2005, you have a way to do this using OPTION (RECOMPILE). This is – BY FAR – the easiest strategy to use. However, it can become costly if it's used too often. So, here's a quick guide to some of the options with a brief description (note: before you use these options you should do more learning / testing as this is starting to head to 200+ level):

  • CREATE <procedure> WITH RECOMPILE: do not use this. First, you rarely want the entire procedure to be recompiled. Second, because it’s recompiled on every execution SQL Server does not place it in the plan cache so troubleshooting performance problems around these procedures is more difficult. Finally, without getting into deeper details – the type of optimization performed for recompilations done WITH RECOMPILE is not as advanced.
  • EXECUTE <procedure> WITH RECOMPILE: for the same reasons as above, this is not recommended either. But, even while it’s not ideal, I DO use this for testing. Testing: YES. Production: NO. I describe this and do some demos in my DBA Fundamentals webcast that you can watch online here.
  • Statement-level
    • OPTION (RECOMPILE): this is fantastic. But, it can be OVER-used. So, use it sparingly. It's a great way to temporarily solve a problem but then you might want to use the hybrid solution for long-term scalability / performance. (There's a quick sketch of this option just after this list.)
    • OPTION (OPTIMIZE for …): there are specific cases where this can be useful – primarily when you’re trying to tune for a specific case or a specific “type” of value. But, this takes important (and, sometimes deep) knowledge of the data to get this right. And, it needs to be “watched” more carefully as data changes.
    • OPTION (OPTIMIZE FOR UNKNOWN): This was new in SQL Server 2008 and allows you to tune for the "average" case without having to know what that case is (like you need to know with OPTION (OPTIMIZE FOR …) on a specific value).
  • Hybrid option: sometimes cache / sometimes recompile. This can give you the best of both worlds where you programmatically choose to cache stable plans and recompile unstable plans (which sounds more complicated than it is but it does take more knowledge / more testing to get it right). To help you with this one, check out Building High Performance Procedures. And, make sure to review the DBA Fundamentals webcast here.
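
To make the statement-level option concrete, here's a minimal sketch using the hypothetical sales-by-customer procedure from above – only the statement that's sensitive to the parameter gets the hint, so the rest of the procedure still benefits from the cached plan:

-- A minimal sketch; [dbo].[Sales] and its columns are hypothetical.
ALTER PROCEDURE [dbo].[SalesByCustomer]
    (@CustomerNo int)
AS
-- This statement's "best" plan varies wildly from customer to customer,
-- so recompile JUST this statement on every execution:
SELECT [s].[CustomerNo],
       SUM([s].[SalesAmount]) AS [TotalSales]
FROM [dbo].[Sales] AS [s]
WHERE [s].[CustomerNo] = @CustomerNo
GROUP BY [s].[CustomerNo]
OPTION (RECOMPILE);
GO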

OK, so, while I tried to be brief – the “instructor” in me wanted to give you lots of nuggets of info. I hope you found this post helpful! And, if you want to find all of our SQLskills SQL101 blog posts – check out: SQLskills.com/help/SQL101

Thanks for reading!
k

 

 

Explicitly naming CL key columns in NC indexes – when and why

This morning, as I was prepping for my PASS Data Architecture Virtual Chapter presentation on “Why Physical Database Design Matters,” I went to tweet that folks could join the presentation. I stumbled on a discussion that led to Robert Davis (@SQLSoldier) asking this: I was searching to see if you had a blog post on missing indexes recommending CI keys as included columns in NCIs.

And, there was no way I’d be able to do that in 140 characters… so, I thought a blog post would be easier!

First off, you do NOT need to explicitly state clustering key columns in your nonclustered index definition; those that aren't already part of the index ARE ADDED automatically by SQL Server. So, it seems weird that the missing index DMVs always add the clustering key to the recommended nonclustered indexes – even when it's not required.

SIDE NOTE: The “green hint” / missing index recommendation shown in showplan does this as well (since it uses the missing index DMVs behind the scenes).

But, let's go back to why it's in there to begin with…

Why does SQL Server NEED a lookup value?

SQL Server uses the clustering key (if the table has a rowstore clustered index defined) as the way to “lookup” a corresponding row when accessing the data from a nonclustered index. The main reason to do this is that the nonclustered index does not have all of the information the query needs so SQL Server has to look up the rest of the data by accessing the data row.

SIDE NOTE: If the table is a HEAP (a table without a clustered index) then SQL Server uses a “fixed RID” which consists of an 8 byte row-ID (2 bytes for the fileID, 4 bytes for the pageID, and 2 bytes for the slot number). There are pros/cons to heaps vs. clustered tables but that’s not the main point of this post; I’m solely going to focus on what happens when a table has a clustered index.

If you think of an index in the back of a book on animals, you have duplicate data (like a “common name”) that’s ordered in the index. We use that sorted order to find the animal in which we’re interested and then the index tells us on what page we can find the rest of the information about that animal.

Nonclustered indexes in SQL Server work similarly. However, there isn’t a person “scanning” the sorted data (as we would a regular index). Instead, SQL Server has TWO structures to an index. One structure (the leaf level) is VERY much like an index in the back of a book (in this case, the common name + a lookup value to get to the rest of the data). The other part of an index structure is called the b-tree; this structure (0 to n levels) is solely used for efficient navigation to the leaf level. (If you’re wondering when an index tree would have 0 levels – only when the data of the entire table is only 1 page. SQL Server will not add a navigational level to a table that’s only one page. Yes, rare. But, yes, I expected someone would have asked! And, again, I’m somewhat simplifying things but I want to get to the main point of the question… :))

So, now we know WHY SQL Server needs the clustering key columns added to the index. And, we know that SQL Server adds them when they’re not explicitly defined… but, is it a problem and why do they do it?

Is it a problem to EXPLICITLY define clustered key columns in a nonclustered index?

No. But, I personally don't recommend it. If the query doesn't need it and isn't going to use it (and isn't that the point of a missing index recommendation?), then there's no reason to add it. I never recommend adding anything that isn't explicitly needed. I can only speculate on why they do it, AND it does have one benefit.

First, no, it’s completely unnecessary. But, it does NOT cause any problems (SQL Server won’t add clustering key columns to the index more than once; they’re either explicitly added or internally added – but, they’re never in the index twice).

And, it does provide ONE benefit in that columns that are explicitly defined show up in sp_helpindex and in the Management Studio tools. So, this helps people to better see what's actually in their indexes (but, this is kind of a weird way around the inadequacies of both the tools as well as sp_helpindex). So, while there is a benefit, it's only making up for shortcomings in the tools.

SIDE NOTE: Whether or not you believe me is also part of the problem because none of the standard utilities / sps, etc. expose that SQL Server is doing this. But, you can check table size with sp_spaceused and you should see the size is the same whether the clustering key column is explicitly added or not. And, there are [3rd party] ways to expose that these columns ARE part of the index (I’ve written about my sp_helpindex rewrites before and I know there are a bunch of other tools out there that show it as well! I’m a bit overdue for yet-another sp_helpindex rewrite / update but I do have one planned. So, I hope to get to that within the next few days. Stay tuned. But, the 2012 version still works in 2014/2016 for clustered/nonclustered indexes. I have a new version that works with ALL index types but it still needs a bit more testing – email me if you have columnstore, in-memory indexes, full-text, spatial, or hypothetical indexes and want to do some tests for me!)
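
If you want to verify the "same size" claim yourself, here's a minimal sketch – the table and index names ([dbo].[MyTable] with a clustered primary key on [Id]) are hypothetical:

-- Index WITHOUT the clustering key named explicitly:
CREATE NONCLUSTERED INDEX [IX_MyTable_LastName]
ON [dbo].[MyTable] ([LastName]);
GO
EXEC sp_spaceused N'dbo.MyTable';   -- note index_size
GO

-- Re-create it WITH the clustering key named explicitly:
CREATE NONCLUSTERED INDEX [IX_MyTable_LastName]
ON [dbo].[MyTable] ([LastName])
INCLUDE ([Id])
WITH (DROP_EXISTING = ON);
GO
EXEC sp_spaceused N'dbo.MyTable';   -- index_size doesn't change
GO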

When should you explicitly define clustered key columns in a nonclustered index?

This is the real reason I wrote this post was to get to this point. And, it’s something I hadn’t thought about until I ran into it personally…

So, when SHOULD you explicitly define the clustering key columns in a nonclustered index? When they ARE needed by the query.

This sounds rather simple but if the column is USED by the query then the index MUST have the column explicitly defined. Yes, I realize that SQL Server will add it… so it’s not necessary NOW but what if things change? (this is the main point!)

What could go wrong?

If you don't explicitly add clustering key columns in a nonclustered index and the queries actually rely on that clustering key column being there (for better performance)… then your queries will no longer be covered by (and may no longer use) the nonclustered index if the clustered index changes.

What if you want to change your clustering key? What if you later want to create a clustered columnstore index? In either case, your nonclustered indexes will no longer include those [former] clustering key columns and the queries that relied on their being present won't perform.

So, the simple answer is – always define the index explicitly with the columns needed (in the key, in INCLUDE, wherever they are REQUIRED). If a column is not needed by that query then do not explicitly name it (it might get added behind the scenes today but it won't add extra data where it's not needed if the clustering key were to change at a later time).
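
Here's a minimal sketch of that strategy with hypothetical names (a [dbo].[MyMember] table whose clustered primary key is [MemberNo]):

-- Suppose the query you're tuning is:
--   SELECT [MemberNo], [LastName]
--   FROM [dbo].[MyMember]
--   WHERE [LastName] = @LastName;
--
-- This definition would "cover" the query today only because [MemberNo]
-- happens to be the clustering key (added behind the scenes):
--   CREATE NONCLUSTERED INDEX [IX_MyMember_LastName]
--   ON [dbo].[MyMember] ([LastName]);

-- Instead, explicitly name everything the query needs so the index still
-- covers the query even if the clustering key changes later:
CREATE NONCLUSTERED INDEX [IX_MyMember_LastName]
ON [dbo].[MyMember] ([LastName])
INCLUDE ([MemberNo]);
GO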

Using this strategy when you’re doing query tuning will make your indexes more readable, more effective / scalable and should work even if there are later changes to your schema!

SIDE NOTE: Indexing is a very difficult topic to discuss in only a blog post and “query tuning” is not the same as “server tuning.” Sometimes the index that’s best for a query is not ideal to add to your server… but, once you get to the point where you want to create an index on your server – explicitly state only the columns absolutely necessary. Don’t just add things “because SQL Server would have done it anyway.”

Hope this helps! And, thanks for reading!!
Kimberly

 

Setting CE TraceFlags on a query-by-query (or session) basis

IMPORTANT NOTE: Be sure to read the ENTIRE post as there's a workaround – QUERYTRACEON is ALLOWED inside of stored procedures without sysadmin rights… just NOT in an ad hoc statement. It's not documented but I think this is excellent! There are still some uses for what I've written here but this specific use is much easier because of this undoc'ed "feature." :-)

While the new cardinality estimator can offer benefits for some queries, it might not be perfect for ALL queries. Having said that, there is an OPTION clause that allows you to set the CE for that query. The setting to use depends on the CE that you’re running under currently. And, as of SQL Server 2016, even determining this can be difficult. At any given time, there are multiple settings that might affect your CE.

In SQL Server 2014, your CE was set by the database compatibility mode. If you're running with compatibility mode 120 or higher, then you're using the new CE. If you're running with compatibility mode 110 or lower, then you're using the Legacy CE. In SQL Server 2016, the database compatibility mode is not the only setting that can affect the CE that you're using. In SQL Server 2016, they added 'database scoped configurations' and introduced:

ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = { ON | OFF };
-- (the PRIMARY value applies only to the FOR SECONDARY variant of this command)

Having said that, an administrator can always override this setting by setting one of the CE trace flags globally [using DBCC TRACEON (TF, -1) – but I don’t recommend this!].

To use the LegacyCE when the database is set to the new CE, use Trace Flag 9481.
To use the New CE when the database is set to the LegacyCE, use Trace Flag 2312.

Generally, I recommend that most people STAY with the LegacyCE until they’ve thoroughly tested the new CE. Then, and only then, change the compatibility mode. But, even with extensive testing, you might still want some queries to run with the LegacyCE while most run with the new CE (or, potentially the opposite). What I like most about the addition of the new CE is that we have the ability to set EITHER!

But, before setting this (or, overriding how the database is set), let’s make sure we know how it’s set currently… If you’re wondering which CE you’re running under, you can see it within the graphical showplan (in the Properties [F4] window, use: CardinalityEstimationModelVersion OR search for that within the showplan XML).
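
If you'd rather check what's in the plan cache than hover over a graphical plan, here's a minimal sketch (it assumes you have VIEW SERVER STATE; the TOP is only there to keep the output small):

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       [qp].[query_plan].value('(//StmtSimple/@CardinalityEstimationModelVersion)[1]', 'int')
           AS [CEModelVersion],
       [st].[text] AS [query_text]
FROM sys.dm_exec_cached_plans AS [cp]
CROSS APPLY sys.dm_exec_query_plan([cp].[plan_handle]) AS [qp]
CROSS APPLY sys.dm_exec_sql_text([cp].[plan_handle]) AS [st]
WHERE [qp].[query_plan] IS NOT NULL;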


Above all, what I like most is – CHOICE. I can even set this for a specific query:

SELECT m.*
FROM [dbo].[Member] AS [m]
WHERE [m].[firstname] LIKE 'Kim%'
OPTION (QUERYTRACEON 9481);

However, the bad news is that QUERYTRACEON is limited to SysAdmin only (be sure to read the UPDATEs at the end of this post). Jack Li (Microsoft CSS) wrote a great article about a problem they solved by using a logon trigger to change the CE for an entire session: Wanting your non-sysadmin users to enable certain trace flags without changing your app? Now, I do want to caution you that setting master to trustworthy is not something you should take lightly. And, you should NOT let anyone other than SysAdmin have any other rights in master (other than the occasional EXEC on an added user-defined SP). Here are a couple of posts to help warn you of the danger:

A warning about the TRUSTWORTHY database option
Guidelines for using the TRUSTWORTHY database setting in SQL Server

Having said that, what I really want is to set this on a query-by-query basis AND I don't want to elevate the rights of an entire stored procedure (in order to execute DBCC TRACEON). So, I decided that I could create a procedure in master, set master to trustworthy (with the caution and understanding of the above references / articles), and then reference it within my stored procedures without having to use 3-part naming (that's the benefit of the sp_ prefix on the procedure name):

ALTER DATABASE master
SET TRUSTWORTHY ON;
GO

USE master;
GO

ALTER PROCEDURE sp_SetTraceFlag
    (@TraceFlag int,
     @OnOff     bit = 0)
WITH EXECUTE AS OWNER
AS
DECLARE @OnOffStr char(1) = @OnOff;
-- Sysadmins can add supported trace flags and then use this
-- from their applications
IF @TraceFlag NOT IN (
            9481 -- LegacyCE if database is compat mode 120 or higher
          , 2312 -- NewCE if database compat mode 110 or lower
                      )
    BEGIN
        RAISERROR('The Trace Flag supplied is not supported. Please contact your system administrator to determine inclusion of this trace flag: %i.', 16, 1, @TraceFlag);
        RETURN
    END
ELSE
    BEGIN
        DECLARE @ExecStr nvarchar(100);
        IF @OnOff = 1
            SELECT @ExecStr = N'DBCC TRACEON(' + CONVERT(nvarchar(4), @TraceFlag) + N')';
        ELSE
            SELECT @ExecStr = N'DBCC TRACEOFF(' + CONVERT(nvarchar(4), @TraceFlag) + N')';
        -- SELECT (@ExecStr)
        EXEC(@ExecStr)
        -- RAISERROR (N'TraceFlag: %i has been set to:%s (1 = ON, 0 = OFF).', 10, 1, @TraceFlag, @OnOffStr);
    END;
GO

GRANT EXECUTE ON sp_SetTraceFlag TO PUBLIC;
GO

As for using this procedure, you have TWO options. If you can modify the stored procedure then you can wrap a single statement with the change in trace flag. But, to make it take effect, you’ll need to recompile that statement. So, if your stored procedure looks like the following:

CREATE PROCEDURE procedure
( params )
AS
statement;
statement; <<<--- problematic statement
statement;
statement;
GO

Then you can change this to:

CREATE PROCEDURE procedure
( params )
AS
statement;
EXEC sp_SetTraceFlag 2312, 1
statement OPTION (RECOMPILE); <<<--- MODIFIED problematic statement
EXEC sp_SetTraceFlag 2312, 0
statement;
statement;
GO 

If you don’t want to set master to trustworthy then you can add a similar procedure to msdb (which is already set to TRUSTWORTHY) and then use 3-part naming to reference it.

USE msdb;
GO

CREATE PROCEDURE msdbSetTraceFlag
    (@TraceFlag int,
     @OnOff bit = 0)
WITH EXECUTE AS OWNER
AS
DECLARE @OnOffStr char(1) = @OnOff;
-- Sysadmins can add supported trace flags and then use this
-- from their applications
IF @TraceFlag NOT IN (
              9481 -- LegacyCE if database is compat mode 120 or higher
            , 2312 -- NewCE if database compat mode 110 or lower
                     )
     BEGIN
         RAISERROR('The Trace Flag supplied is not supported. Please contact your system administrator to determine inclusion of this trace flag: %i.', 16, 1, @TraceFlag);
         RETURN
     END
ELSE
     BEGIN
         DECLARE @ExecStr nvarchar(100);
         IF @OnOff = 1
             SELECT @ExecStr = N'DBCC TRACEON(' + CONVERT(nvarchar(4), @TraceFlag) + N')';
         ELSE
             SELECT @ExecStr = N'DBCC TRACEOFF(' + CONVERT(nvarchar(4), @TraceFlag) + N')';
         -- SELECT (@ExecStr)
         EXEC(@ExecStr)
         -- RAISERROR (N'TraceFlag: %i has been set to:%s (1 = ON, 0 = OFF).', 10, 1, @TraceFlag, @OnOffStr);
     END;
GO

GRANT EXECUTE ON msdbSetTraceFlag TO PUBLIC;
GO

To use this, you’ll need to use 3-part naming:

CREATE PROCEDURE procedure
( params )
AS
statement;
EXEC msdb.dbo.msdbSetTraceFlag 2312, 1
statement OPTION (RECOMPILE); <<<--- MODIFIED problematic statement
EXEC msdb.dbo.msdbSetTraceFlag 2312, 0
statement;
statement;
GO 

Finally, another option is to wrap a statement with the change in trace flag.

EXEC sp_SetTraceFlag 2312, 1;
GO
STATEMENT or PROCEDURE
GO
EXEC sp_SetTraceFlag 2312, 0;  -- don't forget to turn it back off!
GO
 

Now, you have strategic access to EITHER CE and you don’t have to elevate anyone’s specific rights to SysAdmin. You can even let developers use the changes to the new CE or the LegacyCE in their code which is incredibly useful!

UPDATE: If you can't change the code then another excellent option would be to use a plan guide (permissions are only at the object / database level: to create a plan guide of type OBJECT requires ALTER permission on the referenced object; to create a plan guide of type SQL or TEMPLATE requires ALTER permission on the current database). Here's a good post on how to add QUERYTRACEON into a plan guide: Using QUERYTRACEON in plan guides by Gianluca Sartori.
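
Here's a minimal sketch of what that looks like – the statement text is hypothetical and must match the submitted query text exactly (see Gianluca's post for the details and gotchas):

EXEC sp_create_plan_guide
    @name            = N'PG_Member_LegacyCE',
    @stmt            = N'SELECT m.*
FROM [dbo].[Member] AS [m]
WHERE [m].[firstname] LIKE ''Kim%''',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (QUERYTRACEON 9481)';
GO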

UPDATE #2: Wow! A bit of digging and I stumbled on this Connect item: QUERYTRACEON with no additional permissions and, after I up-voted it, I noticed the workaround by Dan Guzman: There is currently undocumented behavior in SQL 2014 that allows use of QUERYTRACEON in a stored procedure without sysadmin permissions regardless of module signing or EXECUTE AS. This is a workaround in situations where the query is already in a stored procedure or when a problem ad-hoc query needing the trace flag can be encapsulated in a proc. However, until it's documented, this workaround is at-your-own-risk. I did NOT know this… So, QUERYTRACEON CAN be used in a stored procedure even without SysAdmin permissions (I actually like this!). And, I tested this in BOTH 2014 AND 2016 and it works! It's not documented nor is it recommended but I'm happy to see that you do NOT need to do all of this for QUERYTRACEON. It looks like it even works in earlier versions. Most of us ran into problems with the clause erroring on ad hoc statements so we just didn't expect it to work INSIDE of a proc. Well… better late than never!

Thanks for reading,
Kimberly

New features plus excellent tips & tricks – 2016 is looking great!

There are so many exciting things going on this year and it's just getting started! If I'm being honest, I haven't been as excited for a new release of SQL Server for a long time. Don't get me wrong – I love new features and I love fixes but I'm also a firm believer in "if it ain't broke and the features aren't critical, there's no rush to upgrade." And, even though I'm excited about SQL 2016, that's not going to prevent migration testing and a solid QA pass before upgrading. But, IMO, SQL Server 2016 is a game changer for table design and indexing. I still need to work with it a bit more and I still believe that the BEST large-table architecture isn't super simple. But, I truly feel that the table design combinations available with SQL Server 2016 (in terms of partitioning [with BOTH partitioned tables AND partitioned views] and indexing) are EXACTLY what you need for high performance hybrid (OLTP + DSS) implementations. So, this makes me really excited for the upcoming release. I know that most of you aren't going to jump ship immediately but you should start learning as much as you can – NOW.

So, since SQL Server 2016 is getting ready to release – check out some of the Microsoft resources that are available already:

Also, if you're looking for really technical posts from PSS – they just blogged about 2016 with a post titled: SQL 2016 – It Just Runs Faster Announcement. Keep watching, as it kicks off a new techie series on 2016 starting with that post!

 

And, all of this is also getting us excited about our upcoming SQLintersection show in Orlando the week of April 18. Our phenomenal line-up of presenters has been working on content for our SQLintersection show and a few have blogged about the event:

 

And, if that’s not enough to get you excited… Richard Campbell of DotNetRocks and RunAsRadio has been recording some DEVintersection / SQLintersection “countdown” sessions on Channel 9 and a few are already available:

SQLafterDark

I recorded one with Richard yesterday and that’s what’s got me thinking about everything that’s starting to come together for 2016. And, we’re also starting to put together our trivia questions and our SQL questions for our SQLafterDark evening event… this is one of our favorite events at SQLintersection. Yes, the tech content is absolutely top-notch and the speakers are all technical folks that work in the trenches day after day so they’re presenting what they know works; and they can also present well as they’re seasoned speakers. However, an evening event where you kick-back, have a few beers with speakers and peers, and play a goofy (but also interesting and fun) trivia game… who can beat that! Our event is a place where you can really get to intersect with other attendees and speakers and get your tough questions answered! Also, remember to use the discount code ‘SQLskills’ (without quotes) to save $50 on your registration.

So, lots going on and things are really shaping up to be an amazing year. Are you excited about the new release?

Thanks for reading,
Kimberly

SQLintersection Spring 2016 – Our best line-up yet!

As we head into our 7th SQLintersection this April, I'm excited to say that it's one of our most diverse, complete, and information-packed shows yet! We've added a 3rd precon day to our show and with our postcon day, there are 9 full-day workshops from which to choose. We have 3 SQL keynotes that are relevant and timely, including two from Microsoft experts Bob Ward and Buck Woody, and a third that's all about productivity using SQL Sentry's fantastic tools.

If you’re interested in how we got here – check out some of my past posts:

  1. SQLintersection: a new year, a new conference
  2. SQLintersection’s Fall Conference – It’s all about ROI!
  3. Fall SQLintersection is coming up soon and we can’t wait!
  4. SQLintersection Conference and SQLafterDark Evening Event – what a fantastic week in Vegas

With minimal to no marketing filler, we've largely kept our conference focus on ROI and technical content (performance / troubleshooting / tales-from-the-trenches with best practices on how to fix them) but we've also added even more social events so that you really get time to intersect with the conference attendees and speakers. The SQL-specific evening event SQLafterDark was a wildly popular addition at our past shows and it's returning for Spring!

And, so are many of our phenomenal speakers. But, we also have a few new industry-experts rounding out our line-up.

SQLintersection: Great Speakers!

Once again, I think a great show starts with great speakers and current / useful content. All of these speakers are industry experts that have worked in data / SQL for years (some can even boast decades) but all are still focused on consulting and working in the trenches. And, they're good presenters! Not only will you hear useful content but you'll do so in a way that's digestible and applicable. Every speaker is either an MCM (Master), a SQL Server MVP (or both), or a Microsoft employee. But, regardless of their official credentials – ALL are focused on providing the most ROI that's possible in their session(s) and/or their workshops. Check out this phenomenal list of speakers:

  • Aaron Bertrand
  • Allen White
  • Ben Miller
  • Bob Ward
  • Brent Ozar
  • Buck Woody
  • David Pless
  • Glenn Berry
  • Grant Fritchey
  • Jason Hall
  • Jeremiah Peschka
  • Kimberly L. Tripp
  • Paul S. Randal
  • Randy Knight
  • Scott Fallen
  • Tim Chapman
  • Tim Mitchell
  • Tim Radney

You can read everyone’s full bio on our speaker page here.

SQLintersection: When is it all happening?

The conference officially runs from Tuesday, April 19 through Thursday, April 21 with pre-conference and post-conference workshops that extend the show over a total of up to 7 full days. For the full conference, you'll want to be there from Saturday, April 16 through Friday, April 22.

  • Saturday, April 16 – pre-con day. There is one workshop running:
    • Intro to SSIS with Tim Mitchell
  • Sunday, April 17 – pre-con day. There are two workshops running:
    • SQL Server 2014 and 2016 New Features and Capabilities with David Pless and Tim Chapman
    • Developer’s Guide to SQL Server Performance with Brent Ozar and Jeremiah Peschka
  • Monday, April 18 – pre-con day. There are three workshops running during the day with the first keynote of the conference on Monday evening:
    • Performance Troubleshooting using Waits and Latches with Paul Randal
    • Analyzing and Improving I/O Subsystem Performance with Glenn Berry
    • PowerShell for the DBA from 0-60 in a Day with Ben Miller
  • Tuesday, April 19 through Thursday, April 21 is the main conference. Conference sessions will run all day in multiple tracks:
    • Check out our sessions online here
    • Be sure to check out our cross-conference events and sessions.
    • Get your pop-culture trivia and techie-SQL-trivia hat on and join us for SQLafterDark on Wednesday evening, April 20
  • Friday, April 22 is our final full day, with three post-conference workshops running:
    • Queries Gone Wrong: Statistics, Cardinality, Solutions with Kimberly Tripp
    • Leveraging SQL Server in the Cloud with Brent Ozar and Jeremiah Peschka
    • Building Better SSIS Packages with Tim Mitchell

SQLintersection: Why is it for you?

If you want practical information delivered by speakers that not only know the technologies but are also competent, consistently highly-rated presenters – this is the show for you. You will understand the RIGHT features to use to troubleshoot and solve your performance and availability problems now!

Check us out: www.SQLintersection.com.

We hope to see you there!
kt

PS – Use the discount code ‘SQLskills’ when you register and receive $50 off registration!

Inconsistent analysis in read committed using locking

Well, I had a second post already planned and partially written but a comment on the first post (in what’s going to become a multi-part series) has made me decide to do a different part 2 post. So, make sure you read this post first: Locking, isolation, and read consistency.

The comment was about how read committed using locking has inconsistencies – and, yes, that’s true. Read committed using locking is prone to multiple inconsistencies; there are things you can do to reduce some anomalies but in a highly volatile OLTP environment you will get inconsistencies for long running readers. This is why having a secondary server for analysis is always a great idea. Then, everyone gets the same version of the truth – the point in time to which the secondary was updated (is that nightly, weekly, monthly?). Now, if the secondary is getting updates through transactional replication, you are back to the same problem. But, here, I cannot imagine NOT using read committed with versioning. Probably one of the BEST places to use versioning is on a replication subscriber. Replication won’t be blocked by readers and readers won’t be blocked by replication:

[Diagram: subscriber locking with versioning]

Why is read committed with versioning preferred for readers?

More and more, we’re being tasked with analyzing “current” data. This is harder and harder to do without introducing anomalies. So, I’ll discuss non-repeatable reads in the bounds of a single statement. To get this to occur with a small data set and without setting up a bunch of connections, etc. I’ll engineer the anomaly.

Example

To do this, I’m going to use a sample database called Credit. You can download a copy of it from here. Restore the 2008 database if you’re working with 2008, 2008R2, 2012, or 2014. Only restore the 2000 database if you’re working with 2000 or 2005 (but, you can’t do versioning on 2000 so this is a 2005 or higher example).

Also, if you’re working with SQL Server 2014, you’ll potentially want to change your compatibility mode to allow for the new cardinality estimation model. However, I often recommend staying with the old CE until you’ve done some testing. To read a bit more on that, check out the section titled: Cardinality Estimator Options for SQL Server 2014 in this blog post.

So, to setup for this example – we need to:

(1) Restore the Credit sample database

(2) Leave the compatibility mode at the level restored (this will use the legacy CE). Consider testing with the new CE but for this example, the behavior (and all examples / locking, etc.) don’t actually change.

Scenario / example

(3) This is where things will get a bit more interesting – you’ll need a couple of windows to create the scenario…

FIRST WINDOW – setup the table:

USE [Credit];
GO

IF OBJECTPROPERTY(object_id('[dbo].[MembersOrdered]'), 'IsUserTable') = 1
    DROP TABLE [dbo].[MembersOrdered];
GO

-- Create a copy of the Member table to mess with!
SELECT *
INTO [dbo].[MembersOrdered]
FROM [dbo].[Member];
GO

-- Create a bad clustered index that's prone to both fragmentation and
-- record relocation. This is part of what makes this scenario more likely!
-- But, that's not the main point of discussion HERE. This is good
-- because it's also easy to visualize
CREATE CLUSTERED INDEX [MembersOrderedLastname]
ON [dbo].[MembersOrdered]
    ([lastname], [firstname], [middleinitial]);
GO

SELECT COUNT(*) FROM [dbo].[MembersOrdered];
GO

Now… we’ll engineer the problem. But, I’ll describe it visually here:

This is a rough approximation of what the data looks like when the table is clustered by lastname (not that I'm recommending that you do this).

The table has NO nonclustered indexes and ONLY a clustered index on lastname. The data is essentially ordered by lastname (not going to get technical about what the data looks like on the page here or the fact that it’s really the slot array that maintains order not the actual rows… for this example – the logical structure of the table is good enough to get the point).

(4) So, now that the table has been created – we will start by creating a problem (a row that will end up blocking us):

SECOND WINDOW – we’ll create a blocking transaction (t1)…

USE [Credit];
GO

BEGIN TRANSACTION
UPDATE [dbo].[MembersOrdered]
    SET [lastname] = 'test',
        [firstname] = 'test'
WHERE [member_no] = 9965;

Don't get me wrong – this is a horribly bad practice (having user interaction in the middle of a transaction and sending only part of the transaction… if you walk away – SQL Server is incredibly patient and will hold those locks until the transaction is finished [or killed by an administrator]). But, to the point… now our table will essentially look like this:

 

Yes, there are other locks that exist but this is the only one we really care about here…

And, yes, there are MORE locks than just what I show here. SQL Server will also hold an IX lock on the page where this row resides and an IX lock on the table as well. And, of course, SQL Server will also hold a shared lock on the database. All of these locks (IX and the database-level S lock) are used as indicators. Individually, they don't really block all that many other operations. You can have MANY IX locks on the same page at the same time and you can have many IX locks on the same table at the same time. Essentially these locks are used to show that someone has the "intent" to exclusively lock a resource lower down (in this case, a row) within that larger resource.

If you’re interested, check out sys.dm_tran_locks to see the locks held at each point of this example…
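
A minimal sketch for peeking at those locks from yet another window (filtered to the Credit database just to keep the output readable):

-- Show the locks currently held / requested in the Credit database:
SELECT [request_session_id],
       [resource_type],
       [resource_description],
       [request_mode],      -- look for the X (row), IX (page / table), and S (database) locks
       [request_status]
FROM sys.dm_tran_locks
WHERE [resource_database_id] = DB_ID(N'Credit')
ORDER BY [request_session_id], [resource_type];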

(5) So, now that the table has a locked row, we will try again to get a row count. Go back to the FIRST WINDOW and JUST re-run the query to get a count:

SELECT COUNT(*) FROM [dbo].[MembersOrdered];
GO

This will be blocked. And, it will sit blocked until the transaction finishes (either through a commit or rollback OR by being killed – none of which we’ll do…yet ).

So, what's interesting (and we need a bit of back-story to fill in) is that our count query reads ONLY committed rows (we know that – it's read committed using locking). What a lot of people don't know is that the row-level shared locks are released as soon as the resource has been read. This is to reduce blocking and because there's no need (in read committed) to preserve the state of the row (like repeatable read does). And, I'll prove that with an update…

(6) Now we’ll update a row that’s just sitting there and not locked (but, may have already been counted [locked, read, and then the lock was released] by our count query).

THIRD WINDOW – update and move a row that was already counted

USE [Credit];
GO

UPDATE [dbo].[MembersOrdered]
 SET [lastname] = 'ZZZuckerman',
 [firstname] = 'ZZZachary'
WHERE [lastname] = 'ANDERSON'
 AND [firstname] = 'AMTLVWQBYOEMHD';
GO

OK, now here's another horribly bad practice (not using a key for a modification). But, this table has all sorts of problems (which is what makes it a wonderful example for this). And, if I didn't go with something that's efficiently indexed for this update then the locking would be different. I NEED to get row-level locking here. So, I have to do the update above. And, here's what's happened now:

There are so many bad things going on with this poor table. What if there are two rows with that name (ok, unlikely) but you still shouldn't do modifications without a KEY (primary / or candidate)!

(7) Now, it’s time to “complete” the transaction that was blocking our count query.

Go back to the SECOND WINDOW and complete the transaction (to be honest, you can COMMIT or ROLLBACK – it really won’t matter for this example):

ROLLBACK TRANSACTION;

(8) Step 7 will have unblocked the row count query so all you need to do is go back to the first window and check the count that’s sitting there… which will be 10001  rows in our table that only has 10000 rows. Why? Because you were NOT guaranteed repeatable reads even in the bounds of a single statement. 

Re-run the count query and you’ll get 10000 again.

Why does read committed with versioning solve this problem?

When you change the database to read_committed_snapshot you change ALL read committed statements to use version-based reads. The best part is that we'll never WAIT for in-flight rows, nor will we see changes from transactions that completed after our statement began [the longer-running the statement, the more likely this is]. The incorrect count would not occur, for a few reasons that I'll list here:

When the row modification (t1) is made – the row will be versioned (SQL uses “copy on write” / “point in time” = snapshot) to take the BEFORE modification VERSION of the row and put it into the version store.

When the count begins it will just quickly continue (and not be blocked by the X lock held on the 'Gohan' row). This is not a problem: the columns are changing but the row is not being deleted – yes, we should count that row.

If I could switch to the window and run the update fast enough, it too would not be blocked by the version-based read NOR would the moved row be visible (even if it made it there in time) to the version-based read, because its new location (handled as an INSERT) would be NEWER than our count statement's START time. Our count will reconcile to the point in time when the statement BEGAN. Nothing is blocked and those changes (even when committed before we get there) are NOT visible to our version-based read.
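
For reference, here's a minimal sketch of turning this on for the Credit sample database (WITH ROLLBACK IMMEDIATE kicks out other connections so the change can take effect – don't do that casually on a production system):

-- Enable statement-level versioning (read committed using versioning) for Credit:
ALTER DATABASE [Credit]
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
GO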

In Summary

Yes, read committed with versioning is often MUCH preferred over read committed using locking. But, versioning does come at an expense. You do need to do monitoring and analysis of THE version store (in THE tempdb). Someday (SQL Server 2016???) maybe we'll have multiple tempdbs (that would be awesome) and ideally, we'll have some nice capabilities for controlling / limiting version store overhead (of one database over another). But, for right now – you have to make sure that your tempdb is optimized! And, if you have an optimized tempdb and you don't have a lot of really poorly written long-running queries, then you might find out that the overhead is less than you expected. This is absolutely something that you should be not only considering but using!

Whitepaper: Working with tempdb in SQL Server 2005

Plus, be sure to review Paul’s tempdb category for other tips / tricks, etc.

OK, so the originally planned part 2 is now trending to be part 3. Depending on questions – I'll get to that in the next few days!

Thanks for reading,
k

 

Locking, isolation, and read consistency

Today I ran into a really interesting locking / blocking problem that I want to discuss… originally, a related discussion came up last week in our IEPTO1 course (Immersion Event on Performance Tuning – Part 1) but today I ran into a similar problem that brought me back to the original discussion and I was able to reproduce the original scenario with a really interesting combination of things (some of which I did not expect and that led me down a super interesting path). However, before I can get to the weirdness (and try to make sense of it), I need to make sure the foundation is set well… In IEPTO1, I discuss locking types and how long locks are held for the different isolation levels – spending a great deal of time talking about the default mode of locking (read committed – with locking) vs. all other options:

  • Read uncommitted ( same as using NOLOCK ) – a lower isolation level than the default. This allows dirty reads.
  • Read committed using locking ( this is the DEFAULT when read_committed_snapshot has NOT been turned on )
  • Read committed using versioning ( this is the DEFAULT when read_committed_snapshot has been turned on )
  • Repeatable reads ( which also uses locking – even when one or both flavors of versioning has been enabled )
  • Serializable ( which also uses locking [ same as using HOLDLOCK ] – even when one or both flavors of versioning has been enabled )
  • Snapshot Isolation ( which is ALLOWED if allow_snapshot_isolation has been turned on, but won't actually be used unless a session explicitly requests it with SET TRANSACTION ISOLATION LEVEL SNAPSHOT. Here you'll have all the overhead of versioning without anyone using it unless you make code changes for snapshot isolation.)

Essentially, your database can be configured for one of FOUR possible states:

  1. No options set (connections default to read committed using locking)
  2. ONLY read_committed_snapshot set ( connections default to read committed using versioning; no other code changes needed for read committed statements ). This provides statement-level read consistency. Every read will reconcile to the point in time when the STATEMENT started.
  3. ONLY allow_snapshot_isolation set ( connections DEFAULT to read committed using LOCKING – because of the absence of read_committed_snapshot ). For those that request snapshot isolation, this provides transaction-level read consistency. Every read will reconcile to the point in time when the TRANSACTION started.
  4. BOTH read_committed_snapshot and allow_snapshot_isolation set. Without snapshot transaction isolation level requested, statements will reconcile to the point in time that the statement started and when snapshot isolation is requested, then all statements will reconcile to the point in time that the transaction began.

To be honest, I've been wanting to get that "simple" view of the locking world written down for a while. Even the combination of options ( that a database can really be configured into one of four possible states ) is not very well known. Often, content about versioning just states that you need to turn both options on and doesn't describe that these are really distinct environments.
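
A minimal sketch for checking which of those four states a database is in (Credit is just the sample database used elsewhere in these posts):

-- Which of the four states is the database in?
SELECT [name],
       [is_read_committed_snapshot_on],     -- 1 for states 2 and 4
       [snapshot_isolation_state_desc]      -- 'ON' for states 3 and 4
FROM sys.databases
WHERE [name] = N'Credit';
GO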

Statement-level Read Consistency

What I love about read_committed_snapshot is that you get statement-level read consistency… what this means is that you get a definable point in time to which your statement reconciles – that point is the time when the statement started. So, if you ask for a count of rows, you get THE count of rows that were present when your statement started. The best part about it is that this count is not only accurate to the point in time that the statement started, it also does so without preventing transactions / locks on the object where you're counting rows. Without a lot of detail here – it does this by copying the transactionally consistent VERSION of the row into the version store and leaving behind a pointer that allows readers to proceed without being blocked AND without blocking other writers. The marketing tagline makes it sound perfect ( readers don't block writers and writers don't block readers ). All of this happens AUTOMATICALLY and without code changes for all statements running with read committed isolation. If a statement has a hard-coded lock hint ( like NOLOCK or HOLDLOCK, etc. ) then that statement-level hint will override this and use locking.

NOTE: Windows Azure SQL Database defaults to read committed using VERSIONING for new databases.

Transaction-level Read Consistency

What I love about allow_snapshot_isolation is that you don’t have to use it, unless you really want version-based TRANSACTIONS. And, oddly, many of you won’t want this for the majority of your complex updates / OLTP transactions ( where you’ll also have to deal with version-based reads and update conflicts ). If your writers are in read committed using locking OR read committed using versioning then an update will use locks and not version-based reads so conflicts are prevented. The place to use transaction-level read consistency is for “transactions” that will handle multiple reads across volatile data. Imagine that you want to have 3 reports / queries reconcile to the same point in time ( and you want to do this real-time reporting in your OLTP environment ). What you could do is this:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO
BEGIN TRANSACTION
SELECT... --query1
SELECT... --query2
SELECT... --query3
COMMIT TRANSACTION
GO

So, your use of snapshot isolation should be a bit more limited IMO. And, largely limited to controlled reporting. Yes, you can do this with modifications / transactions… but if you have transaction-level read consistency and then have multiple readers / writers on the same data, you'll have to make sure you have good error handling ( you should be using TRY / CATCH ) so that you can deal with the conflict that SQL Server detects and reports with the following error:

Msg 3960, Level 16, State 2, Line 6
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.member' directly or indirectly in database 'Credit' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
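
As a rough illustration of that error handling, here's a minimal retry sketch – the UPDATE itself is hypothetical, and THROW requires SQL Server 2012 or higher (use RAISERROR on older builds):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE [dbo].[member]                -- hypothetical modification
            SET [lastname] = 'Tripp'
        WHERE [member_no] = 1234;

        COMMIT TRANSACTION;
        SET @retries = 0;                    -- success: stop looping
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 3960 AND @retries > 1
            SET @retries -= 1;               -- update conflict: try again
        ELSE
            THROW;                           -- out of retries / other error: rethrow
    END CATCH;
END;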

In Summary

So, to bring it all together. All LOCKING discussions really need to start with how the session and the database are handling isolation. Different behaviors occur in the different configurations and some locks are held for different amounts of time depending on your isolation level.

And, that’s a good start for now. I wanted to get a few of these basics out there first as I have a couple of problems I want to describe (and resolve) with long running transactions and blocking. And, some special cases with schema locks as well as some special code I’ve written to reduce some of the horrible problems you can run into with long blocking chains (and my code works across earlier versions of SQL Server and for MORE statements than just partition switching and index rebuilds… but, I’m getting ahead of myself on this one). Stay tuned, I’m going to get to those in tomorrow’s post and I’ll compare them against the low-priority lock wait feature in SQL Server 2014.

Also, if you’re more interested in learning more about versioning – check out the whitepaper I wrote (for SQL 2005) and which was updated for name changes and a few other things by Neal Graves ( thanks Neal! ): SQL Server 2005 Row Versioning-based Transaction Isolation

Thanks for reading!
k

Stored Procedure Execution with Parameters, Variables, and Literals

In Nov 2014, SQLskills made an offer to user group leaders to deliver remote user group sessions in 2015 and we’ve been having great fun delivering them. So far the team has delivered 34 sessions and we have 72 signed up! At my last session (for the San Diego SQL Server User Group), I delivered a session on stored procedures. It was great fun and I had a few questions at the end that made me come up with some new sample code… that’s where this post is coming from!

The question came up because I said (during the lecture) to make note of something being a parameter and not a variable – so, what's the difference and how does that change the behavior?

Definitions

Parameters are values that are passed into a stored procedure. Since these are part of the call to execute the procedure, they are "known" during compilation / optimization (but, that's only IF SQL Server has to compile / optimize – more on that in a minute).

Variables are assigned at runtime and therefore are “unknown” during compilation / optimization.

Literals are known at all times as they are exactly that – a simple / straightforward value.

Review this heavily commented stored procedure:

CREATE PROCEDURE ParamVarLit
    (@p1    varchar(15)) -- defining the parameter name @p1 AND the data type
AS
-- Declarations / assignments
DECLARE @v1 varchar(15); -- defining the variable name @v1 AND the data type
SELECT @v1 = @p1;        -- assigning the variable to the parameter input value

DECLARE @v2 varchar(15); -- defining the variable name @v2 AND the data type
SELECT @v2 = 'Tripp';    -- assigning the variable to a literal value

                         -- Note: also acceptable is this format
                         -- DECLARE @v2 varchar(15) = 'Tripp'

-- Statement 1 (using a parameter)
SELECT [m].*
FROM [dbo].[member] AS [m]
WHERE [m].[lastname] = @p1;

-- Statement 2 (using a variable)
SELECT [m].*
FROM [dbo].[member] AS [m]
WHERE [m].[lastname] = @v1; -- or @v2, these will work EXACTLY the same way!

-- Statement 3 (using a literal)
SELECT [m].*
FROM [dbo].[member] AS [m]
WHERE [m].[lastname] = 'Tripp';
GO

In the stored procedure you can see that there’s one input parameter, we’ll assign that when we call the stored procedure.

We can assign parameters by name (line two) or we can assign them by position (note, when you have a lot of parameters this can be frustrating)

EXEC [dbo].[ParamVarLit] @p1 = 'Tripp' -- assignment by name
EXEC [dbo].[ParamVarLit] 'Tripp'       -- assignment by position

Outside of that, everything else is being defined / assigned  within the stored procedure.

Setup

Now, we have to see how each of them works from an optimization perspective. To do this, I’m going to use a sample database called Credit. You can download a copy of it from here. Restore the 2008 database if you’re working with 2008, 2008R2, 2012, or 2014. Only restore the 2000 database if you’re working with 2000 or 2005.

Also, if you’re working with SQL Server 2014, you’ll potentially want to change your compatibility mode to allow for the new cardinality estimation model. However, I often recommend staying with the old CE until you’ve done some testing. To read a bit more on that, check out the section titled: Cardinality Estimator Options for SQL Server 2014 in this blog post.

So, to setup for this demo – we need to:

(1) Restore the Credit sample database

(2) Leave the compatibility mode at the level restored (this will use the legacy CE). Consider testing with the new CE but for this example, the behavior (and all estimates) don’t actually change.

(3) Update a row and add an index – use this code:

USE Credit;
GO

UPDATE [dbo].[member]
    SET [lastname] = 'Tripp'
    WHERE [member_no] = 1234;
GO

CREATE INDEX [MemberLastName] ON [dbo].[member] ([Lastname]);
GO

Execution

To execute this procedure it's simple – just use one of the execution methods above and make sure that you turn on "Show Actual Execution Plan" from the Query drop-down menu.

EXEC [dbo].[ParamVarLit] @p1 = 'Tripp' -- assignment by name
GO

This is what you’ll see when you execute:

[Screenshot: the actual execution plans for the three statements]

Notice that the first statement and the third statement both use an index to look up the row but statement two does not. Why? It's all tied to whether or not the value was "known" at the time of optimization. And, most importantly – this procedure was executed when there wasn't already a plan in cache. Because there wasn't already a plan in cache, SQL Server was able to optimize this procedure for the parameters passed in at THIS execution. This does NOT happen for subsequent executions.

More specifically, when the statement KNOWS that the value is 'Tripp' (statement 1 knows because it's getting optimized for 'Tripp' and statement 3 knows because the value is hard-coded as 'Tripp') then SQL Server can look to the statistics to determine how much data is going to be processed. In this case, SQL Server estimates that there are very few rows with a last name of 'Tripp' (from the statistics, it thinks there's only 1). As a result, an index would be helpful to find this highly selective result so it chose a plan that uses an index.

For statement 2 though, SQL Server doesn't seem to know to use an index. Why? Because here the variable (@v1) was unknown at the time of compilation / optimization. The variable is not assigned until actual execution but execution only occurs after a plan has been generated. So, the problem with variables is that SQL Server doesn't know their actual values until after it's chosen a plan. This can be both good and bad. Remember, SQL Server has to do something… So, in this case, SQL Server uses an average to estimate the rows and come up with a plan. This average comes from the density_vector component of a statistic rather than the histogram. If your data is reasonably evenly distributed then this can be good. And, it also means that the plan won't change after it's kicked out of cache and a different execution occurs with different parameters. Some have learned this trick and have used it with success – but, only because their data is either evenly distributed OR the executions are using values that all resemble the average.

NOTE: This is EXACTLY the same behavior as using the OPTION (OPTIMIZE FOR UNKNOWN) clause on the statement within the stored procedure.
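
If you want to see where that "average" comes from, look at the statistics behind the [MemberLastName] index created in the setup above – the density vector drives the variable (and OPTIMIZE FOR UNKNOWN) estimate, while the histogram drives the estimates for sniffed parameters and literals:

-- The "All density" value here * the table's row count ≈ the estimate
-- used for variables (and OPTIMIZE FOR UNKNOWN):
DBCC SHOW_STATISTICS ('dbo.member', 'MemberLastName') WITH DENSITY_VECTOR;

-- The histogram steps here drive the estimates for sniffed parameters
-- and literals (like 'Tripp' or 'Anderson'):
DBCC SHOW_STATISTICS ('dbo.member', 'MemberLastName') WITH HISTOGRAM;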

In this case, however, 'Tripp' is NOT like the average value and so the plan for statement 2 is not ideal for a variable assigned to 'Tripp'. The data has a lot of duplicates and the average number of rows for most names is quite high (to the point where an index is no longer useful). So, the plan might be good for most other values – but, in this case, it's not good for the value 'Tripp'.

Execute the procedure again but supply a different value for @p1:

EXEC [dbo].[ParamVarLit] @p1 = 'Anderson' -- assignment by name
GO

This is what you’ll see when you execute with ‘Anderson’ AFTER having created a plan for ‘Tripp':

[Screenshot: execution plans – identical to the previous execution]

Wait – there’s no difference?

Nope, absolutely none! Really, review every aspect of your output / plan [not the actual values] and you'll see it's exactly the same! The plan that you see is always the estimated plan and the estimated plan is chosen when the stored procedure is optimized / compiled. Optimization / compilation occurs only when there isn't already a plan in cache for that procedure.

Why are they the same?

The parameter – ‘Tripp’ – was used on the first execution and this is what was “sniffed” and used for optimization. When I say “sniffed” all that means is that the value was KNOWN such that the optimizer could look at the statistics (and specifically the histogram) to estimate how many rows had a last name of ‘Tripp.’ It turns out that the estimate was 1. You can see this by hovering over the Index Seek in the first statement:

[Image: Index Seek tooltip for statement 1 showing an estimated number of rows of 1]

If you were to kick this plan out of cache and re-execute with ‘Anderson’ then something interesting would happen:

EXEC [sp_recompile] '[dbo].[ParamVarLit]';
GO

EXEC [dbo].[ParamVarLit] @p1 = 'Anderson';
GO

Results in this plan:

[Image: new plan for statement 1 – a clustered index scan, estimated at 385 rows for ‘Anderson’]

There’s really one incredibly important observation here: ONLY the first statement’s plan changed!

The first statement’s plan changed because on this execution SQL Server was able to “sniff” the parameter and optimize / compile a plan specific to it. More specifically, when the statement KNOWS that the value is ‘Anderson’ (again, only statement 1 knows this) then SQL Server can look to the statistics to determine how much data is going to be processed. In this case, SQL Server estimates that there are numerous rows with a last name of ‘Anderson’ (from the statistics, it estimates 385 rows). As a result, the data is not selective enough to warrant using an index so, in this case, SQL Server uses a table scan (shown as a clustered index scan solely because the table has a clustered index).

Bringing It All Together

Parameters are evaluated and sniffed ONLY when a plan is being created. This ONLY happens when a plan is NOT already in the cache. And, sniffing is fantastic for THAT specific execution because “sniffing” a parameter lets SQL Server use the histogram component of a statistic to determine the estimate. While the histogram is not always perfect, it’s usually a more accurate way of estimating rows. But, this can also lead to parameter sniffing problems where subsequent executions don’t perform well because the plan in cache wasn’t optimized for their values. There are many solutions to this problem; I covered a few of the options here: Building High Performance Stored Procedures.
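
As a sketch of just one of those options (and not necessarily the right one for your workload): adding OPTION (RECOMPILE) to the sensitive statement inside the procedure means every execution is optimized for the value actually passed in – at the cost of compiling that statement every time:

SELECT [m].* FROM [dbo].[member] AS [m]
WHERE [m].[lastname] LIKE @p1
OPTION (RECOMPILE);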

Variables are ALWAYS deemed “unknown” and they cannot be sniffed. In this case, SQL Server doesn’t have a value to look up in a statistic’s histogram. Instead, SQL Server uses an average to estimate the rows and come up with a plan. But, as I mentioned – this can be sometimes good and sometimes bad.

The literal is the easiest of them all. SQL Server knows this value and there’s absolutely nothing that will ever change that value. It can be sniffed and the histogram will be used, giving the best estimate to use for optimization.
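
If you don’t have the procedure handy from earlier in this series, here’s a minimal sketch of a three-statement procedure along these lines – the exact body is my assumption, built from the dbo.member table and the @p1 / @v1 names used above:

CREATE PROCEDURE [dbo].[ParamVarLit]
    @p1 VARCHAR(15)
AS
BEGIN
    DECLARE @v1 VARCHAR(15);
    SELECT @v1 = @p1;  -- assigned at execution, AFTER the plan was chosen

    -- Statement 1: parameter (can be sniffed when the plan is compiled)
    SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] LIKE @p1;

    -- Statement 2: variable (always "unknown" - estimated from the density vector)
    SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] LIKE @v1;

    -- Statement 3: literal (always known - estimated from the histogram)
    SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] LIKE 'Tripp';
END;
GO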

Play around with this sample procedure. Review the plans and the estimates v. actuals. Next week I’ll dive more into the statistics themselves and where the specific estimates are coming from!

Have fun and thanks for reading!
k

Prepared statements: be careful when given “THE” solution for all of your development practices

I’m not sure about how to title this OR how to avoid the term “it depends” but I also don’t feel that “it depends” is a bad phrase. Some folks think of it as a cop-out when they don’t have a better answer but the problem is that it’s often the RIGHT answer. There are rarely simple solutions that work in ALL situations. And, as a result – it depends – is really correct. What’s most important is knowing what it depends ON.

So, where am I going with this one?

I recently received a question that started by referencing a post titled: NHibernate, the database query cache and parameter sizes. My issue is that there is truth in that post (and also in the comments) but there are also a few statements that just aren’t quite right. In fact, there’s one that I thought was right but I had to really dig in to see what was actually happening – and it’s NOT what I expected. And, this is really the crux of the problem. Sometimes the behavior (or, solution) is completely counter-intuitive. But, above all, there’s one fatal mistake in this post (and many others) – stating what THE solution is. Some statements claim that one approach is [ALWAYS] better than the other and unfortunately, that’s just not the case… (comments were closed so I couldn’t comment on it directly)

What I really want to dissect a bit is this part of the post:

There are arguments for both sides, but most DBAs would really like to see only a single query plan for all parameter sizes. I didn’t have the answer off the top of my head, but I was pretty sure that NHibernate did the right thing from the point of view of the database.

As it turn out, I was wrong. By default, NHibernate will generate two separate query plans for this query in the database. Sounds bad, right?

Not really, here is the fix, just put the following in your configuration:

<property name='prepare_sql'>true</property>

And, this particular comment:

The side-effect of calling Prepare() is that a so-called “prepared statement” is created in the DB. This is essentially a statement that is precompiled in the DB and gives back a handle so future calls on THE SAME CONNECTION can be made just referencing this handle and not sending the entire sql again. If all calls to the DB were made on the same connection its true that the execution plans would be reused, but as we know – connections are drawn from the connection pool as needed and handed back in a random manner. The prepared statements on the connection are un-prepared everytime a new session is created and the connection is soft-initialized.

First and foremost, the post and the comments aren’t bad. These people are working to make their environments better and to help others sort out their bizarre behaviors / problems. They are truly trying to help. I don’t suspect for even a moment that they intentionally wanted to lead anyone astray. However, some of this is just wrong. So, let me begin.

There ARE arguments for both sides.

Yes, this is TERRIBLY important. The reason there are arguments for both sides is that EACH side is correct SOME of the time. Neither is correct ALL of the time and, of course, something has to be chosen as the default. Let me try to give a basic example of this… I’ll use a traffic analogy. It’s 2am, you’ve been coding for quite a few hours, and you want to get some great diner food – in Seattle. But, you live in Redmond (across the bridge). The shortest distance between you and the restaurant will get you there in 30 minutes but requires you to cross a toll-bridge. So, the trade-offs are money for time. But, you decide to go solely “for performance” and you choose that plan… take the 520 over to Seattle.

Now, let’s say that plan is “in cache” and MUST be re-used the next time you want to go to the diner. However, the next time you want to go – it’s 4:30pm, on a Friday, and there’s a Husky game… the plan to use 520 is a bad idea. Not because it’s not the shortest distance but because of other problems (congestion, “blocking,” etc…). And, don’t get me wrong – SQL Server does NOT change plans for “blocking” or “congestion” (so, the analogy breaks down a bit here). But, you can still see the point.

A chosen plan for one set of parameters may not be best with different parameters.

And, this is why there isn’t a single right answer for EVERY statement. So, if you’re looking for THE right choice (should the statement’s plan be placed in the cache or not) then this is the tough part – what’s really best depends on what I call – the stability of the plan generated against a wide variety of different parameters.

With the right kind of testing, you can execute a statement with a variety of different parameter combinations and verify whether or not each of their optimal plans has the same pattern / shape. If they do – then, I’d say that statement’s optimal plan is stable. However (and this is often MORE likely), if the optimal plan varies, then that statement’s optimal plan is unstable (across different parameter combinations).

The end result is really a set of two rules:

  1. When a statement’s optimal plan is stable and does NOT vary across executions – a plan should be saved.
  2. When a statement’s optimal plan is unstable and varies across executions – a plan should not be saved.

“Most DBAs would really like to see only a single query plan for all parameter sizes.”

Well, only when the plan is stable. If the plan is unstable then another problem occurs – this is often referred to as “parameter sniffing problems” or “parameter sensitivity problems” (a.k.a. PSP). PSP can actually end up being a worse problem. If a bad plan sits in cache and is re-used for all executions (executing an inefficient plan) then you can end up with statements doing a tremendous amount of I/O, spilling to disk because of bad memory grants, or doing the wrong type of join when they shouldn’t. I’ve seen PSP essentially take a server down (causing so much I/O that the server is inaccessible and unable to process any additional requests). Some clients even “react” to this by restarting SQL Server… problem solved. Well, the cache is cleared and the bad plan is thrown out. Hopefully, the next execution will be what we want and a reasonable plan will get into cache. At least temporarily.

And, this is how it goes. Poor performance occurs and often a variety of things are blamed (cache or I/O or statistics or indexes) and some quick fix (updating statistics is another common one) SEEMS to solve the problem. But, ultimately, it’s probably a side-effect of what you’ve done that’s solved the problem (NOTE: Updating statistics often “invalidates” plans so the side-effect of having updated statistics might be what solved your problem and NOT the updating of statistics itself). I talked about this in a related post: What caused that plan to go horribly wrong – should you update statistics?

So, to bring it back to the main point… HOW you execute and whether or not YOU choose to force the statement to be cached are VERY important decisions and, unfortunately, they must be made on a statement-by-statement basis. I’ll try to bring it all together in just a moment.

“I was pretty sure that NHibernate did the right thing from the point of view of the database. As it turn out, I was wrong. By default, NHibernate will generate two separate query plans for this query in the database. Sounds bad, right?”

First off, I’d have to agree WITH the default. The default is often referred to as “simple” parameterization. This is where SQL Server looks at the statement and decides – no, I shouldn’t cache a single plan because the plan might not be “safe” across all executions. And, as a result, it must be compiled for EVERY execution. I agree with the default because executing a bad plan many times can be WAY more expensive than recompiling to get a good plan and executing that.

But, there is a bad side to this too. If the statement is STABLE then recompiling every time wastes time in compiling (CPU) and, it can also lead to plan cache bloat. (I’ve talked more about this here: Plan cache and optimizing for adhoc workloads)
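
If single-use ad hoc plans are what’s bloating your cache, one server-level option is ‘optimize for ad hoc workloads’ – here’s a sketch of enabling it (it’s an advanced option; test before turning it on in production). It stores only a small plan stub until a statement is executed a second time:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;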

So… I think that NHibernate is doing the right thing by default.

But, having said that, that default is NOT great for everything.

“Just put the following in your configuration: <property name=‘prepare_sql’>true</property>”

MAYBE – but, only when the plan is STABLE.  (yes, I know… I still haven’t fully described how you can tell if a plan is stable)

So, what does that do? With a bit of digging (and, from the email that I received), it appears as though this essentially uses / creates a “prepared” statement. And, this also ties into the comment that I quoted above. And, while the comment sounds correct (I even agreed with it at first), that’s not actually how it works (yes, I was surprised too).

Prepared statements using sp_prepare

It turns out that telling NHibernate to use prepare_sql = true effectively does something like the following:

DECLARE @handle int;
-- Prepare (compile) the parameterized statement and get a handle back
EXEC sp_prepare @handle OUTPUT,
     N'@p1 VARCHAR(15)',
     N'SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] LIKE @p1';
-- Execute via the handle, supplying the parameter value
EXEC sp_execute @handle, N'Anderson';
-- Release the handle
EXEC sp_unprepare @handle;
GO

At first glance it seemed like SQL Server would have to generate a different plan for every execution (except those that used the same handle) but I found that as long as the statement was the same – the plan was the same. ACROSS CONNECTIONS. This surprised me. Even with the sp_unprepare. Even with the explicit @handle. YES, they re-used the plan. So, the effect is what was desired by setting the prepare_sql property to true. But, this is NOT always ideal (again, remember, the execution of a bad plan can often be more expensive than the cost of recompilation). That said, recompiling every time should still be avoided where the plans are stable.
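
If you want to verify the reuse yourself, here’s a sketch of the kind of check I used (it assumes you have VIEW SERVER STATE permission and that ‘member’ appears in the statement text): run the prepared statement from two different connections and watch usecounts climb on a single cached entry:

SELECT cp.objtype, cp.usecounts, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.[text] LIKE N'%member%'
  AND st.[text] NOT LIKE N'%dm_exec_cached_plans%';  -- exclude this query itself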

So, then I started wondering about what sp_unprepare does and I found another post discussing this a bit more: Watch out those prepared SQL statements. It’s interesting in and of itself but it exposes yet another issue with where / how the handles are managed. So, if you’re calling these yourself then you definitely want to sp_unprepare. But, that still doesn’t get rid of the PSP problem. And, again, these PSP problems can occur across connections.

Prepare your statements or not – that is the question?!

It really all boils down to this – effective testing. Most developers are fantastic at code-coverage testing. However, most don’t know about what I call “plan stability” testing. The good news is that if you are doing good code-coverage testing then it should make plan-stability testing easy!

Code-coverage testing should thoroughly test all of your code branches and all of your possible parameter combinations (or, at least, a healthy set of the more common combinations). In addition to making sure that these code combinations “do the right thing,” what I want you to do is see if their optimal plan varies across these executions. There are a few ways to do this:

For statements: If you can isolate the statement and hard-code different combinations of parameters – this might be the easiest way to do a test. What I would do with the statement above is something like this:

SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] = 'Tripp';
SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] = 'Anderson';
SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] = 'Smith';
SELECT [m].* FROM [dbo].[member] AS [m] WHERE [m].[lastname] = 'Abbas';
GO

Then, execute all of these in SSMS with the option “Include Actual Execution Plan” (from the Query drop-down). Now, before you think you need to understand everything about plans and what they mean… here’s the good news. For “stability testing” you do NOT need to care what the plan does. What you need to look at is the plan shape. (OK, sometimes it’s not quite this easy and I would look a little deeper [like at the types of operators and sometimes deeper still], but I’ll venture that the large majority of huge performance problems are caused by vastly different plan shapes – so the majority of problems can be found JUST by looking at the shape of the plan, without having to dive in much deeper.)

If the plan shape varies then the optimal plan for this statement varies; do not FORCE / PREPARE it into cache. 

If the plan shape does not vary then the optimal plan for this statement is consistent / stable; this statement should be forced / prepared in the cache. (IMPORTANT NOTE: if you don’t do a very good job testing a wide-enough range of combinations you might make a bad decision here. It’s very important to have realistic data, realistic parameter combinations, and a number of them.)

For stored procedures: For stored procedures there’s a similar / easy way to see if the plan varies from execution to execution – execute the procedure with different parameters using EXEC procedure parameters WITH RECOMPILE (see the sketch below). While this does have some limitations, it can be helpful for knowing whether a stored procedure might be prone to similar PSP problems. I did write a detailed post on that here: Building High Performance Stored Procedures.
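
Here’s a sketch using the procedure from the first half of this post – because WITH RECOMPILE builds a throw-away plan for each execution, you can compare the plan each parameter value would get without leaving anything in the cache:

EXEC [dbo].[ParamVarLit] @p1 = 'Tripp' WITH RECOMPILE;
EXEC [dbo].[ParamVarLit] @p1 = 'Anderson' WITH RECOMPILE;
GO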

The End Result

Blog posts like the one I referenced are based in fact. And, I believe that what they’ve done did help for THEIR problem. However, trying to oversimplify (prescribing a single coding practice for ALL environments) is often NOT IDEAL. Unfortunately, “it depends” IS the right general answer here.

If you’re looking for a lot more content around this, check out the links I’ve included in this post – there are numerous free resources including an online video I did as part of PASStv (linked to in the Building High Performance Stored Procedures post). And, if you still want more content, demos, and details, check out my two Pluralsight courses specifically targeting these problems. Start with this one: SQL Server: Optimizing Ad Hoc Statement Performance and then watch this one: SQL Server: Optimizing Stored Procedure Performance.

Have fun!
k

Our 5th SQLintersection is coming up in May 2015

I can’t believe we’re already approaching our 5th co-located SQLintersection & DEVintersection events… if you’re interested in how we got here – check out some of my past posts:

  1. SQLintersection: a new year, a new conference
  2. SQLintersection’s Fall Conference – It’s all about ROI!
  3. Fall SQLintersection is coming up soon and we can’t wait!
  4. SQLintersection Conference and SQLafterDark Evening Event – what a fantastic week in Vegas

The last post is from our most recent event in Las Vegas this past fall. It was our biggest show yet and while we’ve largely kept our focus on ROI and technical content the same (performance / troubleshooting / tales-from-the-trenches with best practices on how to fix them), we’ve also added even more social events so that you really get time to intersect with the conference attendees and speakers. The SQL-specific evening event SQLafterDark, added at our Fall show, was wildly popular and it’s returning for Spring!

And, so are many of our phenomenal speakers. But, we also have a few new industry-experts rounding out our line-up. We’re still working out a few remaining sessions but the show is already packed with enough to keep you busy for weeks after the show ends.

SQLintersection: Great Speakers!

Once again, I think a great show starts with great speakers and current / useful content. All of these speakers are industry experts that have worked in data / SQL for years (some can even boast decades) but all are still focused on consulting and working in the trenches. Every speaker is a SQL Server MVP (with the exception of the Microsoft speakers – but, I don’t think anyone’s going to question their SQL knowledge :)) and some are Microsoft Certified Masters in SQL Server. But, no matter what – ALL are focused on providing the most ROI that’s possible in their session(s) and/or their workshops. Check out this list of speakers:

  • Aaron Bertrand, Sr. Consultant, SQL Sentry, Inc. [blog | twitter]
  • Allen White, Practice Leader, Upsearch [blog | twitter]
  • Brent Ozar, Brent Ozar Unlimited [blog | twitter]
  • David Pless, Dedicated Support Engineer (PFE), Microsoft
  • Erin Stellato, Principal Consultant, SQLskills.com [blog | twitter]
  • Kevin Kline, Director of Engineering Services, SQL Sentry, Inc. [blog | twitter]
  • Kimberly L. Tripp, President/Founder, SQLskills.com [blog | twitter]
  • Paul S. Randal, CEO / Owner, SQLskills.com [blog | twitter]
  • Shep Sheppard, Senior Program Manager, Microsoft AzureCAT [blog | twitter]
  • Tim Chapman, Dedicated Support Engineer (PFE), Microsoft [blog | twitter]
  • Tim Radney, Principal Consultant, SQLskills.com [blog | twitter]

SQLintersection: When is it all happening?

The show officially runs from Monday, May 18 through Thursday, May 21 but there are both pre-conference and post-conference workshops that extend the show over a total of up to 6 full days. For the full conference, you’ll want to be there from Sunday, May 17 through Friday, May 22.

  • Sunday, May 17 – pre, pre-con day. There are two workshops running:
    • Very Large Tables: Optimizing Performance and Availability through Partitioning by Kimberly L. Tripp
    • Manage SQL Server Efficiently with PowerShell by Allen White
  • Monday, May 18 – pre-con day. There are two workshops running. And, the first keynote of the conference is Monday evening.
    • Performance Troubleshooting using Waits and Latches by Paul Randal
    • Setting Up and Troubleshooting SQL Server by Brent Ozar
  • Tuesday, May 19 through Thursday, May 21 is the main conference. Conference sessions will run all day in multiple tracks:
    • Check out our sessions online here: http://www.sqlintersection.com/SQL-Conference/Sessions
    • Be sure to check out our cross-conference events and sessions.
    • Get your pop-culture trivia and techie-SQL-trivia hat on and join us for SQLafterDark on Wednesday evening, May 20th
  • Friday, May 22 is our final full day, running with an absolutely applicable workshop for most development environments.
    • Performance Tuning When You Can’t Fix the Queries by Brent Ozar

SQLintersection: Why is it for you?

If you want practical information delivered by speakers who not only know the technologies but are also competent and consistently highly rated presenters – this is the show for you. You will understand the RIGHT features to troubleshoot and solve your performance and availability problems now!

Check us out: www.SQLintersection.com.

We hope to see you there!
kt

PS – Use the discount code ‘SQLskills’ when you register and receive $50 off registration!