How can you tell if an index is REALLY a duplicate?

There’s a lot of confusion around duplicate indexes and I think a lot of it has to do with how sp_helpindex (or even SSMS) display what’s in an index. What’s actually in the index – and how it’s structured – are not always what they seem. This was the original motivation behind my rewrite of sp_helpindex but even since then, I’ve still seen a lot of confusion. In today’s blog post I’m going to first explain EXACTLY which indexes are the same and which aren’t – as well as the faults in the tools.

So, starting with index structures… (it all starts with internals :) )

The clustered index IS the data. The KEY to the clustered index (what I often refer to as the clustering key) defines the way that the data is ordered (not necessarily truly, physically ordered on disk – but ordered LOGICALLY). And, no, I’m not going to rehash every bit of internals here… just a few reminders.

A nonclustered index is duplicated data (similar to an index in the back of a book). This duplicated data can be used to help reference the actual data (exactly like an index in the back of a book) OR can be used to actually respond to requests (for example, if you’re solely looking for a count of people whose last name begins with a particular letter, then an index that has LastName in it could be used to count them – without actually “looking up” the actual data). So, there are some very powerful uses for indexes. But, alas, this isn’t a post on using indexes or indexing strategies – this is ALL about internals (and understanding the structure of an index). So, I’m going to cut to the chase!

A nonclustered index always has:

  • Key (this is what defines the order of the index)
  • A leaf-level entry (this is the actual data stored in the index + the lookup value* + any included columns) – however, ALL of these columns are only stored once (and they’ve ALWAYS only been stored once here so, even if you reference a column that’s part of the lookup value, SQL Server will NOT duplicate it again).

*So, what is this lookup value?

The lookup value is what SQL Server uses to reference the actual data row. If a table has a clustered index then the lookup value is the clustering key (and EVERY column defined by it). If a table does not have a clustered index (it’s then a HEAP) then SQL Server uses something called a RID. A RID is an 8-byte structure consisting of 2:4:2 bytes which break down into 2 for the FileID, 4 for the PageID and 2 for the slot number. And while RIDs (and their history) are very interesting – it’s not entirely relevant here (how they specifically work and/or their structures) but if/when they’re in an index, I’ll list it as RID.
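If you want to actually see a RID, there's an undocumented way to do it (use at your own peril, and only on a test system). This is purely a hypothetical sketch: the table name is made up, and %%physloc%% plus sys.fn_PhysLocFormatter (SQL Server 2008+) are undocumented and unsupported:

```sql
-- Hypothetical illustration only: HeapDemo is a made-up table, and
-- %%physloc%% / sys.fn_PhysLocFormatter are undocumented (2008+).
CREATE TABLE dbo.HeapDemo (ID int, Val char(10));   -- no clustered index = heap
INSERT dbo.HeapDemo VALUES (1, 'a'), (2, 'b');

-- Each row's physical locator, formatted as (File:Page:Slot) – the same
-- pieces that make up the RID described above.
SELECT ID,
       sys.fn_PhysLocFormatter(%%physloc%%) AS [RID (File:Page:Slot)]
FROM dbo.HeapDemo;
```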

Let’s now put all of this together with an example (or two) starting with a relatively easy one.

USE [JunkDB];
GO

CREATE TABLE Test
(
    TestID  int identity,
    [Name]  char(16)
);
GO

CREATE UNIQUE CLUSTERED INDEX TestCL ON Test(TestID);

CREATE INDEX TestName ON Test([Name]);

EXECUTE sp_helpindex Test;

The output shows:

index_name      index_description                               index_keys
TestCL          clustered, unique located on PRIMARY            TestID
TestName        nonclustered located on PRIMARY                 Name

And, while that looks right, it’s horribly misleading. The index that’s on TestName also includes TestID in the index as well. And, not just in the leaf level but in the tree (for ordering purposes). So, that should really show Name, TestID. But, it’s a bit more confusing if you throw in this:

CREATE UNIQUE INDEX TestNameUnique ON Test([Name]);

EXECUTE sp_helpindex Test;

The output shows:

index_name      index_description                               index_keys
TestCL          clustered, unique located on PRIMARY            TestID
TestName        nonclustered located on PRIMARY                 Name
TestNameUnique  nonclustered, unique located on PRIMARY         Name

And, at this point, it doesn’t look like there’s any difference at all between the second and third indexes (well, except that the third index requires that the values be unique – shown in the description). But, as for the “index_keys” they look the same. However, they’re actually not the same (in all parts of the tree). So, this is why I tend to differentiate between the “leaf” and the non-leaf levels of an index (when I describe them). And, it only gets more complicated when you throw in included columns (2005+).

So, how do you tell the difference? Unfortunately, there are NO tools within SQL Server (or even any 3rd party tools that I know of) that display this through the UI, etc. But, you could start by using my replacement for sp_helpindex. I’ve updated and rewritten it a few times but the latest version will always be in the sp_helpindex rewrites category here. I personally think that’s the easiest way (and it’s the reason why I originally wrote it!). Using it, you can see how the output is more detailed.

EXECUTE sp_SQLskills_SQL2008_helpindex Test;

The output shows (specifically showing ONLY the last 2 columns):

columns_in_tree              columns_in_leaf
[TestID]                     All columns "included" – the leaf level IS the data row.
[Name], [TestID]             [Name], [TestID]
[Name]                       [Name], [TestID]

Now, we’re getting somewhere. We can definitely see that these two indexes ARE different. A nonunique nonclustered index needs to have the lookup value pushed up into the tree (for navigation). A unique nonclustered index does not.
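If you want to verify this yourself on disk, one (hedged) way is sys.dm_db_index_physical_stats in DETAILED mode, which reports record sizes per level of each index. In the non-leaf levels (index_level > 0), the records of the nonunique TestName index should be about 4 bytes wider than those of TestNameUnique, since only the nonunique index carries the int TestID up the tree:

```sql
-- Run in JunkDB against the Test table from above.
USE [JunkDB];

SELECT i.name AS index_name,
       ps.index_level,                   -- 0 = leaf, > 0 = tree (non-leaf)
       ps.min_record_size_in_bytes,
       ps.max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats
         (DB_ID(N'JunkDB'), OBJECT_ID(N'dbo.Test'), NULL, NULL, 'DETAILED') AS ps
    JOIN sys.indexes AS i
        ON i.[object_id] = ps.[object_id]
           AND i.index_id = ps.index_id
ORDER BY i.index_id, ps.index_level;
```

Note that the index needs enough rows to have built a multi-level tree before the non-leaf levels show up here.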

Next, let’s go with a [just slightly more challenging] example:

USE [JunkDB];
GO

CREATE TABLE Member
(
    MemberNo    int     identity,
    FirstName   varchar(30)     NOT NULL,
    LastName    varchar(30)     NOT NULL,
    RegionNo    int
);
GO

CREATE UNIQUE CLUSTERED INDEX MemberCL ON Member(MemberNo);

CREATE INDEX MemberIndex1 
ON Member(FirstName, RegionNo, MemberNo)
INCLUDE (LastName);

CREATE INDEX MemberIndex2 
ON Member(FirstName, RegionNo)
INCLUDE (LastName);

CREATE INDEX MemberIndex3 
ON Member(FirstName, RegionNo)
INCLUDE (MemberNo, LastName);

CREATE UNIQUE INDEX MemberIndex4 
ON Member(FirstName, RegionNo)
INCLUDE (MemberNo, LastName);

First, let’s review with sp_helpindex:

EXECUTE sp_helpindex Member;

The output shows:

index_name      index_description                           index_keys
MemberCL        clustered, unique located on PRIMARY        MemberNo
MemberIndex1    nonclustered located on PRIMARY             FirstName, RegionNo, MemberNo
MemberIndex2    nonclustered located on PRIMARY             FirstName, RegionNo
MemberIndex3    nonclustered located on PRIMARY             FirstName, RegionNo
MemberIndex4    nonclustered, unique located on PRIMARY     FirstName, RegionNo

Looking solely at sp_helpindex it LOOKS like the first nonclustered index is different from the others and that the 2nd, 3rd and 4th nonclustered indexes are the same. However, that’s actually NOT true. Next, let’s use my version of sp_helpindex.

EXECUTE sp_SQLskills_SQL2008_helpindex Member;

The output (again, just showing the last two columns of output):

columns_in_tree                           columns_in_leaf
[MemberNo]                                All columns "included" – the leaf level IS the data row.
[FirstName], [RegionNo], [MemberNo]       [FirstName], [RegionNo], [MemberNo], [LastName]
[FirstName], [RegionNo], [MemberNo]       [FirstName], [RegionNo], [MemberNo], [LastName]
[FirstName], [RegionNo], [MemberNo]       [FirstName], [RegionNo], [MemberNo], [LastName]
[FirstName], [RegionNo]                   [FirstName], [RegionNo], [MemberNo], [LastName]

From this, you can see that all 4 indexes have the same leaf level but index 4 has a slightly different tree structure. Ultimately, it’s indexes 1, 2 and 3 that are the same and index 4 is actually [slightly] different. How they’re different (outside of the fact that the 4th index guarantees uniqueness) is a bit beyond the scope of this post. But, yes, there are some [relatively minor in this case] differences. And, since I’m ONLY looking for indexes that are the same then only 1, 2 and 3 fit that requirement.

And, things get MUCH more complicated if you have a multicolumn clustering key and/or more complicated INCLUDEs.

Having said that – how do you find duplicate indexes?

Well… I had started this by putting together a simple way for you to check for dupes with my version of sp_helpindex but then I figured out an issue with included columns. I’m showing the structures as they’re defined (and stored). But, in terms of usage – the order of the columns in the INCLUDE does not matter. As a result, two indexes with different orders for their included columns will show as two different indexes (technically they are and they are stored differently). However, there are absolutely NO differences (in terms of usage) so I’ll need to write code to adjust for that (to truly find duplicates).
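As a rough illustration of what that adjustment could look like (a sketch only, and it assumes SQL Server 2017+ since it uses STRING_SPLIT and STRING_AGG, which didn't exist when this post was written), you could normalize each INCLUDE list by sorting the column names alphabetically before comparing, so that INCLUDE (a, b) and INCLUDE (b, a) compare as equal:

```sql
-- Sketch: normalize a comma-separated INCLUDE list (SQL Server 2017+).
-- The sample list is made up for illustration.
DECLARE @included nvarchar(max) = N'[MemberNo], [LastName]';

SELECT STRING_AGG(LTRIM(RTRIM(value)), N', ')
           WITHIN GROUP (ORDER BY LTRIM(RTRIM(value))) AS normalized_include
FROM STRING_SPLIT(@included, N',');
```

Normalizing both sides this way before the equality comparison is what would let a dupe-finder treat order-only differences in INCLUDE as duplicates.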

For now, here’s some quick code to get you closer. I had written some of this when we discussed this in our London Immersion Event. However, I’ve tweaked it even further here after thinking about some interesting exceptions. This code will find absolute duplicates (where the order of the structure is completely identical). To use it, you’ll need to enter the schema name and table name (marked MODIFY below).

IF OBJECT_ID(N'tempdb..#FindDupes') IS NOT NULL
    DROP TABLE #FindDupes;

CREATE TABLE #FindDupes
(
 index_id           int,
 is_disabled        bit,
 index_name         sysname,
 index_description  varchar(210),
 index_keys         nvarchar(2126),
 included_columns   nvarchar(max),
 filter_definition  nvarchar(max),
 columns_in_tree    nvarchar(2126),
 columns_in_leaf    nvarchar(max)
);

DECLARE @SchemaName sysname,
        @TableName  sysname,
        @ExecStr    nvarchar(max);

SELECT @SchemaName = N'schemaname',     -- MODIFY
       @TableName = N'tablename';       -- MODIFY
SELECT @ExecStr = 'EXECUTE sp_SQLskills_SQL2008_helpindex ''' 
                    + QUOTENAME(@SchemaName) 
                    + N'.' 
                    + QUOTENAME(@TableName)
                    + N'''';

INSERT #FindDupes
EXEC (@ExecStr);

SELECT t1.index_id, COUNT(*) AS 'Duplicate Indexes w/Lower Index_ID', 
        N'DROP INDEX ' 
            + QUOTENAME(@SchemaName, N']') 
            + N'.' 
            + QUOTENAME(@TableName, N']') 
            + N'.' 
            + t1.index_name AS 'Drop Index Statement'
FROM #FindDupes AS t1
    INNER JOIN #FindDupes AS t2
        ON t1.columns_in_tree = t2.columns_in_tree
            AND t1.columns_in_leaf = t2.columns_in_leaf 
            AND ISNULL(t1.filter_definition, 1) = ISNULL(t2.filter_definition, 1) 
            AND PATINDEX('%unique%', t1.index_description) = PATINDEX('%unique%', t2.index_description)
            AND t1.index_id > t2.index_id
GROUP BY t1.index_id, N'DROP INDEX ' + QUOTENAME(@SchemaName, N']') 
                    + N'.' 
                    + QUOTENAME(@TableName, N']') 
                    + N'.' + t1.index_name;
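If you'd rather not depend on my sp_helpindex replacement at all, here's a hedged alternative sketch that compares index definitions straight from the catalog views. Be warned: this compares only the *declared* key and INCLUDE lists, so unlike the output above it does NOT account for the implicitly-included clustering key, for uniqueness or for filters. Treat its output as candidates to investigate, not as indexes to blindly drop:

```sql
-- Sketch only: finds indexes on the same table whose declared key lists
-- (in order, with sort direction) and INCLUDE lists (order-insensitive,
-- sorted by name) match exactly. Works on SQL Server 2005+.
WITH IndexCols AS
(
    SELECT i.[object_id],
           i.index_id,
           i.name,
           -- key columns, in key order, with ASC/DESC flag
           key_cols  = (SELECT QUOTENAME(c.name) + N';'
                               + CAST(ic.is_descending_key AS nchar(1))
                        FROM sys.index_columns AS ic
                            JOIN sys.columns AS c
                                ON c.[object_id] = ic.[object_id]
                                   AND c.column_id = ic.column_id
                        WHERE ic.[object_id] = i.[object_id]
                              AND ic.index_id = i.index_id
                              AND ic.is_included_column = 0
                        ORDER BY ic.key_ordinal
                        FOR XML PATH('')),
           -- INCLUDEd columns, sorted by name (their order doesn't matter)
           incl_cols = (SELECT QUOTENAME(c.name) + N';'
                        FROM sys.index_columns AS ic
                            JOIN sys.columns AS c
                                ON c.[object_id] = ic.[object_id]
                                   AND c.column_id = ic.column_id
                        WHERE ic.[object_id] = i.[object_id]
                              AND ic.index_id = i.index_id
                              AND ic.is_included_column = 1
                        ORDER BY c.name
                        FOR XML PATH(''))
    FROM sys.indexes AS i
    WHERE i.index_id > 0       -- skip heaps
)
SELECT OBJECT_SCHEMA_NAME(dupe.[object_id]) AS [schema_name],
       OBJECT_NAME(dupe.[object_id])        AS [table_name],
       dupe.name                            AS [duplicate_index],
       kept.name                            AS [duplicated_by]
FROM IndexCols AS dupe
    JOIN IndexCols AS kept
        ON kept.[object_id] = dupe.[object_id]
           AND kept.key_cols = dupe.key_cols
           AND ISNULL(kept.incl_cols, N'') = ISNULL(dupe.incl_cols, N'')
           AND kept.index_id < dupe.index_id;
```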

I’m planning to build a sproc around this next, as well as get rid of cases where the included columns are just in different orders. It will be another sp_ for master and I do hope to wrap it up shortly! That code will REALLY find ALL unnecessary duplicates and help you to drop them! Stay tuned!!

Cheers and thanks for reading!!


Looking forward to London

Our next Immersion Event is in London. It follows our DevConnections conference in Karlsruhe, Germany (which was great fun) and a bit of a holiday in between where we've been driving and seeing a few places I wasn't sure I'd ever get to (specifically Liechtenstein and Luxembourg). Our driving tour definitely warrants its own post and if you look on a map – you'll probably be a bit confused but our tour mostly includes castles and cathedrals (a longer/detailed post from one of us will definitely happen!). Here's our list:

  • June 5 – Arrival at FRA and then drove to Stuttgart to spend the night
  • June 6 – Drove Stuttgart to Liechtenstein through Austria to spend the night in Liechtenstein (*loved* it!)
  • June 7 – Drove Liechtenstein to Karlsruhe through Switzerland (stopping in Säntis)
  • June 8-9-10 – We were at the DevConnections conference in Germany. Had a great time and met a lot of fun SQL folks. We also met with the local user group(s) and had a few beers (and my favorite – a radler (similar to a shandy – because I just can't drink a lot of beer))
  • June 10 – After the conference ended we drove from Karlsruhe to Basel, Switzerland. Some good friends of ours moved to Basel about 8 months ago and we were determined to make it there! What a lovely city on the Rhine.
  • June 11 – Drove Basel, Switzerland to Luxembourg City, Luxembourg to spend the night
  • June 12 – Drove Luxembourg City to Bruges, Belgium to spend 2 nights
  • June 14 – Drove Bruges to Amsterdam, Netherlands to spend 2 nights
  • June 16 – Driving from Amsterdam to Cologne, Germany to spend 2 nights before we head to London (Saturday the 18th) to get ready for our Immersion Event!

And, that's where we're at next week. We're really looking forward to it. And, if you're local – feel free to join us for one of our evening presentations on Tuesday or Wednesday evening. We have two special evening events being presented by two companies whose products we both use and recommend. SQL Sentry's been a sponsor of ours since our first Immersion Event in the US and we're really happy they've come over with us to the UK. And, Fusion-io is a new sponsor for us; we're equally pleased to be hearing about their new products. If you’re interested in hearing about them and meeting us for some Q&A – please feel free to join us. The specific location details and session abstracts are within the links. Here's just a quick overview.

Each evening (Tue/Wed) – we'll start with an open Q&A from 6:30 to 7pm and then our guest presenter will present from 7 to 8:30. Then, we might be forced, once again, to head to the bar in the hotel. It will be a hardship but someone will have to do it!

Tuesday evening, 21 June 2011: SQL Sentry will present Tips for Query Tuning with SQL Sentry Plan Explorer and More!

Presenter: Scott Fallen, Technical Specialist, SQL Sentry, Inc.

If you’re NOT an Immersion Event attendee, please register for this FREE event here:

Wednesday evening, 22 June 2011: Fusion-io will present SQL at the speed of FLASH

Presenter: Mat Young, Senior Marketing Director, Fusion-io

If you’re NOT an Immersion Event attendee, please register for this FREE event here:

Thanks so much and we look forward to seeing you there!

Trusting the tools!

Well, it's been a crazy few weeks! Paul and I have been booked solid with client engagements, Immersion Events, conferences and other things (like chapter editing for the upcoming SQL Server Deep Dives Volume 2) and so I've been a bit behind with blogging. I have a ToDo list of items to blog about but I've got one that's fairly urgent. I'll mention a few issues here but above all – don't trust anyone.

OK, I know that sounds a bit paranoid. Not meaning to be but this is a post that I'm adding to my "Just because" series as well as my "Are you kidding me?" series. That's how bad this is… almost as bad as these "stupid bacon related tattoos" – really, someone has this (forever!).

As a DBA (and/or SQL Architect/Database Designer/Developer) we're often tasked with making changes to databases and ideally, we want to use tools to make it easier. While I do agree that many tools help tremendously, I've been shocked lately by a few that have done the wrong thing. Paul blogged about a third party tool that didn't analyze indexes correctly here: Beware of advice from 3rd-party tools around dropping indexes (ultimately, the recommendations for what to drop – were wrong). And, recently, I've been shocked to learn that some of the schema change options in the database designer (in SSMS) are less than optimal when changing index definitions (especially those that handle changes to the clustering key).

Specifically, we had a customer that was doing everything right by testing the changes on a secondary system but SSMS was doing everything wrong (the order in which it made the changes and how it made the changes – were horrible). The end result is that if you're going to make schema changes, you really need to get more insight into what the tool is doing. And, many tools have an option to script out the changes rather than execute them. So, my "quick tip" is to do that and see what's going on.

And, if you're going to make schema changes in ANY tool – then work through what those changes are going to do. Most important – consider re-arranging the SQL (especially wrt indexes). If you're ever changing the CL index you need to do the following:

  1. Drop nonclustered indexes first
  2. Drop the clustered next
    • IMPORTANT NOTES: If you're completely changing it you definitely need to drop the nonclustered and then drop the CL. BUT, if you're keeping the name and only changing the definition (and it's NOT a constraint-based index) then you can use WITH DROP_EXISTING. If you use DROP_EXISTING you do NOT need to drop the nonclustered indexes first.
  3. Create the new clustered
  4. Recreate the nonclustered indexes
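To make the DROP_EXISTING note above concrete, here's a hypothetical sketch against the Member table from earlier (the new key combination is made up purely for illustration). Because the clustered index keeps its name and is rebuilt in place, the nonclustered indexes are rebuilt only once and don't have to be dropped first:

```sql
-- Sketch: change the clustering key definition in ONE step with
-- DROP_EXISTING (index name MemberCL stays the same; new key is
-- illustrative only). Nonclustered indexes are rebuilt once, not twice.
CREATE UNIQUE CLUSTERED INDEX MemberCL
ON Member (MemberNo, RegionNo)
WITH (DROP_EXISTING = ON);
```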

Some of the weird things that we've seen:

  1. Some changes to table structure (changing from a heap to a clustered table or from a clustered table to a heap) cause disabled nonclustered indexes to get rebuilt (and re-enabled). That's not even SSMS – that's a general SQL Server problem. In my opinion this means that disabling indexes is somewhat broken. So… gotta use drop/create (which is a bummer!).
  2. Changes to the clustering key in SSMS are sometimes done by dropping the clustered and then recreating the clustered. This will cause your nonclustered indexes to be rebuilt TWICE!
  3. If you have col1 that's currently clustered and col2 is currently nonclustered but you want to swap them… you can do that in the UI and then save but here's what they do:
    1. They drop the nonclustered
    2. They drop the clustered (this causes all of the remaining nonclustered to get rebuilt)
    3. They create the nonclustered
    4. They create the clustered (this causes all of the remaining AND the new nonclustered to get rebuilt… again)

This is only a short list of some of the crazy things that we've seen. I'm working on a more comprehensive overview of all of these things but I wanted to get a few of them out there. BEWARE OF THE TOOLS and always use scripting and testing to make sure that things are doing what you think. If you're EVER going to make changes to a critical system – this is exactly what I'd do:

  1. Script out *JUST* the objects and their definitions from the production environment
  2. Take *just* the schema and then go through and make your changes in the designer. Instead of saving the changes (which would immediately implement them), script out the changes and review the SQL.
  3. Run through the SQL and see what it does (but, this is an empty database so time isn't going to be as much of a factor here). If you think there's something wrong – ask around (colleagues/twitter/forums)…
  4. Then, once you feel you have a good version of the script THEN backup/restore the production database to your test system (I hope that you have this??) and THEN see how long it takes. If you have new concerns then ask around again!
  5. Next, before you consider it for production – you need to thoroughly test your applications. Are they affected by these changes?
  6. Then, and only when you've thoroughly tested it – you can consider it for production.

Picky yes… surprises NO. On a critical system you CANNOT afford surprises that create downtime – or worse, data loss.

Thanks for reading!!

Indexes: just because you can, doesn’t mean you should!

I've decided to create a new series of posts – just because you can, doesn't mean you should! These are going to go in a new category called "Just because" and I've already got a few lined up in this series. The first one is one I've spoken about often and one that's near/dear to my heart – indexes.

I've often received the question – why did they increase the limit from 249 nonclustered indexes to 999 nonclustered in SQL Server 2008? Does that mean that I have a need for more – potentially, many more – indexes? Simply put – yes and no. (a new spin on "it depends" :)). However, for most environments, NO is the correct answer. And, a VERY STRONG NO at that. To help give you some insight, I've answered the question "how many indexes should a table have" in this SQL Server Magazine Q&A blog post titled How Many Indexes Should I Create? (however, it's a difficult question to answer in general). But if everyone says that tables should have minimal indexes then why would the SQL team increase the limit from 249 to 999? It just doesn't seem to make sense.

And, it's a great start to my "just because" series in that this is one area where you can really shoot yourself in the foot if you create too many indexes (which, btw, has become a VERY common problem). So… I want to break this down into a few parts (and therefore posts).

Why shouldn't I create a lot of indexes?

  • Indexes have overhead for every INSERT/DELETE on the table. Every INSERT must add a row to each/every nonclustered index. Every DELETE must remove a row from each/every nonclustered index.
  • Indexes require disk space. While nonclustered indexes are generally a lot smaller than the table itself their total size (of all indexes together) can often exceed the size of the table by 2 or 3 times. This can be quite normal. However, a table that is poorly indexed and severely over indexed might have index space which is 6-10 times the size of the base table. This is wasted space on disk and also in cache. While disk space is relatively cheap, memory is not. But, that's not even the most important point. There's a lot more to it… For more insight, check out the post titled: Disk space is cheap… (that's NOT the point!)
  • Even if you have the disk space to store all of these indexes, it's unlikely that you can fit all of your tables and all of your excessive indexes in cache. Most environments have a hard time fitting all of their data in general in cache – let alone a poorly/overindexed set of data. As a result, you're going to be constantly marshalling data in and out of memory and causing your system to perform excessive IOs.
  • You can cause the optimizer to waste time in evaluating indexes as well as cause compilation plans to be bigger. This can in turn waste both time and waste even more cache. Especially if you have a lot of plans being generated through adhoc statements. Each statement might waste quite a bit of cache. For more insight, check out the post titled: Clearing the cache – are there other options?
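As a quick (hedged) way to see how much of your plan cache is being burned by plans that were used exactly once, the documented sys.dm_exec_cached_plans DMV can be aggregated like this (the usecounts = 1 filter is a common convention for spotting single-use adhoc bloat, not an official threshold):

```sql
-- How much cache is held by single-use plans, grouped by plan type?
-- 'Adhoc' dominating here usually points at unparameterized statements.
SELECT cp.objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(cp.size_in_bytes AS bigint)) / 1024 / 1024 AS cache_mb
FROM sys.dm_exec_cached_plans AS cp
WHERE cp.usecounts = 1
GROUP BY cp.objtype
ORDER BY cache_mb DESC;
```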

Why is this a common problem?

I'll go into more details in a moment but there are really three reasons:

  1. SHOWPLAN shows missing index recommendations when evaluating a plan and some folks take that to mean that they absolutely should create the index. Often, I've seen multiple developers working on a project all get their index recommendations in a vacuum (per se) and then they just provide scripts to the DBA for the indexes to implement. At this point, someone should evaluate the overall combination but often no one does. And, this leads to another reason why this is such a big problem.
  2. SQL Server 2005 introduced the missing index DMVs and some people believe that all of the indexes it recommends should be created (which is far from the truth).
  3. SQL Server (every version) will allow you to create redundant indexes. Not just similar (and redundant) but completely duplicate. There is a reason and I wrote about it in this SQL Server Magazine Q&A titled: Why SQL Server Lets You Create Redundant Indexes

The end result of this combination is that more and more of the servers I see have too many indexes and often redundant or similar indexes. This wastes resources, time, cache and ultimately adds to a server's inability to scale as the data set grows. See, during development when the data sets were smaller, none of this really mattered. Almost none of it is detected until more data and more users (and unfortunately, more code and more applications) exist. And, at that point it's too late to make schema changes BUT, it's not too late to make index changes. Phew. Believe me, many problems are MUCH harder to solve after an application is in production but indexes are one of the easier ones.

Common Problem 1: Index Recommendations from SHOWPLAN (the green hint)


Don't get me wrong – I *LOVE* that this exists. And I *always* look at what they recommend. It's a good way to see what tables might have holes in my indexing strategy. But, there's a potential problem here. These recommendations are tied to the Missing Index DMVs and so BOTH have this fault but it's A LOT more apparent when using the "green hint" than when using the DMVs. My main reason to think twice about these recommendations is that the Missing Index DMV recommendations are based SOLELY on the plan that was executed. SQL Server did not analyze the statements for different algorithms. If a hash join was used then the index recommended will be an index that will help the hash join. However, your query might be SIGNIFICANTLY better off using a merge join instead. Now, please understand that this is NOT a generalization (I'm not saying that merge is better than hash) – it's just an example of a situation where I saw a query go from many minutes down to half that with the best index for the hash join (hey, that's good, right?) BUT, the query went down to only 4 seconds with the merge join's index. There was NO comparison here.

There are many cases where the plan that was executed IS the best plan but there are also cases where it's not. As a result, if I'm about to create the index from the "green hint" in SHOWPLAN, I will always run the individual query through DTA (the Database Engine Tuning Advisor) when possible. This will more thoroughly evaluate the query, providing me with alternatives. Yes, it's "focused" tuning but if you're already going to create an index for that query then you might as well work out the best one. Having said that, I will ONLY create the index recommended for the table that the "green hint" pointed at and I will see if that gives me enough of a gain. DTA will often make additional recommendations for both indexes and stats. For stats, I would HIGHLY recommend creating all of the statistics it recommends but ONLY for the table(s) where you actually create the indexes it recommends. Personally, I don't always create all of the recommended indexes. If I'm going to create them I'll do so iteratively – testing the improvements as I go so I know what indexes give me the best gains. But, I will always add the statistics for the tables where I create their recommended indexes. Why? Because the stats (and usually multi-column stats) are there to help the indexes and their uses. So, it's pretty much always a good thing to create those stats.

Common Problem 2: The Missing Index DMVs

Don't get me wrong – I *LOVE* that these exist too and just like SHOWPLAN, I use these as well. There are some great examples out there of ways to better evaluate what's in them, what should be created and there are even some cool posts about clever ways to corroborate what's recommended by looking at showplan XML, etc. Check out these links:

However, there's a dark side to these missing index DMVs and these points aren't often mentioned and – they might SIGNIFICANTLY change the way that you view these recommendations. First, they're not perfect (ok, nothing is but I had to remind you :). The Missing Index DMVs might recommend indexes that you have already created – even after you created them, even after a reboot, etc. They will just continue to recommend the index as if it didn't exist. Check out Paul's post titled: Missing index DMVs bug that could cost your sanity. And, second, my issue ties in with Common Problem 1. Again, these recommendations are NOT for the best index but for indexes that will help the plans as they were executed.

However, there's a HUGE win that comes from the Missing Index DMVs OVER SHOWPLAN. And, that's that they tune for query classes. What this means is that a query that's higher up in the "user impact" category will likely affect many queries and therefore it is much higher on the list to consider. But, even armed with this information, I would NOT automate the creation of these indexes based on the results of queries that access these DMVs. Instead, as an application runs in production and as workload characteristics are being evaluated – save this information and use it when you're tuning to help you know better what indexes might give you the best improvements.
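For reference, the usual way to pull and rank these recommendations is a three-way join across the missing index DMVs. This is a sketch: the est_benefit expression below is a widely used community heuristic (cost × impact × uses), not an official metric:

```sql
-- Rank missing-index suggestions by a common "estimated benefit" heuristic.
-- Remember: these reflect the plans as executed, NOT the best possible index.
SELECT TOP (25)
       d.[statement] AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks + s.user_scans AS uses,
       CAST(s.avg_total_user_cost * s.avg_user_impact
            * (s.user_seeks + s.user_scans) AS decimal(18, 2)) AS est_benefit
FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
        ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
        ON s.group_handle = g.index_group_handle
ORDER BY est_benefit DESC;
```

Even with a ranked list like this, evaluate each suggestion against your existing indexes (can one be widened or consolidated?) before creating anything.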

Common Problem 3: Duplicate/redundant Indexes

I can't help you with this one. Except feel free to increase the vote count on this on Connect. I found that someone recently (3/17/2011) re-added this as a DCR. I've made a comment to further explain it and so I hope that they will reevaluate it.

In Summary

The tools are great. You definitely WANT to use them. However, you must understand their caveats and pitfalls and add indexes only when it's appropriate. Adding more and more indexes does NOT improve performance. And, above all – be sure to review your existing indexes to see if you can drop an existing index OR make your index slightly different to consolidate index definitions into one index that does a lot more with less overhead, etc. I call this Index Consolidation. And, this is another topic for another day. :)

So – why would you want to create lots and lots of indexes?

Ha, I bet you never thought we'd get here? And, for today – I'd rather you just DON'T do it. Just because you can, doesn't mean you should. I'll give you some reasons to consider LOTS of indexes…later…after you've cleaned up all of these redundant indexes! And, to help you find redundant indexes – use my latest version of sp_helpindex from my category: sp_helpindex rewrite. This will help you to more accurately see what's really in those indexes.

Thanks for reading!

Become a SQLskills Insider NOW!

Important non-SQL request: Help save polar bears in Russia!

Last year Paul and I had the pleasure of taking an amazing adventure into the Arctic with Heritage Expeditions on the Spirit of Enderby (Professor Khromov). In addition to seeing a ton of wildlife (polar bears, arctic fox, walrus, puffins, whales, etc.) we were fortunate enough to meet some of the local experts who have spent years of their lives studying and helping to better understand many of these animals. Two of the people that we met were Dr. Nikita Ovsyanikov (a polar bear expert who has worked on the Wrangel Island Nature Reserve for 30+ years and has dedicated his career to studying polar bear behavior and educating others about it) and his daughter Katya Ovsyanikova, who has been going to Wrangel since she was a child and has dedicated her time and studies to sea otters and other marine critters as well.

So, why am I blogging about this now?

Katya is in the US this week (she arrived yesterday) to speak at the Sea Otter Conservation Workshop at the Seattle Aquarium. Since she arrived we've been chatting about the state of affairs for polar bears and the possibility of the hunt being opened up again in the Chukotka region of Russia (aka the Russian Far East). While there has been some media attention stating that the polar bear hunt has been reopened (Russia lifts ban on polar bear hunting and Russia's Chukotka backs polar bear hunting), there is also still some debate about the current status. It has not yet been approved by the Russian Federal Government even though it's been signed by the Chukotka Regional Government. Because this is still under debate, you can help to influence their decision by writing to them now. A more formal letter about the state of the polar bear is attached below.

What can you do?

(1) Tell as many people as possible, tweet, facebook, do you know someone in the media? Please spread the word! And, if they want to talk to Katya – shoot me an email and I'll get you/them hooked up!

(2) Write a letter of discontent to the Ministry of Natural Resources:

123995, Bolshaya Gruzinskaya 4/6,
Moscow, Russia
Minister – Trutnev Yuriy Petrovich

(3) Sign the petition: Help to save Polar Bears in Russia!

Dear friends!

Sadly, a very disturbing situation with polar bears has arisen in Russia.  Their fate hangs in the balance if legislation currently before the Russian Government is passed.

The total number of polar bears in the wild is only about 20,000.  Part of the Chukchi-Alaskan population, which was almost exterminated by the middle of the twentieth century, lives on the Chukotka Peninsula (North-East Russia). At present, specialists estimate its size to be no greater than 2,000 bears.  After the USSR imposed a moratorium on polar bear hunting in 1957, the Russian coast was a stronghold for polar bear conservation for decades, and the Chukchi-Alaskan population had a chance to increase in numbers. Unfortunately, poaching also increased in the Chukotka region, and has been thriving there, particularly since the 1990s.  Poachers now kill up to 300 polar bears a year for their pelts, and do so without even trying to conceal it.  There is a huge black market for those pelts, with each one selling for about $20,000.

In 2007, a bilateral Russian-American agreement on the conservation and management of the Alaska-Chukotka polar bear population was legislated, mentioning a possibility of issuing a hunting quota for indigenous people on both sides. Disregarding the fact that IUCN's Polar Bear Specialists Group considered the situation with polar bears critical, the Russian-American Polar Bear Commission issued a resolution that a quota may be introduced on both sides of the Bering Strait. The decision was reached despite the fact that there is not enough data on the size of the population.  Moreover, the possibility of a moratorium was not even considered. As of March 2011, after more than half a century of moratorium, the hunt is being re-evaluated and potentially re-opened in Russia!

Global warming has been rapidly shrinking the extent of the Arctic sea ice, the major platform from which polar bears hunt during most of the year.  It is difficult for polar bears to survive the ice-free seasons because their main habitat (drifting ice) is shrinking, and what remains is far from the best hunting grounds. More and more often in recent years, scientists have observed famished bears, as well as fewer females showing up to den on Wrangel Island, the main polar bear “maternity ward” in the Chukchi Sea and a protected Nature Reserve since 1976.

Under these stressful conditions, the impacts of poaching on the polar bear population are disastrous, and implementation of hunting will simply constitute legalization of poaching, and a cover for the poachers.  With regards to traditional use, the indigenous people of Chukotka have never hunted polar bears for subsistence. Legalization of polar bear hunting is just a path to the commercial use of the species. And, while a quota is intended to be issued only for “subsistence” use by indigenous people, trophy hunters are actively lobbying for implementation of the quota, which, in itself, is revealing as to the consequences of legalizing polar bear hunting. 

It is not global warming that is dangerous for polar bears; the species has survived cyclic warm periods and glaciations before.  It is the presence and activity of humans in the Arctic that pose a very real threat for bears.  With all the pressures on the population now, it is disastrous, malicious, and inhumane to open the hunt.  It would be tragic to lose such a unique species because of the greed and cynicism of a small group of people interested in exploiting these animals for profit.

We are deeply unhappy with this situation and are trying to raise people’s awareness and to convince the Russian government not to legislate polar bear hunting.  It is not too late, and we have to do everything possible to express the discontent of the people.

For further information on the subject, see the article by polar bear specialist Nikita Ovsyanikov:

And, for additional information – check out some of these links:

Thanks for reading and an even bigger thanks for giving your time and your signature to help these beautiful mammals.

Immersion Event Series – 4 Weeks – August 2011

As many of you know, we’ve changed our Immersion Event courses to more closely match the requirements of the Microsoft Certified Masters – SQL Server 2008 certification. No, the exams are NOT part of the training. And, NO, you don’t need to be aiming for certification to benefit greatly from the training.

Being a master means mastering the materials, mastering the features and mastering SQL Server. This will NEVER come directly from JUST a class – EVER. If you could just pass the test after attending a class then what would that prove?

In fact, this is partially why these exams exist. They are meant to really test people’s knowledge of the product as well as a lot of their real-world experiences with it.

Wait – so then what do the classes do?

They help you get on the right path. They teach you proven, real-world best practices. They get you focused on solutions rather than just features. And, most importantly, these classes are not just teaching what the best practices are, but WHY they’re best practices.

Our goal is NOT to teach you the answer, but instead we teach you the information and steps necessary to be able to solve your problem scenarios.

But, even after attending a week – or two – of deep training, can you really go out and successfully implement ALL of these features and solutions without help and without error? I really doubt it; you need that real-world experience and reinforcement.

What are the Immersion Event Options?

That’s your choice – and you have quite a few!

  • You can attend ONLY the training weeks where you have some weaknesses
  • You can attend multiple classes – but over time – allowing that time to solidify the content learned and allowing your experience to build
  • You can watch the online videos – no, they’re not the entire set of class content. Far from it actually, they’re about 20-25% of the content. And, numerous attendees have said that they’re both helpful before AND after these courses. Before – to help them prepare for the class and to get the most out of it. After – for review and to help reinforce many of the concepts.
  • You can attend ALL four classes… in a row. Wow, this is probably the most challenging.

Dead-man’s Float – for 4 Solid Weeks – Can you do it?

Or, even better – should you do it? Well, it’s definitely NOT for everyone. These classes are intense as standalone one or two-week courses… And, we’ve heard repeatedly that many attendees are really happy that they have the option to spread out the training and reinforce the materials learned between each of the weeks. However, there are still folks that want to truly immerse themselves in the content and then go for the exams within a few months of the training. Will that work? Well, you knew I’d have to say it eventually – it depends!

Honestly, it’s going to be VERY difficult. However, there are folks out there that can do it. In fact, there are already folks* that have passed the exams without the formal training (yet another benefit from Microsoft having separated the training from the exams). And, those folks that have attended the training certainly understand *many* real-world practices – and WHY they’re best practices.

SO, if you do this – what should you do to be the most successful?

  • Prior to class:
    • First, watch as many of the online videos as you can
    • Second, review as many links from our MCM prep materials as you can
  • Attend the classes:
    • Try to completely immerse yourself in the classes and try to disconnect (this is very, VERY difficult but if you spend your entire day on email and/or fighting fires then what’s the point of attending the training?)
    • Nightly, review some of the class content – at least briefly
    • Nightly and/or on weekends – work through the hands-on labs (we give you a DVD in class that you can install and work with to solidify many of the concepts covered)
    • Hey, I said it wasn’t going to be easy!
  • After class: Use these best practices in your day-to-day jobs
    • Continue to watch the videos
    • Use any further MCM prep content that’s available

Incentive to swim with the sharks!

For those of you who want to tackle all four of our Immersion Events as a single 4 week series – you can! We don't offer all four classes in the same city, four weeks in a row very often but we generally will at least once a year. This year, we're offering all four in Bellevue, WA in the month of August. We chose August as it's an absolutely wonderful time of year to be in the Northwest and if you do venture out of the hotel on nights or weekends – there's tons to do!

What’s the deal?

If you attend all four weeks of Immersion Events in August and stay at our event hotel (the Bellevue Courtyard by Marriott) during these classes (at least 20 nights) we can get you an even lower nightly rate of $149 _and_ we will pay $1,500 of your total hotel bill.

But, the hotel is more expensive than some of the extended stay locations?

True – but with our taking $1,500 off of your bill, it's a lot closer. And, we think it's ideal to stay at the hotel because of the length of the event. With breakfast starting at 7:45 am, lectures running until 5:30 pm or later and evening events (on most evenings) that end between 8 and 9 pm, you'll be glad you have a place to go drop your stuff (or grab something you forgot) or even just a place to go relax for a few minutes.

And, at the end of a long day – you won’t need to deal with a shuttle (or a taxi or other) to get back to where you're staying. And, if you do have your own car then you’ll need to pay for the rental car and hotel parking ($5 per day). What's even easier, if you stay at the hotel, is that you can rent a car only when you need it (there’s a Hertz facility close to the hotel).

And, finally, my favorite part about the hotel is the location; you can walk to a tremendous number of restaurants and shops and they’re only two blocks from the Bellevue Transit Center so you’re a quick bus ride to the city if you want to see some sights over the weekend!

So, while I don't expect a lot of folks to take the full-on 4 week Immersion Event series, we've already got two that are doing it and so this is why I thought I'd do the post. And, for those of you that end up hanging out in WA before/after our classes – I even send out a list of possible sight-seeing activities and things to do/see. August is a fantastic time to be in Bellevue and in WA in general. Last year over half of our August Immersion Event attendees brought SOs to enjoy the trip.

We hope to see you there!!


* Joseph Sack ( t | b ), Microsoft Certified Master – SQL Server 2008, has blogged about each of the folks who have passed here: and we'd like to make a few special call-outs to some of the recently certified masters. Congrats first – to our newest SQLskills member – Jonathan Kehayias ( t | b ). And, additional congrats to our good friends: Denny Cherry ( t | b ), Simon Sabin ( t | b ), and Maciej Pilecki ( t )!!!

Our February Immersion Event in Dallas, TX – Final Wrap-up

OK, I've written about our February Immersion Event in two other recent posts:

Sadly, this is going to be my last post about our February Immersion Event on Internals and Performance. It was a fantastic week getting to know our attendees – from Q&A during lectures to breaktime to chatting in the lounge in the evenings. Quite a few attendees were surprised that we hung out and chatted with them after class and during breaks/lunch. But, we still love what we're doing and we love hearing about all of the ways that you use SQL Server. And, to be honest, we also love hearing about your lives outside of SQL Server. I remember some great conversations about health (for me, it's trying to get healthy), animals (a few dog lovers!) and racing cars (and a few speeding tickets – but I won't name names here at all!).

We all learned a lot and we also had fun. I know quite a few of you have asked about when we're going to get to your city – so that you don't have to travel. However, I have to admit that traveling to some of these events (and actually being away from home) can be extremely beneficial. The networking and time spent hearing about other scenarios and even just the time spent relaxing (and not having to rush to get the kids or feed the dog or water the plants…) can allow you to really dive in and think about nothing but learning for a week. I know this increases the cost of the week but if you get more out of the training itself by being less distracted then that certainly offsets the costs. And, we're also trying to choose comfortable, quality hotels – for reasonable rates.

Getting away and disconnecting (to a point) will allow you to focus better and retain more. In fact, a couple of local attendees who made arrangements to stay late on Thursday and/or Friday evenings commented to me afterward that they wished they had stayed late on the other evenings as well. On Friday, there were about 12 of us that had a bite in the bar and stayed until after 8. Quite a few booked their flights to fly out Saturday morning just to make sure that they didn't miss anything on Friday.

Summing it up

So, if I were to sum it up – an Immersion Event is about immersing yourself in the technology as well as in the process of learning. If you can disconnect for a week – and, REALLY disconnect – you might get even more out of it than just the daily lectures. Networking and learning from your peers is an important part of the experience. In fact, my favorite "fun" tweet of the week (there were TONS of great tweets on #SQLskills):

Love the class– great discussion even in the women's bathroom. (@Kendra_Little on Feb 21 after chatting with Vicky Harp)

There were side discussions – everywhere! And, Eric Humphrey created a twitter list of the class group here.

What did the attendees say DURING and AFTER the event?

Thanks to everyone for a great week!

Upcoming Immersion Events

We're looking forward to the next few Immersion Events coming up as well:

  • Dallas, TX March 14-18: IE2 – Immersion Event on Performance Tuning
  • Chicago, IL May 2-6: IE1 – Immersion Event on Internals and Performance (same content as the class discussed in these posts)
  • London, UK June 20-24: IE1 – Immersion Event on Internals and Performance (same content as the class discussed in these posts)
  • Bellevue, WA August 1-5: IE1 – Immersion Event on Internals and Performance (same content as the class discussed in these posts)
  • Bellevue, WA August 8-12: IE2 – Immersion Event on Performance Tuning
  • Bellevue, WA August 15-18: IE3 – Immersion Event on High Availability and Disaster Recovery
  • Bellevue, WA August 22-26: IE4 – Immersion Event on Development Support

Immerse yourself!

PS – Become a SQLskills Insider to get special information (and sometimes early discounts/details) about our events as well as content available only to Insiders! Become a SQLskills Insider here:

Our February Immersion Event in Dallas, TX – What did the attendees think?

I love this shot! I told everyone to do something fun and this is what I got… What a great group! (Well, except for Paul. I'm sure that's intended for me!! :))

This particular Immersion Event ran from February 21 through 25 in Dallas, TX and I've done a write up from our perspective here: What a week for our first Immersion Event on Internals and Performance.

In this post, I want to give you the attendee's perspectives and experience… I'll throw out a few of our favorite comments to start:

"Best training I have ever had – in any technical field. Very deep – but presented in a way that is easily understood. Thank you! I have several specific takeaways that are going to have an immediate input on my job. Finally, great hotel and great food – this helps get the most out of training."
Luke Newport (@newportl)

"The expertise/level of the speakers is refreshingly excellent! Practitioners as trainers results in a great class. I appreciate the very healthy balance of lecture/demos/questions. … All sessions based on true results and real-world experience. I can't stress enough how happy I am with this choice for training this year. … This is without a doubt the best training I have attended. It was professional, but comfortable, both of which are conducive to learning. I think you folks are doing it right: professional content + demos + open Q&A + multiple trainers!"
Darrell Landrum, Buchanan Technologies (@DarrellLandrum)

"This was an excellent class! The amount of content was incredible. Based on conversations with other attendees, I was probably the least experienced DBA attending, but the materials were presented such that I could easily follow and absorb. I learned quite a few things that I can use as soon as I get back into the office. Very engaging and informative presentation style."
Peter Hoidas, Whole Foods Market

"This was a great event. I learned so much. I thought you hit the perfect balance between theoretical and practical information. I will highly recommend this class to database developers and administrators. … Having the two of you present is especially great because of your complementary presentation styles. It adds real balance and excitement."
Kendra Little, Wild Tangent Games (@Kendra_Little)

"The best SQL Server technical instructors I ever came across! They can present, demonstrate and explain every topic very well. I highly recommend attending all of their courses."
Viren Shah, Reliance Steel and Aluminum Co.

"There is no doubt you guys know your stuff. Any question thrown at you was addressed and then some. I have a lot to do when I get home. Your slides are full of information and everything was well demonstrated – on the whiteboard or in demos. Also, your demos are always flawless. Very impressive."
In summary: "All topics were interesting. I never found myself bored or uninterested in anything. I think that is due to your presentation skills and the fact that you make all of the information accessible even when it is quite complicated."
Darcy Dupuis, Saskatchewan Liquor & Gaming Authority (@DarcyDupuis)

"The SQLskills Internals and Design Immersion Event is without a doubt the greatest learning experience I’ve ever had for SQL Server.  Not only are Paul and Kimberly at the top of any list regarding expertise on the subjects, but they possess a keen ability to present the material in a way that makes it easy to absorb.  A rare combination in any environment, I left feeling well informed and excited to attend the next event."
Steven Wright, Director of Engineering Services for SQL Sentry (@SQL_Steve)

"I will DEFINITELY take another Immersion Event from SQLskills."
Bill Scrivener, Harris County Appraisal District

"Thank you for opening my 'SQL' eyes. Now, I know how to argue and find bad coding practices. Also, I can provide the right solution for the problems I encounter. Thank you with all of my <3!"
Jeffrey Chen, Mecklenburg County Government IST

"Very informative, very pleased with skillsets learned."
Aaron Winkler,

"As the event approached the last two days, it appeared that the presenters started to hurry through the modules to keep up with the schedule. Although allowing for the audience to ask questions will cause the sections of the presentation to run long, overall the entirety of the event was well scheduled and laid out. I am very impressed with the breadth of knowledge the presenters have and their ability to convey this information to the audience."
Terry Anthony, Passport HealthCare Communication Inc.

Thanks Terry! We could have used an extra hour or two. We've already updated the website to change the Friday schedule to a full day and NOT end early. We're also thinking of running until 5:45 each day to get a bit more time for questions. Q&A is really valuable but we also need to make sure that we don't rush in any way. Thanks again! kt

"Very much satisfied with the training. There was never a boring moment. I see myself doing a lot of things differently. The information is very useful for my day to day activities and I can work with my developers a bit better. … The best training I ever attended."
Sunil Mikkilineni, First Solar

"Very satisfied with the event. Now it's about time to show some of the benefits back to my employer and come back for more events like this. There are more than a few topics that can be taken to my environment and get good performance benefits. Discussions, whiteboards and tangents. Wonderful execution. Every aspect is perfectly planned and executed. Thank you."
Sankar Reddy, Funmobility, Inc. (@SankarReddy13)

In response to: what is your favorite topic: "Wow, just one? That would be hard. I would have to sum it up as 'internals and performance.' In other words, the entire event. But, if I have to pick one: M3: Data File Internals and Maintenance. Well, wait, M11 – the explanation of the DMVs and DTA's shortcomings."
Joe Noll, RKL eSolutions LLC (@JLNoll)

And what about the food?

SQL Sentry sponsored our event and covered the costs of food so that the attendees didn't have to go and forage for food at lunch. This allows students to take a REAL break from the day and relax. And, overall the food at the hotel was fantastic. Here are a couple of quick comments:

  • Thank you for the vegetarian food at lunch – it was really great.
  • Impressed with the food. I have had a couple of folks from next door's event rave about our food.
  • No one (even Microsoft) takes care like you guys. You are the best!
  • What a great selection for everyone – I never went hungry!

And, can I just tell you that there were OVER 30 refills of coffee/tea during the week. WOW! And, a HUGE thanks to SQL Sentry for the food!

By the Numbers

For overall course evaluations, we asked students to rate each module as well as the overall course. We used a scale from 1 to 9 with 9 being the highest and most satisfied. For the modules, all modules ranked above 8.55 meaning that we got more 9s than anything else – across EVERY module. The overwhelming favorites were my Module 8: Index Internals with an 8.9 and Paul's Module 11: Index Fragmentation with an 8.88.

As for the event, we also asked the attendees to rate the event with 7 questions about their satisfaction, the usefulness of the content and the effectiveness of the presenters. Here's the breakdown by question:

Overall Satisfaction with the event
Score     Question
8.80       Q1: Overall satisfaction with the SQLskills Immersion Event
8.85       Q2: Overall usefulness of the information
8.80       Q3: Overall effectiveness of the presentation
8.90       Q4: Kimberly L. Tripp’s knowledge of the subject
8.90       Q5: Kimberly L. Tripp’s presentation skills
8.90       Q6: Paul S. Randal’s knowledge of the subject
8.85       Q7: Paul S. Randal’s presentation skills

The best part about the event is that we learned a lot from the attendees! We're still learning what we're doing right and what we could do better and we're going to KEEP improving thanks to their fantastic feedback and comments.

Immerse yourself!

PS – Become a SQLskills Insider to get special information (and sometimes early discounts/details) about our events as well as content available only to Insiders! Become a SQLskills Insider here:

Please don’t create a painful slide deck

OK, I know that a lot of you know where I was this week… yes, the MVP Summit. I absolutely love getting together with so many friends that I don't regularly see… This is by far, my favorite part about this annual event. Yes, I know I get access to cool information about upcoming technologies but after 20 years of this (remember, I started when I was 2 :)), I've also learned that a lot changes and so I no longer spend a ton of time learning all about early betas. Having said that, I am pretty excited about SQL Server Denali. I've already started to dive into Denali and I'm even more interested in upcoming features. But, things can still change and so I'm not betting the farm on what I see.

Having said that, one thing did surprise me this week. A few of the sessions had slide decks where a horrible "event" template was applied. I don't directly blame a lot of these presenters because they're just not regular speakers at events. Many of these folks are developers and program managers on the team and so they haven't all been burned by "slide management" teams (like I have) and so many didn't even think to review their decks after they'd been reviewed/tweaked. As a result, a few slides ended up with some bad color choices. From what I was told, somewhere a template was applied and somewhere whites turned into blacks (or similar dark colors) and the next thing we knew, there were some VERY hard to read slides. But, because the content is what matters most to me, well, I'm still happy.

However, I decided to put together a short (and fun) slide deck with a few tips and tricks that I've learned over the years. I even accepted a few tweets about other snafus during presenting (again, mostly related to slide deck no-nos) and added a few of them here. Feel free to comment about other things you've seen, learned and now do differently/better! I'd love to hear more!!!

So, without further ado, here's the deck titled: How to make the audience turn on you… FAST!
(with a subtitle of: kind of like the cougar in Ricky Bobby’s car… but worse!)

In addition to all of the tips/tricks for making presentations better (or worse :)), here are some additional and VERY helpful links to help you create a better presentation AND become a better presenter!

 But wait, there's more!

Am I missing any resources that you think are/were helpful? Shoot me an email with the links!

Good luck and have fun!

What a week for our first Immersion Event on Internals and Performance

This post is from OUR perspective. The next one I'm going to do is from our attendee feedback! 

Last week was our first week delivering a new style of Immersion Event. And, now that the class has finished, we're really happy with our decisions on content and naming. The name was new because we had originally called these Master Immersion Events and the class was somewhat new because we replaced our ONE 5-day SQL Immersion Event (Internals/Design/Performance/Maintenance) with FOUR new 4-5 day classes (19 days total). 

Why were they "master" events and why did we remove that part of the title? First, let me begin with why we added it! We added "master" because these new courses cover the wide range of topics required by the exam and practical for those working toward their Microsoft Certified Masters – SQL Server certification. However, soon after, we ended up dropping the "master" part because it led to confusion (some folks thought that the class was meant ONLY for those preparing for the exams). And, we changed them from SQL Immersion Events because we significantly extended the content to cover a wider spectrum of topics but also rearranged them to create concise, individual weeks of training. So, while they're still our fun, deep, focused Immersion Events, we have a lot more depth and content (19 days just in this area) covered for those who really want to design, tune and manage SQL Server systems effectively. Paul wrote an excellent post about it here: Leaving the Land of Confusion…

For those who attended, the feedback was fantastic and I have a post coming about that. But for this post, I just wanted to give you some insight into how the class was run and what it's like to attend an "Immersion Event" with us! 

Why do we deliver these at hotels?

We set up to deliver our event at the Marriott Suites Market Center in Dallas, TX and everyone that was not local stayed at the hotel. While being local is definitely nice for folks that don't have to travel, staying at the hotel with attendees is really the icing on the cake for these events. We've chosen Hiltons/Marriotts (in general) because they're comfortable and usually have good food and a nice lounge. And, don't think that I'm talking about drinking heavily every night. No, that's not really conducive to absorbing all of the content that we're going through. In fact, I didn't even have my first drink until Friday (but, I'm also a big wimp and I can't be impaired in any way if I'm going to lecture for 10 hours!!).

What the hotel/lounge/atmosphere allowed us to do is really focus on learning. We didn't have to leave to forage for food. We didn't have to think about parking or traffic or having to arrange with a facility that we wanted to run over into the evenings. We were able to retreat to a warm and cozy lounge with a fireplace and relax together, have a drink and just kick back before heading off to bed at a reasonable hour. The days were intense and these events really take a lot out of you… Many commented on how relaxed they were and how comfortable the hotel was for them. Specifically, let me give you a picture of a typical day.

A Day in the Life of an Immersion Event Attendee

  • 7:45 am – Breakfast (thanks to SQLSentry's sponsorship we added a Full American breakfast – yum!)
    • In Dallas we had a Full American breakfast with plenty of bacon. And, eggs, pastries, fruits, cereals… high carb or low carb, vegetarian or not – you were covered.
  • 8:30 am – Dive right in
    • Content: We don't dance around, we don't waste time. We were ALL about the content and as much of it as we could fit in 5 days…
    • Breaks: We'd take 1-2 breaks in the morning for a total of about 20-30 mins of break time. But, even these we often filled with chats around the coffee/tea or chats with one of us around the whiteboard.
    • Whiteboard: Speaking of the whiteboard… we do A LOT of whiteboarding! So much so that we have our OWN whiteboard shipped to the event (the flash created a bit of glare in the pic but we fixed that with lighting). And, for this one we had two (we had a second smaller whiteboard in the back where we could write notes to the attendees; however, this one ended up covered with the MANY twitter aliases of attendees).
  • 12:30 pm – Lunch (again, sponsored by SQLSentry which was fantastic, we had plenty of food and plenty of choices)
    • Lunch was served at 12:30 pm and what was great here is that no one had to leave the hotel to go forage for food somewhere. We could really take a break and relax. Again, people could network and chat over lunch as well. This really adds to the Immersion experience.
  • 1:30 pm – Dive back in
    • The afternoon was much like the morning but we know it's always harder to focus while digesting… We took more breaks to get people moving/stretching but often shorter breaks just to break up the time sitting. We also had dessert served as part of an afternoon break along with a coffee refresh and a host of low-carb snacks as well.
  • ~5:30 pm – Wrap-up
    • Officially class would end around 5:30 but most days we'd go a bit longer with questions… 
  • 6:30 pm – There was always something going on in the evenings… 
    • Monday night: We all gathered in the bar and did our "meet'n'greet." We bought the first round, chatted with everyone and ended up staying until around 8:30 or so. It was a really nice way to get the week rolling and make sure that no one was nervous to ask questions. However, I don't think anyone was shy even by Monday lunch!
    • Tuesday night: SQL Sentry delivered a session on how to use their FREE Plan Explorer tool. Greg Gonzalez (t|b), President/CEO of SQLSentry flew in for the evening presentation. After the presentation, we all retired to the lounge and spent a bit more time talking about the tools, SQL Server and how fit Greg is! (CrossFit is definitely a powerful workout!) 
    • Wednesday night: Immersion Event attendee and Director of Engineering Services for SQL Sentry, Steven Wright (t), delivered a session on how to use their Performance Advisor tool and the new features of v6.
    • Thursday night: some attendees offered to give micro sessions on some of their favorite topics and we thought that was a great idea! We ordered a bunch of appetizers, opened a tab at the bar, and away we went! For those that asked, we also provided speaker feedback to help folks improve their speaking skills. Trevor Barkhouse (t|b), Martin Catherall (t|b), Eric Humphrey (t|b), Brian D. Knight (t) and Kendra Little (t|b) all presented 20 min sessions and informed/entertained us all! It was the PERFECT way to host our last evening of the class… And, we think this might become a staple at future Immersion Events!
    • Friday night: Even on Friday… we had to pack up a bunch of stuff, but many stayed and chatted, and then we all headed over to the lounge for dinner and drinks. I finally relaxed and had a nice big glass of wine! What a great way to end the week!!

The Attendees

First off, we had a fantastic group with 29 attendees from as far away as New Zealand. Four attendees were winners from our contest (Martin Catherall (t|b), Eric Humphrey (t|b), Kendra Little (t|b) and Sankar Reddy (t|b)) and most were not local. In fact, one attendee used to work with Kevin Kline (t|b) in Hermitage, TN when I first met them (and a few others) back in the mid-90s. It was a bit of a blast from the past!!

And, many of the attendees were on twitter and have blogs too. All of these things continued to add to the content learned during the week and the connections made. Most folks are now following each other and keeping up with what they're learning and uncovering with SQL Server. And, we're going to see a few of them in Dallas in March for our second Immersion Event on Performance Tuning. In fact, 8 attendees for the class coming up in March are folks that have attended the week 1 (Internals and Performance) Immersion Event either this year or last, and a couple are coworkers of those who have attended. So, it will be great to meet up with them again and see what they're up to!

The End Result

We love the new format and we love the feedback we've received so far. We had a great time sharing our experiences and delivering our new style Immersion Event and now we're going to put a few more on the schedule. If you're interested in immersing yourself in SQL Server – check out our Immersion Events.

And, consider becoming a SQLskills Insider as well. For our London class in June, we've already offered a discounted rate and early early-bird registration. And, the hotel has a wonderful pub, internet access, and a full English breakfast (Paul's *very* happy about that). We're really looking forward to heading over! And, we're planning to add another US event (Immersion Event on Internals and Performance) to the schedule for early to mid-May.

If you want to get the scoop early – become a SQLskills Insider now!

Thanks for reading!