SQLskills training goes online worldwide (and free in September!)

We can make SQL Server scale really well, but we can’t scale ourselves.

Although our Immersion Events are extremely popular, and in high demand, we can’t spend all year traveling and teaching, and we can’t come to everyone’s favorite location, so we have to limit the number that we can do. Our 2013 public training schedule has been solidified, all the contracts are signed, and we’ll be releasing the four locations (Tampa, Chicago, London, Bellevue) for registrations later this week.

But in addition to in-person training, how can we create a scalable way to get really good SQL Server training to all the people across the world who want it?

The only answer is to put our training online.

We’re not talking about recording our Immersion Events – those will remain an in-person, intense, learning experience for those who want them. We’re not moving away from our Immersion Events. Instead we’re allowing those who cannot afford the time, travel, or cost of an Immersion Event to benefit from our knowledge. And for those who do attend our Immersion Events, for that unique experience, the online training will complement and enhance the classroom learning. Indeed, anyone who attends our Immersion Events from 2013 onwards will also receive access to our online training, creating a perfect, hybrid learning solution. We’ll also be arranging something similar for past attendees too.

We’re talking about recording our entire corpus of SQL Server, virtualization, hardware, I/O subsystem, HA, DR, design, troubleshooting, performance, and more knowledge, literally many hundreds of hours, over the next 3-4 years and making it all available for anyone around the world to watch as easily as possible. Anyone will be able to learn from us from now on, in the manner and pace that they choose.

And we’re doing it through a new strategic partnership with our good friends at Pluralsight, led by Aaron Skonnard. As we see it, Pluralsight is the leader in providing the highest-quality online technical training that’s accessible across the world, using a very simple interface and with all kinds of pricing models to fit everyone from individuals to large corporations, from as little as US$29/month. We will be developing a core SQL Server curriculum for Pluralsight users and rapidly populating it with 3-4+ hour courses on everything to do with SQL Server. This will add to the more than 200 courses on other development and IT topics that Pluralsight offers, expanding at a rate of 20 new courses per month!

To celebrate the start of this great partnership, Pluralsight is offering you all complete access to three of our SQL Server courses, for free for 30 days, starting this week – with absolutely no obligation.

The three courses you can watch for free, each of which is more than four hours long, are:

  • SQL Server: Performance Tuning Using Wait Statistics (by Paul)
  • SQL Server: Transact-SQL Basic Data Retrieval (by Joe)
  • SQL Server: Collecting and Analyzing Trace Data (by Jonathan)

To sign up for this, all you have to do is follow SQLskills and Pluralsight on Twitter and you’ll be sent a registration code; then sit back and learn. The special-offer sign-up is available until September 11th, and you then have 30 days from when you sign up to view the three courses.

Sign up at: http://pluralsight.com/training/TwitterOffer/sqlskills

We have six more courses coming online in September, one from each of us, with at least six more before the end of the year. The September courses are:

  • SQL Server: Understanding Logging, Recovery, and the Transaction Log (by Paul)
  • SQL Server: Database Design Mistakes and Solutions (by me)
  • SQL Server: Extended Events Basics (by Jonathan)
  • SQL Server: Transact-SQL Basic Data Modification (by Joe)
  • Server Hardware Fundamentals (by Glenn)
  • SQL Server: Benchmarking and Baselining (by Erin)

We are extremely excited to be partnering with Pluralsight to make our collective knowledge accessible to the world, and we hope you’re excited to learn from us!

Thanks as always,

Paul and Kimberly

Understanding LOB data (2008/2008R2 & 2012)

I know it’s been a while since I last blogged here on SQLskills. It’s been a busy few months with Immersion Events, conferences, and even a bit of personal travel. And, we’ve also been blogging very regularly (weekly) at SQLServerPro here: Kimberly & Paul: SQL Server Questions Answered. And, this is what’s brought me to write this post. A few days ago, I blogged What about moving LOB data? and in Part I, I discussed some of the issues with moving SQL Server LOB data using OFFLINE operations. In Part II (What about moving LOB data while keeping the table online?), I’m continuing the discussion with some additional points for ONLINE moves.

However, I also wanted to make sure that people fully understood how SQL Server LOB data works with regard to the physical location within the database… So, I created a script to walk you through it. This post will show you the specifics…

First, we’ll create a TestLOB database. Please note that this script drops TestLOB if it exists. If you already have a database with this name, it will be removed. Additionally, you will probably need to change the directory structure to something more appropriate on your system. These paths are the default data/log locations for SQL Server 2012 running as a default instance (MSSQLServer).


USE [master];
GO

IF DATABASEPROPERTYEX(N'TestLOB', N'Version') IS NOT NULL
    DROP DATABASE [TestLOB];
GO

CREATE DATABASE [TestLOB]
ON PRIMARY
(   NAME = N'TestLOBPrimary'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\TestLOBPrimary.mdf'
, SIZE = 5120KB , FILEGROWTH = 1024KB ),
FILEGROUP [FG1]
(   NAME = N'FG1File1'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FG1File1.ndf'
, SIZE = 61440KB , FILEGROWTH = 20480KB ),
(   NAME = N'FG1File2'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FG1File2.ndf'
, SIZE = 61440KB , FILEGROWTH = 20480KB ),
FILEGROUP [FG2]
(   NAME = N'FG2File1'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FG2File1.ndf'
, SIZE = 61440KB , FILEGROWTH = 20480KB ),
(   NAME = N'FG2File2'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FG2File2.ndf'
, SIZE = 61440KB , FILEGROWTH = 20480KB ),
FILEGROUP [FG3]
(   NAME = N'FG3File1'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FG3File1.ndf'
, SIZE = 61440KB , FILEGROWTH = 20480KB ),
(   NAME = N'FG3File2'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FG3File2.ndf'
, SIZE = 61440KB , FILEGROWTH = 20480KB )
LOG ON
(   NAME = N'TestLOBLog'
, FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\TestLOBLog.ldf'
, SIZE = 102400KB , FILEGROWTH = 10240KB );
GO


Next, I’ll make the FG1 filegroup the default filegroup. Any object created without an explicit location (using the ON clause) will then be placed in FG1.
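The statement itself is a one-liner (TestLOB and FG1 come from the script above):

```sql
USE [TestLOB];
GO
-- Make FG1 the default filegroup for all new objects
ALTER DATABASE [TestLOB] MODIFY FILEGROUP [FG1] DEFAULT;
GO
```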


We’ll start by creating the TestLOBTable table; because FG1 is now the default filegroup, it will be created within FG1:

CREATE TABLE TestLOBTable (
c1  int identity,
c2  char(8000)      default 'this is a test',
c3  varchar(max)    NULL
); -- will be created on FG1

We can verify the filegroup location for this object by using sp_help (review the results of the 5th grid returned).
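In this case, that’s just:

```sql
EXECUTE sp_help 'TestLOBTable';
```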

Next, we’ll add 1000 rows to the table. Each row is basically 8kb because of the size of the c2 column. For these inserts, we will insert data for c1 and c2 but the value for c3 is NULL.

INSERT TestLOBTable DEFAULT VALUES; -- c1 is an identity, c2 gets its default, c3 stays NULL
GO  1000 -- create an 80MB table

To review the size (as well as the location) of this data, we’ll run a quick query against sys.dm_db_file_space_usage (which works for ALL databases in SQL Server 2012; prior to 2012, this DMV only returned space-usage information for tempdb [DBID = 2]).

Since there are no other objects in the database, this very generic view of the location will be just fine:

SELECT file_id
    , allocated_extent_page_count
    , unallocated_extent_page_count
FROM sys.dm_db_file_space_usage;

Or, if you’re on a version prior to SQL Server 2012, you can look at the LOB data space ID set for the table (from sys.tables). Here’s the query you need:

SELECT t.name AS [Table Name]
, lob_data_space_id AS [Data Space ID]
, ds.name AS [Data Space Name]
FROM sys.tables AS t
INNER JOIN sys.data_spaces AS ds
ON t.lob_data_space_id = ds.data_space_id;

Even if we create a clustered index on the table, the data doesn’t change filegroup. However, SQL Server does reorder the data within FG1 to match the key order defined by the clustering key. So, this operation can take quite a bit of time and log space (depending on the recovery model).

CREATE CLUSTERED INDEX TestLobTableCL -- index name is illustrative
ON TestLobTable (c1);

Again, we can verify that the object still lives on the FG1 filegroup by using sp_help (5th grid).

Now, things will get a bit more interesting…

In SQL Server 2012, you can rebuild a table even if it has LOB data. So, you might think that the following rebuild onto FG2 will move the entire table… let’s see:

CREATE CLUSTERED INDEX TestLobTableCL
ON TestLobTable (c1)
WITH (DROP_EXISTING = ON)
ON [FG2];
Now, check the results of sp_help. sp_help says it lives on FG2. However, we currently don’t have any data in the LOB portion of this table. What if we add some?

UPDATE TestLobTable
SET c3 = REPLICATE (convert(varchar(max), 'ABC'), 8000) -- creates a 24,000-byte LOB value per row
WHERE c1 % 17 = 0; -- affects only 58 of the 1,000 rows

Now, re-check the results from both sp_help and the LOB query. sp_help still says we’re on FG2, and that’s correct as far as it goes, because sp_help ONLY returns information about the in_row portion of the table. Even if you had specified a location for the LOB data using TEXTIMAGE_ON when creating the table, sp_help would still ONLY reference the location of the in_row structure.
The only way to see where SQL Server is putting the LOB data is to use the query above. And, nothing you do with a standard rebuild can move the LOB data.
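For reference, TEXTIMAGE_ON is specified at CREATE TABLE time; a hypothetical example (table name and filegroup choices are illustrative):

```sql
-- LOB data for dbo.LobDemo goes to FG3; in_row data goes to FG1.
-- sp_help will still only report the in_row location (FG1).
CREATE TABLE dbo.LobDemo (
    id   int identity,
    blob varchar(max)
) ON [FG1]
TEXTIMAGE_ON [FG3];
```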

So, let’s try to rebuild the object again, this time onto FG3…

CREATE CLUSTERED INDEX TestLobTableCL
ON TestLobTable (c1)
WITH (DROP_EXISTING = ON)
ON [FG3];

And, recheck with our queries. Again, our in_row data has moved but not our LOB data.

My SQLServerPro blog post mentioned that you do have one option using partitioning but, as was pointed out there, this requires Enterprise Edition. So, it’s not ideal. If you are running on Enterprise Edition (or Enterprise Eval/Developer Edition), the scripts continue and show how to go through the move process using a partition function/partition scheme.

And, in this week’s SQLServerPro blog post I’m going to give you other options – that can be done ONLINE without Enterprise Edition (thanks to a cool trick by Brad Hoff that I just learned too)!

However, the end result is that moving and otherwise working with LOB data is challenging. To be honest, if you plan to have a lot of LOB data, you will want to be very strategic with how you design that data/table (even considering better isolation for it) so that you have more options available later if you want to move it to another drive.

Having said all of that, it has gotten A LOT better now that SQL Server 2012 supports online operations for indexes that include a LOB column. So, that’s a great start. But, moving LOB data is non-trivial and really needs to be thought out properly, especially if you have a lot of data to move/migrate.
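As a quick illustration of that 2012 improvement (ONLINE index operations require Enterprise Edition; prior to 2012, this statement would fail for any index containing a LOB column):

```sql
-- Rebuild every index on the table online, even though c3 is varchar(max)
ALTER INDEX ALL ON dbo.TestLobTable REBUILD WITH (ONLINE = ON);
```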

Thanks for reading!

Removing duplicate indexes

SQL Server has a horrible habit of letting bad habits move forward (for the sake of backward compatibility [e.g. autoshrink]). And, I do understand *some* of the reasons for allowing some of these “features/options” to move forward. However, there are a few that frustrate me and I’ve talked about this one quite often.

SQL Server lets you create completely redundant and totally duplicate indexes.

Yes, you can even create the same index 999 times! Try it:


CREATE TABLE TestTable (
    col1 int  identity
);
GO

DECLARE @IndexID smallint,
        @ExecStr nvarchar(500);

SELECT @IndexID = 1;

WHILE @IndexID <= 999
BEGIN
    SELECT @ExecStr = N'CREATE INDEX [Test'
                        + right('00' + convert(varchar(5), @IndexID), 3)
                        + N'] ON TestTable (col1)';

    EXEC (@ExecStr);

    SELECT @IndexID = @IndexID + 1;
END;
GO

SELECT count(*)
FROM sys.indexes
WHERE object_id = object_id('TestTable');

--DROP TABLE TestTable

For somewhat obvious reasons, having 999 duplicate indexes is completely useless… so, why does SQL Server allow it at all? And, what’s the backward-compatibility argument? Well, I’ve written about that before on our SQL Server Magazine Q&A blog (Kimberly & Paul – Questions Answered): Why SQL Server Lets You Create Redundant Indexes.

Having said that, for those of you who are only checking YOUR code, don’t use index hints, and/or don’t care about backward compatibility, you can use my proc to help you find duplicate indexes (and drop them)! It’s all based on my “sp_helpindex rewrites” stored procedures, but I had to do some tweaking so that the procedures understand that the order of the columns in the INCLUDE list is irrelevant. So, you can’t use the sps as written. They give you the exact (and correct) structure, and that’s exactly how SQL Server stores them. For example, the following two indexes have a different stored structure. However, they are completely redundant and you do NOT need both.

CREATE INDEX Test1 ON dbo.Member (LastName)
INCLUDE (FirstName, MiddleInitial);

CREATE INDEX Test2 ON Member (LastName)
INCLUDE (MiddleInitial, FirstName);
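If you want to see for yourself that only the stored INCLUDE order differs, you can compare the two indexes in the metadata (this assumes the dbo.Member table and the Test1/Test2 indexes above exist):

```sql
-- Sort included columns by name so the two sets line up regardless of
-- the order in which they were declared.
SELECT i.name AS index_name, c.name AS column_name, ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON i.object_id = ic.object_id AND i.index_id = ic.index_id
JOIN sys.columns AS c
    ON ic.object_id = c.object_id AND ic.column_id = c.column_id
WHERE i.object_id = OBJECT_ID(N'dbo.Member')
    AND i.name IN (N'Test1', N'Test2')
ORDER BY i.name, ic.is_included_column, c.name;
```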

Here’s what you need:

  1. First, my modified sp_ that exposes all of the columns in all levels of the index. This one does NOT replace the one needed by sp_SQLskills_SQL2008_helpindex; you need both. This version is used to reorder the INCLUDEd columns so that the “finddupes” proc can catch indexes that differ solely in the order of the columns in the INCLUDE clause (which is irrelevant for navigation/index usage).

    Install this one first:
    20110715_sp_SQLskills_ExposeColsInIndexLevels_INCLUDE_UNORDERED.sql (6.95 kb)

  2. Second, my modified sp_helpindex that is used SOLELY by finddupes. Basically, it’s just modified to use the UNORDERED version of “exposing the columns” in the nonclustered indexes. Again, this does NOT replace the other sp; you need both.

    Install this one second:
    20110715_sp_SQLskills_SQL2008_finddupes_helpindex.sql (10.95 kb)

  3. Finally, here’s the “finddupes” procedure

    Install this one last:
    20110720_sp_SQLskills_SQL2008_finddupes.sql (5.30 kb)

  4. NOTES: At some point in the not-too-distant future I’ll modify this for the following:

    1) One set of code that will work for 2005/2008
    2) I’ll combine the two versions of the “expose columns in index levels” so that there’s only one.
    3) I’ll make this work for indexed views

Finally, here’s how you can use it:

To find the duplicates for just one table – you can use one-part or two-part naming:

USE [AdventureWorks2008];

EXECUTE sp_SQLskills_SQL2008_finddupes @ObjName='Production.Document';

OR… to find all duplicates in the entire database:

USE [AdventureWorks2008];

EXECUTE sp_SQLskills_SQL2008_finddupes;

OR… if you’re really motivated, you can run it for all databases – but, before you do – see Aaron Bertrand’s comment to this post for his replacement for sp_msforeachdb and use that instead!!

EXECUTE sp_msforeachdb 'USE ?; exec sp_SQLskills_SQL2008_finddupes';

Have fun and let me know how many duplicates you find!
I found one in AdventureWorks2008 and a few in Northwind. I suspect you’ll find a few! This isn’t something that will find tons of indexes to drop but, if there are even a couple of dupes (especially on large tables), you’ll see savings in data modification statements, logging, caching, maintenance, and storage (and therefore backups, etc.), and this is why dupes are HORRIBLE! I do wish that SQL Server had an option to prevent them from being created! Maybe someday?!



How can you tell if an index is REALLY a duplicate?

There’s a lot of confusion around duplicate indexes and I think a lot of it has to do with how sp_helpindex (or even SSMS) displays what’s in an index. What’s actually in the index, and how it’s structured, are not always what they seem. This was the original motivation behind my rewrite of sp_helpindex but, even since then, I’ve still seen a lot of confusion. In today’s blog post I’m going to explain EXACTLY which indexes are the same and which aren’t, as well as the faults in the tools.

So, starting with index structures… (it all starts with internals :) )

The clustered index IS the data. The KEY to the clustered index (what I often refer to as the clustering key) defines the way that the data is ordered (not necessarily truly, physically ordered on disk – but ordered LOGICALLY). And, no, I’m not going to rehash every bit of internals here… just a few reminders.

A nonclustered index is duplicated data (similar to an index in the back of a book). This duplicated data can be used to help reference the actual data (exactly like an index in the back of a book) OR can be used to actually answer requests (for example, if you’re solely looking for a count of people whose last name begins with a particular letter, then an index that has LastName in it could be used to count them, without actually “looking up” the actual data rows). So, there are some very powerful uses for indexes. But, alas, this isn’t a post on using indexes or indexing strategies; this is ALL about internals (and understanding the structure of an index). So, I’m going to cut to the chase!

A nonclustered index always has:

  • Key (this is what defines the order of the index)
  • A leaf-level entry (this is the actual data stored in the index: the key columns + the lookup value* + any included columns). However, ALL of these columns are only stored once (and they’ve ALWAYS only been stored once here; so, even if you reference a column that’s part of the lookup value, SQL Server will NOT duplicate it).

*So, what is this lookup value?

The lookup value is what SQL Server uses to reference the actual data row. If a table has a clustered index, then the lookup value is the clustering key (and EVERY column defined by it). If a table does not have a clustered index (it’s then a HEAP), then SQL Server uses something called a RID. A RID is an 8-byte structure consisting of 2:4:2 bytes, which break down into 2 for the FileID, 4 for the PageID, and 2 for the slot number. And, while RIDs (and their history) are very interesting, how they specifically work isn’t entirely relevant here; if/when they’re in an index, I’ll list it as RID.
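If you’re curious to see RIDs for yourself, there’s an undocumented (exploration-only) way to display the physical locator on SQL Server 2008 and later:

```sql
-- %%physloc%% and fn_PhysLocFormatter are undocumented; don't rely on
-- them in production code. For a heap, the output is (FileID:PageID:Slot).
SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS [RID], *
FROM dbo.SomeHeapTable; -- table name is illustrative
```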

Let’s now put all of this together with an example (or two) starting with a relatively easy one.

USE [JunkDB];
GO

CREATE TABLE Test (
    TestID  int identity,
    [Name]  char(16)
);
GO

CREATE UNIQUE CLUSTERED INDEX TestCL ON Test(TestID);
GO

CREATE INDEX TestName ON Test([Name]);
GO

EXECUTE sp_helpindex Test;

The output shows:

index_name      index_description                               index_keys
TestCL          clustered, unique located on PRIMARY            TestID
TestName        nonclustered located on PRIMARY                 Name

And, while that looks right, it’s horribly misleading. The index on Name (TestName) also includes TestID, and not just in the leaf level but in the tree as well (for ordering purposes). So, that row should really show Name, TestID. But, it’s a bit more confusing if you throw in this:

CREATE UNIQUE INDEX TestNameUnique ON Test([Name]);

EXECUTE sp_helpindex Test;

The output shows:

index_name      index_description                               index_keys
TestCL          clustered, unique located on PRIMARY            TestID
TestName        nonclustered located on PRIMARY                 Name
TestNameUnique  nonclustered, unique located on PRIMARY         Name

And, at this point, it doesn’t look like there’s any difference at all between the second and third indexes (well, except that the third index requires that the values be unique – shown in the description). But, as for the “index_keys” they look the same. However, they’re actually not the same (in all parts of the tree). So, this is why I tend to differentiate between the “leaf” and the non-leaf levels of an index (when I describe them). And, it only gets more complicated when you throw in included columns (2005+).

So, how do you tell the difference? Unfortunately, there are NO tools within SQL Server (or even any 3rd party tools that I know of) that display this through the UI, etc. But, you could start by using my replacement for sp_helpindex. I’ve updated and rewritten it a few times, but the latest one will always be in the sp_helpindex rewrites category here: http://www.sqlskills.com/blogs/kimberly/category/sp_helpindex-rewrites.aspx. I personally think that’s the easiest (and it’s the reason why I originally wrote it!). Using it, you can see how the output is more detailed.

The output shows (specifically showing ONLY the last 2 columns):

columns_in_tree              columns_in_leaf
[TestID]                     All columns "included" – the leaf level IS the data row.
[Name], [TestID]             [Name], [TestID]
[Name]                       [Name], [TestID]

Now, we’re getting somewhere. We can definitely see that these two indexes ARE different. A nonunique nonclustered index needs to have the lookup value pushed up into the tree (for navigation). A unique nonclustered index does not.

Next, let’s go with a [just slightly more challenging] example:

USE [JunkDB];
GO

CREATE TABLE Member (
    MemberNo    int     identity,
    FirstName   varchar(30)     NOT NULL,
    LastName    varchar(30)     NOT NULL,
    RegionNo    int
);
GO

CREATE UNIQUE CLUSTERED INDEX MemberCL ON Member(MemberNo);
GO

CREATE INDEX MemberIndex1
ON Member(FirstName, RegionNo, MemberNo)
INCLUDE (LastName);

CREATE INDEX MemberIndex2
ON Member(FirstName, RegionNo)
INCLUDE (LastName);

CREATE INDEX MemberIndex3
ON Member(FirstName, RegionNo)
INCLUDE (MemberNo, LastName);

CREATE UNIQUE INDEX MemberIndex4
ON Member(FirstName, RegionNo)
INCLUDE (MemberNo, LastName);

First, let’s review with sp_helpindex:

EXECUTE sp_helpindex Member;

The output shows:

index_name      index_description                           index_keys
MemberCL        clustered, unique located on PRIMARY        MemberNo
MemberIndex1    nonclustered located on PRIMARY             FirstName, RegionNo, MemberNo
MemberIndex2    nonclustered located on PRIMARY             FirstName, RegionNo
MemberIndex3    nonclustered located on PRIMARY             FirstName, RegionNo
MemberIndex4    nonclustered, unique located on PRIMARY     FirstName, RegionNo

Looking solely at sp_helpindex it LOOKS like the first nonclustered index is different from the others and that the 2nd, 3rd and 4th nonclustered indexes are the same. However, that’s actually NOT true. Next, let’s use my version of sp_helpindex.

EXECUTE sp_SQLskills_SQL2008_helpindex Member;

The output (again, just showing the last two columns of output):

columns_in_tree                           columns_in_leaf
[MemberNo]                                All columns "included" – the leaf level IS the data row.
[FirstName], [RegionNo], [MemberNo]       [FirstName], [RegionNo], [MemberNo], [LastName]
[FirstName], [RegionNo], [MemberNo]       [FirstName], [RegionNo], [MemberNo], [LastName]
[FirstName], [RegionNo], [MemberNo]       [FirstName], [RegionNo], [MemberNo], [LastName]
[FirstName], [RegionNo]                   [FirstName], [RegionNo], [MemberNo], [LastName]

From this, you can see that all 4 indexes have the same leaf level but index 4 has a slightly different tree structure. Ultimately, it’s indexes 1, 2 and 3 that are the same and index 4 is actually [slightly] different. How they’re different (outside of the fact that the 4th index guarantees uniqueness) is a bit beyond the scope of this post. But, yes, there are some [relatively minor in this case] differences. And, since I’m ONLY looking for indexes that are the same then only 1, 2 and 3 fit that requirement.

And, things get MUCH more complicated if you have a multicolumn clustering key and/or more complicated INCLUDEs.

Having said that – how do you find duplicate indexes?

Well… I had started this by putting together a simple way for you to check for dupes with my version of sp_helpindex but then I figured out an issue with included columns. I’m showing the structures as they’re defined (and stored). But, in terms of usage – the order of the columns in the INCLUDE does not matter. As a result, two indexes with different orders for their included columns will show as two different indexes (technically they are and they are stored differently). However, there are absolutely NO differences (in terms of usage) so I’ll need to write code to adjust for that (to truly find duplicates).

For now, here’s some quick code to get you closer. I had written some of this when we discussed it in our London Immersion Event. However, I’ve tweaked it even further here after thinking about some interesting exceptions. This code will find absolute duplicates (where the order of the structure is completely identical). To use it, you’ll need to enter the schema name and table name (marked MODIFY below).

IF OBJECT_ID(N'tempdb..#FindDupes') IS NOT NULL
    DROP TABLE #FindDupes;

CREATE TABLE #FindDupes
(
 index_id           int,
 is_disabled        bit,
 index_name         sysname,
 index_description  varchar(210),
 index_keys         nvarchar(2126),
 included_columns   nvarchar(max),
 filter_definition  nvarchar(max),
 columns_in_tree    nvarchar(2126),
 columns_in_leaf    nvarchar(max)
);

DECLARE @SchemaName sysname,
        @TableName  sysname,
        @ExecStr    nvarchar(max);

SELECT @SchemaName = N'schemaname',     -- MODIFY
       @TableName = N'tablename';       -- MODIFY
SELECT @ExecStr = 'EXECUTE sp_SQLskills_SQL2008_helpindex ''' 
                    + QUOTENAME(@SchemaName) 
                    + N'.' 
                    + QUOTENAME(@TableName)
                    + N'''';

INSERT #FindDupes
EXEC (@ExecStr);

SELECT t1.index_id, COUNT(*) AS 'Duplicate Indexes w/Lower Index_ID', 
        N'DROP INDEX ' 
            + QUOTENAME(@SchemaName, N']') 
            + N'.' 
            + QUOTENAME(@TableName, N']') 
            + N'.' 
            + t1.index_name AS 'Drop Index Statement'
FROM #FindDupes AS t1
    INNER JOIN #FindDupes AS t2
        ON t1.columns_in_tree = t2.columns_in_tree
            AND t1.columns_in_leaf = t2.columns_in_leaf 
            AND ISNULL(t1.filter_definition, 1) = ISNULL(t2.filter_definition, 1) 
            AND PATINDEX('%unique%', t1.index_description) = PATINDEX('%unique%', t2.index_description)
            AND t1.index_id > t2.index_id
GROUP BY t1.index_id, N'DROP INDEX ' + QUOTENAME(@SchemaName, N']') 
                    + N'.' 
                    + QUOTENAME(@TableName, N']') 
                    + N'.' + t1.index_name;

I’m planning to build a sproc around this next, as well as get rid of cases where the included columns are just in different orders. It will be another sp_ for master and I do hope to wrap it up shortly! That code will REALLY find ALL unnecessary duplicates and help you to drop them! Stay tuned!!

Cheers and thanks for reading!!


Looking forward to London

Our next Immersion Event is in London. It follows our DevConnections conference in Karlsruhe, Germany (which was great fun) and a bit of a holiday in between, where we've been driving and seeing a few places I wasn't sure I'd ever get to (specifically Liechtenstein and Luxembourg). Our driving tour definitely warrants its own post and, if you look at it on a map, you'll probably be a bit confused; our tour mostly revolves around castles and cathedrals (a longer/detailed post from one of us will definitely happen!). Here's our list:

  • June 5 – Arrival at FRA and then drove to Stuttgart to spend the night
  • June 6 – Drove Stuttgart to Liechtenstein through Austria to spend the night in Liechtenstein (*loved* it!)
  • June 7 – Drove Liechtenstein to Karlsruhe through Switzerland (stopping in Säntis)
  • June 8-9-10 – We were at the DevConnections conference in Germany. Had a great time and met a lot of fun SQL folks. We also met with the local user group(s) and had a few beers (and my favorite – a radler (similar to a shandy – because I just can't drink a lot of beer))
  • June 10 – After the conference ended we drove from Karlsruhe to Basel, Switzerland. Some good friends of ours moved to Basel about 8 months ago and we were determined to make it there! What a lovely city on the Rhine.
  • June 11 – Drove Basel, Switzerland to Luxembourg City, Luxembourg to spend the night
  • June 12 – Drove Luxembourg City to Bruges, Belgium to spend 2 nights
  • June 14 – Drove Bruges to Amsterdam, Netherlands to spend 2 nights
  • June 16 – Driving from Amsterdam to Cologne, Germany to spend 2 nights before we head to London (Saturday the 18th) to get ready for our Immersion Event!

And, that's where we'll be next week. We're really looking forward to it. And, if you're local, feel free to join us for one of our evening presentations on Tuesday or Wednesday evening. We have two special evening events being presented by two companies whose products we both use and recommend. SQL Sentry's been a sponsor of ours since our first Immersion Event in the US and we're really happy they've come over with us to the UK. And, Fusion-io is a new sponsor for us; we're equally pleased to be hearing about their new products. If you're interested in hearing about them and meeting us for some Q&A, please feel free to join us. The specific location details and session abstracts are within the links. Here's just a quick overview.

Each evening (Tue/Wed) – we'll start with an open Q&A from 6:30 to 7pm and then our guest presenter will present from 7 to 8:30. Then, we might be forced, once again, to head to the bar in the hotel. It will be a hardship but someone will have to do it!

Tuesday evening, 21 June 2011: SQL Sentry will present: Tips for Query Tuning with SQL Sentry Plan Explorer and More!

Presenter: Scott Fallen, Technical Specialist, SQL Sentry, Inc.

If you’re NOT an Immersion Event attendee, please register for this FREE event here: http://SQLskillsLondonTue.eventbrite.com/

Wednesday evening, 22 June 2011: Fusion-io will present: SQL at the speed of FLASH

Presenter: Mat Young, Senior Marketing Director, Fusion-io

If you’re NOT an Immersion Event attendee, please register for this FREE event here: http://SQLskillsLondonWed.eventbrite.com/

Thanks so much and we look forward to seeing you there!

Trusting the tools!

Well, it's been a crazy few weeks! Paul and I have been booked solid with client engagements, Immersion Events, conferences and other things (like chapter editing for the upcoming SQL Server Deep Dives Volume 2) and so I've been a bit behind with blogging. I have a ToDo list of items to blog about but I've got one that's fairly urgent. I'll mention a few issues here but above all – don't trust anyone.

OK, I know that sounds a bit paranoid. Not meaning to be, but this is a post that I'm adding to my "Just because" series as well as my "Are you kidding me?" series. That's how bad this is… almost as bad as those "stupid bacon-related tattoos" (really, someone has one of those forever!).

As a DBA (and/or SQL architect/database designer/developer), we're often tasked with making changes to databases and, ideally, we want to use tools to make that easier. While I do agree that many tools help tremendously, I've been shocked lately by a few that have done the wrong thing. Paul blogged about a third-party tool that didn't analyze indexes correctly here: Beware of advice from 3rd-party tools around dropping indexes (ultimately, the recommendations for what to drop were wrong). And, recently, I've been shocked to learn that some of the schema-change options in the database designer (in SSMS) are less than optimal when changing index definitions (especially those that involve changes to the clustering key).

Specifically, we had a customer that was doing everything right by testing the changes on a secondary system, but SSMS was doing everything wrong (the order in which it made the changes, and how it made them, was horrible). The end result is that if you're going to make schema changes, you really need to get more insight into what the tool is doing. And, many tools have an option to script out the changes rather than execute them. So, my "quick tip" is to do that and see what's going on.

And, if you're going to make schema changes in ANY tool – then work through what those changes are going to do. Most important – consider re-arranging the SQL (especially with respect to indexes). If you're ever changing the clustered index you need to do the following:

  1. Drop nonclustered indexes first
  2. Drop the clustered next
    • IMPORTANT NOTE: if you're completely changing the clustered index (including its name) then you definitely need to drop the nonclustered indexes first and then drop the clustered. BUT, if you're keeping the name and only changing the definition (and it's NOT a constraint-based index) then you can use WITH DROP_EXISTING instead – and in that case you do NOT need to drop the nonclustered indexes first.
  3. Create the new clustered
  4. Recreate the nonclustered indexes
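
To make that concrete, here's a sketch of that order in T-SQL – the table, column, and index names (dbo.Sales, etc.) are hypothetical, purely for illustration:

```sql
-- Hypothetical table dbo.Sales with clustered index CL_Sales and two
-- nonclustered indexes. Changing the clustered index the long way:
DROP INDEX NCI_Sales_OrderDate  ON dbo.Sales;  -- 1. nonclustered first
DROP INDEX NCI_Sales_CustomerID ON dbo.Sales;
DROP INDEX CL_Sales             ON dbo.Sales;  -- 2. then the clustered
CREATE CLUSTERED INDEX CL_Sales                -- 3. new clustered index
    ON dbo.Sales (SalesID);
CREATE NONCLUSTERED INDEX NCI_Sales_OrderDate  -- 4. recreate nonclustered
    ON dbo.Sales (OrderDate);
CREATE NONCLUSTERED INDEX NCI_Sales_CustomerID
    ON dbo.Sales (CustomerID);

-- OR: keeping the name (and it's NOT constraint-based)? DROP_EXISTING does
-- it in one step and the nonclustered indexes are rebuilt at most once.
CREATE CLUSTERED INDEX CL_Sales
    ON dbo.Sales (SalesID)
    WITH (DROP_EXISTING = ON);
```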

Some of the weird things that we've seen:

  1. Some changes to table structure (changing from a heap to a clustered table, or from a clustered table to a heap) cause disabled nonclustered indexes to get rebuilt (and re-enabled). That's not even SSMS – that's a general SQL Server problem. In my opinion this means that disabling indexes is somewhat broken. So… you've got to use drop/create (which is a bummer!).
  2. Changes to the clustering key in SSMS are sometimes done by dropping the clustered index and then recreating it. This will cause your nonclustered indexes to be rebuilt TWICE!
  3. If col1 is currently clustered and col2 is currently nonclustered and you want to swap them… you can do that in the UI and then save, but here's what SSMS does:
    1. It drops the nonclustered index
    2. It drops the clustered index (this causes all of the remaining nonclustered indexes to get rebuilt)
    3. It creates the new nonclustered index
    4. It creates the new clustered index (this causes all of the remaining AND the new nonclustered indexes to get rebuilt… again)
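
For comparison, here's how I'd script that swap by hand (again, hypothetical names) so that the surviving nonclustered indexes are rebuilt only ONCE – this assumes the clustered index keeps its name and is not constraint-based:

```sql
-- Hypothetical: dbo.Orders is clustered on col1 (index CL_Orders) and has
-- a nonclustered index NCI_col2 on col2. To swap the two:
DROP INDEX NCI_col2 ON dbo.Orders;     -- remove the nonclustered on col2

CREATE CLUSTERED INDEX CL_Orders       -- move the clustering key to col2 in
    ON dbo.Orders (col2)               -- one step; the remaining nonclustered
    WITH (DROP_EXISTING = ON);         -- indexes are rebuilt only once

CREATE NONCLUSTERED INDEX NCI_col1     -- and the new nonclustered on col1
    ON dbo.Orders (col1);
```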

This is only a short list of some of the crazy things that we've seen. I'm working on a more comprehensive overview of all of these things but I wanted to get a few of them out there. BEWARE OF THE TOOLS and always use scripting and testing to make sure that things are doing what you think. If you're EVER going to make changes to a critical system – this is exactly what I'd do:

  1. Script out *JUST* the objects and their definitions from the production environment
  2. Take *just* the schema and then go through and make your changes in the designer. Instead of saving the changes (which would immediately implement them), script out the changes and review the SQL.
  3. Run through the SQL and see what it does (but, this is an empty database so time isn't going to be as much of a factor here). If you think there's something wrong – ask around (colleagues/twitter/forums)…
  4. Then, once you feel you have a good version of the script THEN backup/restore the production database to your test system (I hope that you have this??) and THEN see how long it takes. If you have new concerns then ask around again!
  5. Next, before you consider it for production – you need to thoroughly test your applications. Are they affected by these changes?
  6. Then, and only when you've thoroughly tested it, can you consider it for production.

Picky yes… surprises NO. On a critical system you CANNOT afford surprises that create downtime – or worse, data loss.

Thanks for reading!!

Indexes: just because you can, doesn’t mean you should!

I've decided to create a new series of posts – just because you can, doesn't mean you should! These are going to go in a new category called "Just because" and I've already got a few lined up in this series. The first one is one I've spoken about often and one that's near/dear to my heart – indexes.

I've often received the question – why did they increase the limit from 249 nonclustered indexes per table to 999 in SQL Server 2008? Does that mean that I have a need for more – potentially, many more – indexes? Simply put – yes and no (a new spin on "it depends" :)). However, for most environments, NO is the correct answer – and a VERY STRONG NO at that. To help give you some insight, I've answered the question "how many indexes should a table have" in this SQL Server Magazine Q&A blog post titled How Many Indexes Should I Create? (however, it's a difficult question to answer in general). But if everyone says that tables should have minimal indexes, then why would the SQL team increase the limit from 249 to 999? It just doesn't seem to make sense.

And, it's a great start to my "just because" series in that this is one area where you can really shoot yourself in the foot if you create too many indexes (which, btw, has become a VERY common problem). So… I want to break this down into a few parts (and therefore posts).

Why shouldn't I create a lot of indexes?

  • Indexes add overhead to every INSERT/DELETE on the table (and to every UPDATE that modifies an indexed column). Every INSERT must add a row to each and every nonclustered index. Every DELETE must remove a row from each and every nonclustered index.
  • Indexes require disk space. While nonclustered indexes are generally a lot smaller than the table itself, their total size (of all indexes together) can often exceed the size of the table by 2 or 3 times – and this can be quite normal. However, a table that is poorly indexed and severely over-indexed might have index space that is 6-10 times the size of the base table. This is wasted space on disk and also in cache. While disk space is relatively cheap, memory is not. But, that's not even the most important point – there's a lot more to it… For more insight, check out the post titled: Disk space is cheap… (that's NOT the point!)
  • Even if you have the disk space to store all of these indexes, it's unlikely that you can fit all of your tables and all of your excessive indexes in cache. Most environments have a hard time fitting all of their data in cache in general – let alone a poorly/over-indexed set of data. As a result, you're going to be constantly marshalling data in and out of memory, causing your system to perform excessive I/Os.
  • You can cause the optimizer to waste time evaluating indexes, and cause compiled plans to be bigger. This in turn wastes both time and even more cache – especially if you have a lot of plans being generated through ad hoc statements, where each statement might waste quite a bit of cache. For more insight, check out the post titled: Clearing the cache – are there other options?
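
If you want a rough feel for how much space your nonclustered indexes are consuming relative to the data, here's a quick sketch of a query (run it in the database you're curious about – and remember, it's an approximation):

```sql
-- Compare base-table size (heap or clustered index) to the total size of
-- all nonclustered indexes, per user table.
SELECT  OBJECT_NAME(ps.[object_id]) AS TableName,
        SUM(CASE WHEN ps.index_id IN (0, 1)      -- 0 = heap, 1 = clustered
                 THEN ps.used_page_count ELSE 0 END) * 8 AS DataKB,
        SUM(CASE WHEN ps.index_id > 1            -- nonclustered indexes
                 THEN ps.used_page_count ELSE 0 END) * 8 AS NonclusteredKB
FROM    sys.dm_db_partition_stats AS ps
WHERE   OBJECTPROPERTY(ps.[object_id], 'IsUserTable') = 1
GROUP BY ps.[object_id]
ORDER BY NonclusteredKB DESC;
```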

Why is this a common problem?

I'll go into more details in a moment but there are really three reasons:

  1. SHOWPLAN shows missing index recommendations when evaluating a plan, and some folks take that to mean that they absolutely should create the index. Often, I've seen multiple developers working on a project all get their index recommendations in a vacuum (so to speak) and then just provide scripts to the DBA for the indexes to implement. At this point, someone should evaluate the overall combination – but often no one does. And, this leads to another reason why this is such a big problem.
  2. SQL Server 2005 introduced the missing index DMVs and some people believe that all of the indexes it recommends should be created (which is far from the truth).
  3. SQL Server (every version) will allow you to create redundant indexes. Not just similar (and redundant) but completely duplicate. There is a reason and I wrote about it in this SQL Server Magazine Q&A titled: Why SQL Server Lets You Create Redundant Indexes

The end result of this combination is that more and more servers I see have too many indexes – often redundant or similar indexes. This wastes resources, time, and cache, and ultimately adds to a server's inability to scale as the data set grows. See, during development, when the data sets were smaller, none of this really mattered. Almost none of it is detected until more data and more users (and, unfortunately, more code and more applications) exist. And, at that point it's too late to make schema changes BUT it's not too late to make index changes. Phew. Believe me, many problems are MUCH harder to solve after an application is in production, but indexing is one of the easier ones.

Common Problem 1: Index Recommendations from SHOWPLAN (the green hint)


Don't get me wrong – I *LOVE* that this exists. And I *always* look at what it recommends. It's a good way to see which tables might have holes in my indexing strategy. But, there's a potential problem here. These recommendations are tied to the Missing Index DMVs, so BOTH have this fault, but it's A LOT more apparent when using the "green hint" than when using the DMVs. My main reason to think twice about these recommendations is that the Missing Index DMV recommendations are based SOLELY on the plan that was executed. SQL Server did not analyze the statement for different algorithms. If a hash join was used then the index recommended will be an index that helps the hash join. However, your query might be SIGNIFICANTLY better off using a merge join instead. Now, please understand that this is NOT a generalization (I'm not saying that merge is better than hash) – it's just an example of a situation where I saw a query go from many minutes down to half that time with the best index for the hash join (hey, that's good, right?) BUT the query went down to only 4 seconds with the merge join's index. There was NO comparison here.

There are many cases where the plan that was executed IS the best plan, but there are also cases where it's not. As a result, if I'm about to create the index from the "green hint" in SHOWPLAN, I will always run the individual query through DTA (the Database Engine Tuning Advisor) when possible. This will more thoroughly evaluate the query, providing me with alternatives. Yes, it's "focused" tuning, but if you're already going to create an index for that query then you might as well work out the best one. Having said that, I will ONLY create the index recommended for the table that the "green hint" recommended, and I will see if that gives me enough of a gain. DTA will often make additional recommendations for both indexes and statistics. For stats, I would HIGHLY recommend creating all of the statistics it recommends – but ONLY for the table(s) where you actually create the indexes it recommends. Personally, I don't always create all of the recommended indexes. If I'm going to create them, I'll do so iteratively – testing the improvements as I go so I know which indexes give me the best gains. But, I will always add the statistics for the tables where I create their recommended indexes. Why? Because the stats (and usually multi-column stats) are there to help the indexes and their uses. So, it's pretty much always a good thing to create those stats.

Common Problem 2: The Missing Index DMVs

Don't get me wrong – I *LOVE* that these exist too and just like SHOWPLAN, I use these as well. There are some great examples out there of ways to better evaluate what's in them, what should be created and there are even some cool posts about clever ways to corroborate what's recommended by looking at showplan XML, etc. Check out these links:

However, there's a dark side to these missing index DMVs; these points aren't often mentioned and they might SIGNIFICANTLY change the way that you view these recommendations. First, they're not perfect (OK, nothing is, but I had to remind you :)). The Missing Index DMVs might recommend indexes that you have already created – even after you created them, even after a reboot, etc. They will just continue to recommend the index as if it didn't exist. Check out Paul's post titled: Missing index DMVs bug that could cost your sanity. And, second, my issue ties in with Common Problem 1. Again, these recommendations are NOT for the best index but for indexes that will help the plans as they were executed.

However, there's a HUGE win that the Missing Index DMVs have OVER SHOWPLAN: they tune for query classes. What this means is that a recommendation that's higher up in the "user impact" category will likely affect many queries and is therefore much higher on the list to consider. But, even armed with this information, I would NOT automate the creation of these indexes based on the results of queries that access these DMVs. Instead, as an application runs in production and as workload characteristics are being evaluated, save this information and use it when you're tuning, to help you better know which indexes might give you the best improvements.
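
As a starting point for that kind of evaluation (NOT for automation!), here's a sketch of the sort of query I mean – it ranks the suggestions by a rough estimated-impact score:

```sql
-- Rank missing-index suggestions by a rough "estimated impact" score.
-- Remember: these help the plans as executed, not necessarily the BEST index.
SELECT TOP (25)
        gs.avg_user_impact * gs.avg_total_user_cost
            * (gs.user_seeks + gs.user_scans) AS estimated_impact,
        d.[statement] AS [table],
        d.equality_columns,
        d.inequality_columns,
        d.included_columns
FROM    sys.dm_db_missing_index_group_stats AS gs
JOIN    sys.dm_db_missing_index_groups AS g
            ON g.index_group_handle = gs.group_handle
JOIN    sys.dm_db_missing_index_details AS d
            ON d.index_handle = g.index_handle
ORDER BY estimated_impact DESC;
```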

Common Problem 3: Duplicate/redundant Indexes

I can't help you with this one – except to say: feel free to increase the vote count on this on Connect. I found that someone recently (3/17/2011) re-added it as a DCR here: https://connect.microsoft.com/SQLServer/feedback/details/652071/duplicate-indexes#tabs. I've added a comment to further explain it and I hope that they will reevaluate it.
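
What I CAN do is help you find the worst offenders. This sketch only compares key columns (it ignores included columns and other index options – my sp_helpindex rewrite is far more thorough) but it will catch the completely duplicate indexes:

```sql
-- Find pairs of indexes on the same table with identical key columns
-- (same columns, same order, same sort direction). Key columns only!
WITH IndexKeys AS
(
    SELECT  i.[object_id], i.index_id, i.name,
            KeyList = STUFF((SELECT ', ' + c.name
                                  + CASE WHEN ic.is_descending_key = 1
                                         THEN ' DESC' ELSE '' END
                             FROM   sys.index_columns AS ic
                             JOIN   sys.columns AS c
                                    ON  c.[object_id] = ic.[object_id]
                                    AND c.column_id = ic.column_id
                             WHERE  ic.[object_id] = i.[object_id]
                             AND    ic.index_id = i.index_id
                             AND    ic.is_included_column = 0
                             ORDER BY ic.key_ordinal
                             FOR XML PATH('')), 1, 2, '')
    FROM    sys.indexes AS i
    WHERE   i.index_id > 0          -- skip heaps
)
SELECT  OBJECT_NAME(a.[object_id]) AS TableName,
        a.name AS Index1, b.name AS Index2, a.KeyList
FROM    IndexKeys AS a
JOIN    IndexKeys AS b
        ON  a.[object_id] = b.[object_id]
        AND a.KeyList = b.KeyList
        AND a.index_id < b.index_id;  -- report each pair once
```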

In Summary

The tools are great and you definitely WANT to use them. However, you must understand their caveats and pitfalls, and add indexes only when it's appropriate. Adding more and more indexes does NOT improve performance. And, above all – be sure to review your existing indexes to see if you can drop an existing index OR make a new index slightly different so as to consolidate index definitions into one index that does a lot more with less overhead. I call this Index Consolidation. And, that's another topic for another day. :)

So – why would you want to create lots and lots of indexes?

Ha, I bet you never thought we'd get here! And, for today – I'd rather you just DON'T do it. Just because you can, doesn't mean you should. I'll give you some reasons to consider LOTS of indexes… later… after you've cleaned up all of these redundant indexes! And, to help you find redundant indexes – use the latest version of sp_helpindex from my sp_helpindex rewrite category. This will help you to more accurately see what's really in those indexes.

Thanks for reading!

Become a SQLskills Insider NOW!

Important non-SQL request: Help save polar bears in Russia!

Last year Paul and I had the pleasure of taking an amazing adventure into the Arctic with Heritage Expeditions on the Spirit of Enderby (Professor Khromov). In addition to seeing a ton of wildlife (polar bears, arctic fox, walrus, puffins, whales, etc.) we were fortunate enough to meet some of the local experts who have spent years of their lives studying and helping to better understand many of these animals. Two of the people that we met were Dr. Nikita Ovsyanikov (a polar bear expert who has worked on the Wrangel Island Nature Reserve for 30+ years and has dedicated his work to studying polar bear behavior and educating others about it) and his daughter Katya Ovsyanikova – who has been going to Wrangel since she was a child and has dedicated her time and studies to sea otters and other marine critters as well.

So, why am I blogging about this now?

Katya is in the US this week (she arrived yesterday) to speak at the Sea Otter Conservation Workshop at the Seattle Aquarium. Since she arrived we've been chatting about the state of affairs for polar bears and the possibility of the hunt being opened up again in the Chukotka region of Russia (aka the Russian Far East). While there has been some media attention stating that the polar bear hunt has been reopened (Russia lifts ban on polar bear hunting and Russia's Chukotka backs polar bear hunting), there is also still some debate about the current status. It has not yet been approved by the Russian Federal Government even though it's been signed by the Chukotka Regional Government. Because this is still under debate, you can help to influence their decision by writing to them now. A more formal letter about the state of the polar bear is attached below.

What can you do?

(1) Tell as many people as possible – tweet it, post it on Facebook. Do you know someone in the media? Please spread the word! And, if they want to talk to Katya – shoot me an email and I'll get you/them hooked up!

(2) Write a letter of discontent to the Ministry of Natural Resources:

123995, Bolshaya Gruzinskaya 4/6,
Moscow, Russia
Minister – Trutnev Yuriy Petrovich

(3) Sign the petition: Help to save Polar Bears in Russia!

Dear friends!

Sadly, there has arisen in Russia a very disturbing situation with polar bears.  The plight of polar bears in Russia hangs in the balance if legislation currently before the Russian Government is passed.

The total number of polar bears in the wild is only about 20,000. Part of the Chukchi-Alaskan population, which was almost exterminated by the middle of the twentieth century, lives on the Chukotka Peninsula (North-East Russia). At present, specialists estimate its size to be no greater than 2,000 bears. After the USSR imposed a moratorium on polar bear hunting in 1957, the Russian coast was a stronghold for polar bear conservation for decades, and the Chukchi-Alaskan population had a chance to increase in numbers. Unfortunately, poaching also increased in the Chukotka region, and has been thriving there, particularly since the 1990s. Poachers now kill up to 300 polar bears a year for their pelts, and do so without even trying to conceal it. There is a huge black market for those pelts, with each one selling for about $20,000.

In 2007, a bilateral Russian-American agreement on the conservation and management of the Alaska-Chukotka polar bear population was legislated, mentioning a possibility of issuing a hunting quota for indigenous people on both sides. Disregarding the fact that the IUCN's Polar Bear Specialist Group considered the situation with polar bears critical, the Russian-American Polar Bear Commission issued a resolution that a quota may be introduced on both sides of the Bering Strait. The decision was reached despite the fact that there is not enough data on the size of the population. Moreover, the possibility of a moratorium was not even considered. As of March 2011, after more than half a century of moratorium, the hunt is being re-evaluated and potentially re-opened in Russia!

Global warming has been rapidly shrinking the extent of the Arctic sea ice, the major platform from which polar bears hunt during most of the year.  It is difficult for polar bears to survive the ice-free seasons because their main habitat (drifting ice) is shrinking, and what remains is far from the best hunting grounds. More and more often in recent years, scientists have observed famished bears, as well as fewer females showing up to den on Wrangel Island, the main polar bear “maternity ward” in the Chukchi Sea and a protected Nature Reserve since 1976.

Under these stressful conditions, the impacts of poaching on the polar bear population are disastrous, and implementation of hunting will simply constitute legalization of poaching, and a cover for the poachers.  With regards to traditional use, the indigenous people of Chukotka have never hunted polar bears for subsistence. Legalization of polar bear hunting is just a path to the commercial use of the species. And, while a quota is intended to be issued only for “subsistence” use by indigenous people, trophy hunters are actively lobbying for implementation of the quota, which, in itself, is revealing as to the consequences of legalizing polar bear hunting. 

It is not global warming that is dangerous for polar bears; the species has survived cyclic warm periods and glaciations before.  It is the presence and activity of humans in the Arctic that pose a very real threat for bears.  With all the pressures on the population now, it is disastrous, malicious, and inhumane to open the hunt.  It would be tragic to lose such a unique species because of greediness and cynicism of a small group of people that is interested in exploiting these animals for profit.

We are deeply unhappy with this situation and are trying to raise people’s awareness and to convince the Russian government not to legislate polar bear hunting.  It is not too late, and we have to do everything possible to express the discontent of the people.

For further information on the subject, see the article by polar bear specialist Nikita Ovsyanikov:

And, for additional information – check out some of these links:

Thanks for reading and an even bigger thanks for giving your time and your signature to help these beautiful mammals.

Immersion Event Series – 4 Weeks – August 2011

As many of you know, we’ve changed our Immersion Event courses to more closely match the requirements of the Microsoft Certified Masters – SQL Server 2008 certification. No, the exams are NOT part of the training. And, NO, you don’t need to be aiming for certification to benefit greatly from the training.

Being a master means mastering the materials, mastering the features and mastering SQL Server. This will NEVER come directly from JUST a class – EVER. If you could just pass the test after attending a class then what would that prove?

In fact, this is partially why these exams exist. They are meant to really test people’s knowledge of the product as well as a lot of their real-world experiences with it.

Wait – so then what do the classes do?

They help you get on the right path. They teach you proven, real-world best practices. They get you focused on solutions rather than just features. And, most importantly, these classes are not just teaching what the best practices are, but WHY they’re best practices.

Our goal is NOT to teach you the answer, but instead we teach you the information and steps necessary to be able to solve your problem scenarios.

But, even after attending a week – or two – of deep training, can you really go out and successfully implement ALL of these features and solutions without help and without error? I really doubt it; you need that real-world experience and reinforcement.

What are the Immersion Event Options?

That’s your choice – and you have quite a few!

  • You can attend ONLY the training weeks where you have some weaknesses
  • You can attend multiple classes – but over time – allowing that time to solidify the content learned and allowing your experience to build
  • You can watch the online videos – no, they’re not the entire set of class content. Far from it actually, they’re about 20-25% of the content. And, numerous attendees have said that they’re both helpful before AND after these courses. Before – to help them prepare for the class and to get the most out of it. After – for review and to help reinforce many of the concepts.
  • You can attend ALL four classes… in a row. Wow, this is probably the most challenging.

Dead-man’s Float – for 4 Solid Weeks – Can you do it?

Or, even better – should you do it? Well, it’s definitely NOT for everyone. These classes are intense as standalone one or two-week courses… And, we’ve heard repeatedly that many attendees are really happy that they have the option to spread out the training and reinforce the materials learned between each of the weeks. However, there are still folks that want to truly immerse themselves in the content and then go for the exams within a few months of the training. Will that work? Well, you knew I’d have to say it eventually – it depends!

Honestly, it’s going to be VERY difficult. However, there are folks out there that can do it. In fact, there are already folks* that have passed the exams without the formal training (yet another benefit from Microsoft having separated the training from the exams). And, those folks that have attended the training certainly understand *many* real-world practices – and WHY they’re best practices.

SO, if you do this – what should you do to be the most successful?

  • Prior to class:
    • First, watch as many of the online videos as you can
    • Second, review as many links from our MCM prep materials as you can
  • Attend the classes:
    • Try to completely immerse yourself in the classes and try to disconnect (this is very, VERY difficult but if you spend your entire day on email and/or fighting fires then what’s the point of attending the training?)
    • Nightly, review some of the class content – at least briefly
    • Nightly and/or on weekends – work through the hands-on labs (we give you a DVD in class that you can install and work with to solidify many of the concepts covered)
    • Hey, I said it wasn’t going to be easy!
  • After class: Use these best practices in your day-to-day jobs
    • Continue to watch the videos
    • Use any further MCM prep content that’s available

Incentive to swim with the sharks!

For those of you who want to tackle all four of our Immersion Events as a single 4 week series – you can! We don’t offer all four classes in the same city, four weeks in a row very often but we generally will at least once a year. This year, we’re offering all four in Bellevue, WA in the month of August. We chose August as it’s an absolutely wonderful time of year to be in the Northwest and if you do venture out of the hotel on nights or weekend – there’s tons to do!

What’s the deal?

If you attend all four weeks of Immersion Events in August and stay at our event hotel (the Bellevue Courtyard by Marriott) during these classes (at least 20 nights) we can get you an even lower nightly rate of $149 _and_ we will pay $1,500 of your total hotel bill.

But, isn't the hotel more expensive than some of the extended-stay locations?

True – but with our taking $1,500 off of your bill, it's a lot closer. And, we think it's ideal to stay at the hotel because of the length of the event. With breakfast starting at 7:45am, the lecture running until 5:30pm or later, and evening events (on most evenings) that end between 8 and 9pm, you'll be glad to have a place to go drop your stuff (or grab something you forgot), or even just a place to relax for a few minutes.

And, at the end of a long day – you won’t need to deal with a shuttle (or a taxi or other) to get back to where you're staying. And, if you do have your own car then you’ll need to pay for the rental car and hotel parking ($5 per day). What's even easier, if you stay at the hotel, is that you can rent a car only when you need it (there’s a Hertz facility close to the hotel).

And, finally, my favorite part about the hotel is the location; you can walk to a tremendous number of restaurants and shops, and it's only two blocks from the Bellevue Transit Center so you're a quick bus ride from the city if you want to see some sights over the weekend!

So, while I don't expect a lot of folks to take on the full 4-week Immersion Event series, we've already got two people that are doing it – and that's why I thought I'd do this post. And, for those of you that end up hanging out in WA before/after our classes – I even send out a list of possible sight-seeing activities and things to do/see. August is a fantastic time to be in Bellevue and in WA in general. Last year over half of our August Immersion Event attendees brought their significant others along to enjoy the trip.

We hope to see you there!!


* Joseph Sack ( t | b ), of the Microsoft Certified Master – SQL Server 2008 blog, has blogged about each of the folks that has passed here: http://blogs.technet.com/b/themasterblog/ and we'd like to make a few special call-outs to some of the recently certified masters. Congrats first – to our newest SQLskills member – Jonathan Kehayias ( t | b ). And, additional congrats to our good friends: Denny Cherry ( t | b ), Simon Sabin ( t | b ), and Maciej Pilecki ( t )!!!

Our February Immersion Event in Dallas, TX – Final Wrap-up

OK, I've written about our February Immersion Event in two other recent posts:

Sadly, this is going to be my last post about our February Immersion Event on Internals and Performance. It was a fantastic week getting to know our attendees – from Q&A during lectures to breaktime to chatting in the lounge in the evenings. Quite a few attendees were surprised that we hung out and chatted with them after class and during breaks/lunch. But, we still love what we're doing and we love hearing about all of the ways that you use SQL Server. And, to be honest, we also love hearing about your lives outside of SQL Server. I remember some great conversations about health (for me, it's trying to get healthy), animals (a few dog lovers!) and racing cars (and a few speeding tickets – but I won't name names here at all!).

We all learned a lot and we also had fun. I know quite a few of you have asked about when we're going to get to your city – so that you don't have to travel. However, I have to admit that traveling to some of these events (and actually being away from home) can be extremely beneficial. The networking, the time spent hearing about other scenarios, and even just the time spent relaxing (not having to rush to get the kids or feed the dog or water the plants…) can allow you to really dive in and think about nothing but learning for a week. I know this increases the cost of the week, but if you get more out of the training itself by being less distracted then that certainly offsets the costs. And, we're also trying to choose comfortable, quality hotels – for reasonable rates.

Getting away and disconnecting (to a point) will allow you to focus better and retain more. In fact, a couple of local attendees that made arrangements to stay late on Thursday and/or Friday evenings commented to me afterwards that they wished they had stayed late on the other evenings as well. On Friday, there were about 12 of us that had a bite in the bar and stayed until after 8. Quite a few booked their flights to fly out Saturday morning just to make sure that they didn't miss anything on Friday.

Summing it up

So, if I were to sum it up – an Immersion Event is about immersing yourself in the technology as well as in the process of learning. If you can disconnect for a week – and, REALLY disconnect – you might get even more out of it than just the daily lectures. Networking and learning from your peers is an important part of the experience. In fact, my favorite "fun" tweet of the week (there were TONS of great tweets on #SQLskills):

Love the class– great discussion even in the women's bathroom. (@Kendra_Little on Feb 21 after chatting with Vicky Harp)

There were side discussions – everywhere! And, Eric Humphrey created a twitter list of the class group here.

What did the attendees say DURING and AFTER the event?

Thanks to everyone for a great week!

Upcoming Immersion Events

We're looking forward to the next few Immersion Events coming up as well:

  • Dallas, TX March 14-18: IE2 – Immersion Event on Performance Tuning
  • Chicago, IL May 2-6: IE1 – Immersion Event on Internals and Performance (same content as the class discussed in these posts)
  • London, UK June 20-24: IE1 – Immersion Event on Internals and Performance (same content as the class discussed in these posts)
  • Bellevue, WA August 1-5: IE1 – Immersion Event on Internals and Performance (same content as the class discussed in these posts)
  • Bellevue, WA August 8-12: IE2 – Immersion Event on Performance Tuning
  • Bellevue, WA August 15-18: IE3 – Immersion Event on High Availability and Disaster Recovery
  • Bellevue, WA August 22-26: IE4 – Immersion Event on Development Support

Immerse yourself!

PS – Become a SQLskills Insider to get special information (and sometimes early discounts/details) about our events as well as content available only to Insiders! Become a SQLskills Insider here: http://www.sqlskills.com/Insider.