Send me your wait stats and get my advice and 30 days of free Pluralsight in return

[Edit: 3/25/14 No more codes left - thanks for all the data! - please don't send any more.]

Yes, you read that correctly. Call me crazy, but I’ve been wanting to do this for a while.

Here’s the deal:

  1. You run the code from this post that creates a 24-hour snapshot of the wait stats on your server. (You can do it on as many servers as you want! Or multiple times on the same server. Seriously, the more data the better.)
  2. You send me the wait stats info in email.
  3. I’ll give you some feedback and insight on your wait stats.
  4. I’ll send the first 500 of you a code to get 30 days of free access to all 100+ hours of SQLskills Pluralsight training classes.
  5. I’ll also editorialize the results (possibly over several posts) in April.

I was going to work with Pluralsight to get codes for just my SQL Server: Performance Troubleshooting Using Wait Statistics course, but they’re cool with giving away a month of all our courses. Nice!

What’s the catch? There is no catch. I get lots of real-life data, you get lots of real-life training. No credit card required.

Why am I doing this? I *really* like looking at wait statistics data (kind of like Jonathan with his unhealthy obsession with analyzing deadlocks) and it’s been more than 3 years since my last huge wait stats survey. Also I *really* like helping people understand what wait statistics mean, I *really* like being sleep-deprived :-), and I’d *really* like to show you how awesome our online training courses are.

When does this offer end? You’ve got until the end of March to send me your wait stats info.

Rules? One code per email address that sends me wait stats, no matter how many servers you send in – I’ve got a limited supply. Please don’t post the stats as a comment (I’ll delete them) – email only. I’ll send you the code and info on how to redeem it.

Use the T-SQL code below please (or grab the zip file from here), and leave the WAITFOR set to 24 hours. Send me an email [Edit: the survey's done now - please don't send any more data] with the results in plain text or a spreadsheet. Everybody wins.

Look forward to hearing from you!

PS There is no perf hit from running this code, or any nasty side-effect. It simply creates two snapshots of the sys.dm_os_wait_stats output 24 hours apart.

PPS Thanks to those who pointed out the typo that crept in between coding and posting – sorry about that.

/*
  File:     WaitStats2014.sql

  Summary:  24-hour snapshot of wait stats

  SQL Server Versions: 2005 onwards
  Written by Paul S. Randal,

  (c) 2014, All rights reserved.

  For more scripts and sample code, check out

  You may alter this code for your own *non-commercial* purposes. You may
  republish altered code as long as you include this copyright and give due
  credit, but you must obtain prior permission before blogging this code.
*/


IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
	WHERE [name] = N'##SQLskillsStats1')
	DROP TABLE [##SQLskillsStats1];

IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
	WHERE [name] = N'##SQLskillsStats2')
	DROP TABLE [##SQLskillsStats2];

SELECT [wait_type], [waiting_tasks_count], [wait_time_ms],
       [max_wait_time_ms], [signal_wait_time_ms]
INTO ##SQLskillsStats1
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '23:59:59';

SELECT [wait_type], [waiting_tasks_count], [wait_time_ms],
       [max_wait_time_ms], [signal_wait_time_ms]
INTO ##SQLskillsStats2
FROM sys.dm_os_wait_stats;

WITH [DiffWaits] AS
(SELECT
-- Waits that weren't in the first snapshot
	[ts2].[wait_type],
	[ts2].[wait_time_ms],
	[ts2].[signal_wait_time_ms],
	[ts2].[waiting_tasks_count]
	FROM [##SQLskillsStats2] AS [ts2]
	LEFT OUTER JOIN [##SQLskillsStats1] AS [ts1]
		ON [ts2].[wait_type] = [ts1].[wait_type]
	WHERE [ts1].[wait_type] IS NULL
	AND [ts2].[wait_time_ms] > 0
UNION
SELECT
-- Diff of waits in both snapshots
	[ts2].[wait_type],
	[ts2].[wait_time_ms] - [ts1].[wait_time_ms] AS [wait_time_ms],
	[ts2].[signal_wait_time_ms] - [ts1].[signal_wait_time_ms] AS [signal_wait_time_ms],
	[ts2].[waiting_tasks_count] - [ts1].[waiting_tasks_count] AS [waiting_tasks_count]
	FROM [##SQLskillsStats2] AS [ts2]
	LEFT OUTER JOIN [##SQLskillsStats1] AS [ts1]
		ON [ts2].[wait_type] = [ts1].[wait_type]
	WHERE [ts1].[wait_type] IS NOT NULL
	AND [ts2].[waiting_tasks_count] - [ts1].[waiting_tasks_count] > 0
	AND [ts2].[wait_time_ms] - [ts1].[wait_time_ms] > 0),
[Waits] AS
	(SELECT
		[wait_type],
		[wait_time_ms] / 1000.0 AS [WaitS],
		([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
		[signal_wait_time_ms] / 1000.0 AS [SignalS],
		[waiting_tasks_count] AS [WaitCount],
		100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
		ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
	FROM [DiffWaits]
	WHERE [wait_type] NOT IN (
        -- Benign wait types to filter out
        N'BROKER_TASK_STOP',            N'BROKER_TO_FLUSH',
        N'CHKPT',                       N'CLR_AUTO_EVENT',
        N'CLR_MANUAL_EVENT',            N'CLR_SEMAPHORE',
        N'EXECSYNC',                    N'FSAGENT',
        N'SLEEP_SYSTEMTASK',            N'SLEEP_TASK'))
SELECT
	[W1].[wait_type] AS [WaitType],
	CAST ([W1].[WaitS] AS DECIMAL (16, 2)) AS [Wait_S],
	CAST ([W1].[ResourceS] AS DECIMAL (16, 2)) AS [Resource_S],
	CAST ([W1].[SignalS] AS DECIMAL (16, 2)) AS [Signal_S],
	[W1].[WaitCount] AS [WaitCount],
	CAST ([W1].[Percentage] AS DECIMAL (5, 2)) AS [Percentage],
	CAST (([W1].[WaitS] / [W1].[WaitCount]) AS DECIMAL (16, 4)) AS [AvgWait_S],
	CAST (([W1].[ResourceS] / [W1].[WaitCount]) AS DECIMAL (16, 4)) AS [AvgRes_S],
	CAST (([W1].[SignalS] / [W1].[WaitCount]) AS DECIMAL (16, 4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2]
	ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS],
	[W1].[ResourceS], [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
HAVING SUM ([W2].[Percentage]) - [W1].[Percentage] < 95; -- percentage threshold

-- Cleanup
IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
	WHERE [name] = N'##SQLskillsStats1')
	DROP TABLE [##SQLskillsStats1];

IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
	WHERE [name] = N'##SQLskillsStats2')
	DROP TABLE [##SQLskillsStats2];

Last minute half-price offer for Sydney and Melbourne classes

As we draw close to our IE1 Immersion Events in Sydney and Melbourne, we still have a few seats available, so we’re making a special offer: for each registration at the regular price, you can bring another person for half price (US$1995 + GST). First-come-first-served!

Drop us an email if you’re interested – here.


Survey: what is the most worrying wait type?

Here’s another survey for you, and I’m really looking forward to seeing and discussing the results of this one.

Imagine you’re the new DBA for a company (which shouldn’t be hard for most of you :-) and a week ago a new release of the client and server code was rolled out, and you cleared out the wait statistics using DBCC SQLPERF. You’ve never monitored wait statistics on this production SQL Server before, although you’ve done so extensively in previous places you’ve worked. Today you look at the aggregate wait statistics from the last week, using a script that presents them in descending order of total wait time for that wait type. You’ve been on vacation for the last week and so don’t know if there have been any complaints about performance since the roll-out.

Survey is now closed – see here for results

Without considering anything except the wait type name, which of the following wait types (listed in alphabetical order) would you be most concerned to see as the top wait type for the past week (i.e. the most aggregate wait time)?

It’s deliberate that there is no ‘it depends’ option and no ‘Other’ option for you to type something, so please don’t leave comments complaining about that.

I’ll editorialize the results next week, along with what my pick would be and rationales for picking each of them.


Corruption recovery using DBCC WRITEPAGE

A couple of weeks ago I kicked off a survey about the extent of your experience with the DBCC WRITEPAGE command. Here are the results:


The “Other” values are:

  • 9 x “Read your post on it, may practice it one day, but it’ll always be last resort to use.”
  • 6 x “I read your post on it.”
  • 4 x “I know about it (from your posts/immersion event) and it’s in my toolbox whenever I get called in to perform disaster recovery.”
  • 1 x “It’s like knowing the Unix recursive delete command as root, dangerous and powerful.”
  • 1 x “Not since 2k pages, and then PageEdit.exe was easier.”

For those of you who don’t know what DBCC WRITEPAGE is, it’s an undocumented command that allows you to make direct changes to data file pages. It’s an extremely powerful command that is very useful for creating corrupt databases, and in extreme cases, for helping to repair otherwise irretrievably-corrupt databases. I blogged about it a year ago in this post.

I never advocate using it to repair a corrupt database and then continue using that database in production, and in fact I refuse to do that if a client asks as there are just too many potential problems that could occur. I’ll only ever use it to help recover data from a damaged database, where there’s already a copy of the database’s files in existence, and the client has signed an agreement acknowledging that any data file I use it on is a duplicate, is not on a production instance, and changes to it will in no way affect production.

There are all kinds of things I’ve used DBCC WRITEPAGE for to allow data recovery to progress, and in my Advanced Corruption Recovery Techniques course on Pluralsight I demonstrate one of them, which I’m going to describe here.

Imagine a database that’s suffered corruption and there’s an off-row LOB data value that’s inaccessible because the owning data row is corrupt. Your mission is to retrieve the data. And to make it impractical to manually piece together the value using DBCC PAGE dumps, the LOB value is a few megabytes, so it’s stored on hundreds of linked text pages.

The process:

  • Create another row with an off-row LOB value (the ‘dummy’ row)
  • Find out the dummy row’s page and offset on the page
  • Calculate the offset of the dummy LOB value’s in-row root in the variable-length portion of the record
  • Calculate the offset of the dummy LOB in-row root’s size, text timestamp, and pointer fields
  • Find the corrupt row’s LOB in-row root’s size, text timestamp, and pointer field values
  • Use DBCC WRITEPAGE to overwrite the dummy LOB in-row root fields
  • Select from the dummy row and you’ll get back the LOB value you wanted to save
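The overwrite in step 6 uses DBCC WRITEPAGE’s basic form: database, file ID, page ID, byte offset into the page, number of bytes to change, and the new byte values. As a sketch only – every ID, offset, and byte value below is made up, and you should never run this against a production database:

```sql
-- Sketch only: database name, page ID, offset, and bytes are all placeholders.
DBCC TRACEON (3604);                    -- send DBCC output to the console
DBCC PAGE (N'RecoveryCopy', 1, 156, 2); -- dump page (1:156) to find field offsets
-- Overwrite 8 bytes at offset 0x1A2 on page (1:156) with the bytes specified:
DBCC WRITEPAGE (N'RecoveryCopy', 1, 156, 0x1A2, 8, 0x0102030405060708);
```

Finding the right offsets from the DBCC PAGE dump is the hard, slow part; the WRITEPAGE call itself is trivial once you know what to change.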

Cool, eh? Desperate times call for clever measures, and with procedures like this I regularly recover data from client databases that Microsoft and other data recovery firms either can’t help with or have given up on. It’s not a fast process as it can take quite a while to figure out exactly what to modify to make things work, but when the data comes back, the clients are always ecstatic.

And with some knowledge of the data structures in a data file, careful use of DBCC WRITEPAGE, and plenty of practice, you can do it too.

Or call us to help you :-)

More on using Transaction SID from the transaction log

Back in 2012 I blogged about using fn_dblog and fn_dump_dblog to figure out the point at which something occurred that you’d like to restore to just before (e.g. a table drop). I also mentioned that you can use the SUSER_SNAME () function on the [Transaction SID] column for the LOP_BEGIN_XACT log record of the operation to find out who performed the operation.

Yesterday in our IE2 Performance Tuning class in Tampa, someone asked me what the [Transaction SID] column would show if someone had run EXECUTE AS. As I wasn’t 100% certain, I decided to test and write a quick blog post.

First off I’ll set up a database and table to use:

USE [master];
GO

CREATE DATABASE [Test];
GO

USE [Test];
GO

CREATE TABLE [TestTable] ([c1] INT IDENTITY);
GO

INSERT INTO [TestTable] DEFAULT VALUES;
GO 5

Next I’ll create a Kimberly user for a SQL login, and a Katelyn user for a Windows login:

-- Create Kimberly login and user
CREATE LOGIN [KimberlyLogin] WITH PASSWORD = 'NiceWife';
CREATE USER [KimberlyUser] FOR LOGIN [KimberlyLogin];
EXEC sp_addrolemember N'db_owner', N'KimberlyUser';
GO

-- Create Katelyn user for the Windows login
CREATE USER [KatelynUser] FOR LOGIN [APPLECROSS\Katelyn];
EXEC sp_addrolemember N'db_owner', N'KatelynUser';
GO

Now I’ll delete a single row as me and each of the users and logins:

-- Delete as me
DELETE FROM [TestTable] WHERE [c1] = 1;
GO

-- Now delete as Kimberly user
EXECUTE AS USER = N'KimberlyUser';
DELETE FROM [TestTable] WHERE [c1] = 2;
REVERT;
GO

-- Now delete as Kimberly login
EXECUTE AS LOGIN = N'KimberlyLogin';
DELETE FROM [TestTable] WHERE [c1] = 3;
REVERT;
GO

-- Now delete as Katelyn user
EXECUTE AS USER = N'KatelynUser';
DELETE FROM [TestTable] WHERE [c1] = 4;
REVERT;
GO

-- Now delete as Katelyn login
EXECUTE AS LOGIN = N'APPLECROSS\Katelyn';
DELETE FROM [TestTable] WHERE [c1] = 5;
REVERT;
GO

Finally I’ll pull the [Transaction SID] for each of the delete operations and pass it into SUSER_SNAME ():

SELECT
	[Operation], [Transaction Name], [Transaction SID],
	SUSER_SNAME ([Transaction SID]) AS [WhoDidIt?]
FROM fn_dblog (NULL, NULL)
WHERE [Operation] = N'LOP_BEGIN_XACT'
AND [Transaction Name] = N'DELETE';
GO
Operation       Transaction Name  Transaction SID                                             WhoDidIt?
--------------- ----------------- ----------------------------------------------------------- -------------------
LOP_BEGIN_XACT  DELETE            0x0105000000000005150000003A5014D05A957BF8F5C8882EE8030000  APPLECROSS\paul
LOP_BEGIN_XACT  DELETE            0x9A9A69BEACF67E4994E2F2DEE35BC02F                          KimberlyLogin
LOP_BEGIN_XACT  DELETE            0x9A9A69BEACF67E4994E2F2DEE35BC02F                          KimberlyLogin
LOP_BEGIN_XACT  DELETE            0x0105000000000005150000003A5014D05A957BF8F5C8882EFE030000  APPLECROSS\Katelyn
LOP_BEGIN_XACT  DELETE            0x0105000000000005150000003A5014D05A957BF8F5C8882EFE030000  APPLECROSS\Katelyn

So the answer is that the log record contains the SID of who you’re executing as. The only way to tell who is really running the code would be through auditing.
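As a quick illustration of the difference between the log and the live session: while impersonating, the session itself can still report who really connected via ORIGINAL_LOGIN (), but that information never makes it into the log record. A sketch, using the demo users above:

```sql
-- Within the session, impersonation is still detectable at runtime:
EXECUTE AS USER = N'KimberlyUser';
SELECT SUSER_SNAME ()    AS [ExecutingAs],   -- the impersonated security context
       ORIGINAL_LOGIN () AS [ReallyRunning]; -- the login that actually connected
REVERT;
```

That only helps while the session is alive, though – for after-the-fact analysis of the log, you’re back to auditing.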


Learning to listen

Fellow MVP Chris Shaw (b | t) wrote a blog post earlier this week on learning from mistakes and tagged me to write a similar post – so here it is.

Let me start with a story first. Back in April 1999 I was working for Microsoft on the SQL team as a developer in the Storage Engine team. I’d just joined a few months earlier after spending 5 years at DEC and certainly wasn’t a rookie developer, but I was new at Microsoft. I was writing the DBCC INDEXDEFRAG code and upgrading DBCC SHOWCONTIG (and merging in the functionality of the old, undocumented TVF fn_indexinfo), and I was pretty excited by it (and I still am – defrag/reorg is cool :-).

I was so excited that after I’d had the code for one major part of the project reviewed by a fellow dev, I checked it in. I remember standing in the corridor outside my office talking with that dev and a tester, when our boss came along and the conversation turned to my project. I said something like ‘Yup, I just checked in the next part’. My boss’s face started to turn pink and he said ‘Come into my office…’

I then got bawled out because I wasn’t supposed to check in before the boss had reviewed the code too. I messed up because I hadn’t been listening properly in our team meeting where code review policies had been explained. It wasn’t a nice experience and it was entirely my fault. I decided to start listening. Really listening.

Listening is a critical skill in life – both at work (interacting with clients and colleagues/friends) and at home (interacting with family and friends) – and in my opinion is the most important of the various communication skills (I talk about this and a lot more in my Pluralsight course Communications: How to Talk, Write, Present, and Get Ahead).

As much as possible I like to use what’s called ‘active listening’, where you’re making a conscious effort to focus on what is being said. This involves doing the following:

  • Stop typing and make sure the person can tell you’re paying attention. Even if you can type and listen at the same time, it’s rude and gives the impression that you’re not listening properly.
  • Make eye contact with the person (even just occasionally if that makes you feel uncomfortable). A lot of how humans express themselves involves using the face and eyes so nuanced conversations (e.g. business negotiations, spousal arguments) can be misunderstood without picking up on the visual clues.
  • Remove distractions that prevent you from listening (e.g. step away from a source of noise, turn your back on a television).
  • Ask for a repeat of something that’s not clear so you can make sure you understand it.
  • Summarize what you’ve just heard to make sure you heard what they think they said. This is a great way of being able to get someone to explain something again.

All of these things will make you a better listener, leading to a smoother work and home life. At least that’s the plan :-)

Occasionally I slip up and realize I didn’t listen properly to something important, and then I make doubly sure I’m doing it right for the next few times.

To end with, here are two quotes about listening that I like:

“I like to listen. I have learned a great deal from listening carefully. Most people never listen.” – Ernest Hemingway
“We have two ears and one mouth so that we can listen twice as much as we speak.” – Epictetus

Survey: using DBCC WRITEPAGE

In this survey I’d like to know what your experience is using the DBCC WRITEPAGE undocumented command.

This survey is closed – see the editorial here – thanks!

I’ll editorialize the results in a week or two.


Disaster recovery 101: restore master or rebuild master

It’s been a hectic January with client work and preparations for our 2014 classes so I haven’t blogged as much as I wanted, but I have lots of blog posts coming up over the next month!

A few weeks ago I kicked off a survey on whether you’ve ever tried or had to restore or rebuild master – thanks to all those who responded. The results are below:


The “other” values are:

  • 6 x “No neither and I’m ashamed”
  • 4 x “Done it years ago but not practiced recently”
  • 3 x “I both practice rebuilding and restoring and have also done both in DR scenarios”
  • 3 x “I’ve rebuilt master due to collation change”
  • 3 x “Moving the other databases to a new server seems easier…”
  • 2 x “I once carefully followed the documented procedure for restoring the master database and it failed every time. Currently, I have copies of the master data and log files saved for emergency restorations. Turn sql server off, if it isn’t already, copy the files…”
  • 1 x “I work for CSS – I feel like it’s cheating to say ‘Yep, both restore and rebuild, in a drill and in real life’”
  • 1 x “I’ve successfully practiced both rebuilding and restoring…but…only after being burned by having to rebuild master on production without backups”
  • 1 x “Rebuild master in demo situation”

I’m very pleased to see the results – almost 60% of respondents have successfully restored or rebuilt the master database, either in practice or for real.

For the rest of you, if you’re responsible for recovering SQL Server during a disaster, then IMHO there’s no excuse for either not knowing how to fix master or not having tried it. Here are a few reasons why:

  1. If master is damaged so that the instance will not start, you can either rebuild master or re-install SQL Server. Which amount of resultant downtime would you rather deal with?
  2. If you have to rebuild master, you’re going to have to:
    1. Restore your master backup, or:
      1. Reattach all databases (do you have a script for that handy?)
      2. Create all server-scoped objects (do you have a script for that handy too? And, do you know what all the server-scoped objects were before the disaster?)
    2. Restore your backup of msdb (you have that, right? Otherwise you’ve lost all your Agent Jobs, backup history, SSIS packages,…)
    3. Restore your backup of model (you have that too, right? Otherwise you’ve lost all the customizations you made.)
  3. If you have to restore master and you don’t know how, or you don’t have a backup of master, then you have to rebuild master – go to reason #2 above.

You need to practice this.

Restoring master is as simple as:

  1. Backup master
  2. Start the server with the -m startup parameter
  3. Restore your master backup
  4. The server shuts down automatically when the restore ends
  5. Remove the -m startup parameter and then restart the server
  6. If any databases were created after the master backup, reattach them
  7. If any server-scoped objects were created after the master backup, recreate them

And you’re off and running again.
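The steps above can be sketched as follows; the backup path is a placeholder, and you’d run the restore from a connection such as sqlcmd while the server is in single-user mode:

```sql
-- With the instance started with the -m startup parameter, connect
-- (e.g. via sqlcmd) and restore master. The path below is hypothetical.
RESTORE DATABASE [master]
FROM DISK = N'C:\SQLBackups\master_full.bak'
WITH REPLACE;
-- The instance shuts down automatically when the restore completes;
-- remove the -m parameter and restart the server normally.
```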

Steps 6 and 7 can be mitigated with documentation of all changes made to the instance (you all do that, right?) and making sure that a master backup is taken regularly (e.g. every night). I demonstrate it live when I’m teaching and I walk through the steps in a demo in my Advanced Corruption Recovery Techniques course on Pluralsight (Module 5).

Rebuilding master is also pretty simple and involves using the SQL Server installation media to run the setup.exe using the /ACTION=REBUILDDATABASE option (and maybe some others). Full details for SQL Server 2008 onward are in the Books Online topic Rebuild System Databases (and a bit more info in this post from my buddy Bob Ward). After that you’re going to have to walk through the steps in reason #2 above – so you better have backups of master, msdb, and model too. (For SQL Server 2005, you need to use setup.exe too and for SQL Server 2000 you need to use the old rebuildm utility – Google for people’s blogs and videos explaining how.)
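The rebuild itself happens outside T-SQL, via the installation media’s setup.exe. A sketch of what that looks like, followed by the system database restores you’d do afterwards – the instance name, accounts, password, and paths here are all placeholders; check the Rebuild System Databases topic for your version’s exact parameters:

```sql
-- From an elevated command prompt at the root of the installation media:
--
--   setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER
--       /SQLSYSADMINACCOUNTS="MYDOMAIN\DBAGroup" /SAPWD="YourStrongPassword"
--
-- Then restore your system database backups (placeholder paths):
RESTORE DATABASE [msdb]  FROM DISK = N'C:\SQLBackups\msdb_full.bak'  WITH REPLACE;
RESTORE DATABASE [model] FROM DISK = N'C:\SQLBackups\model_full.bak' WITH REPLACE;
```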

Restoring master is not hard. Rebuilding master is not hard. But the very fact that it’s master makes it a bit scary. And rightly so – if you mess it up you may be looking at a re-install. You certainly don’t want to be doing either of these for the first time ever during a real-life disaster recovery situation.

Practice, practice, practice – is the key to successful disaster recovery, no matter what’s involved.

(Check out my online training courses: SQL Server: Detecting and Correcting Database Corruption and SQL Server: Advanced Corruption Recovery Techniques. We can also help you with disaster recovery.)

Spring SQLintersection conference in April in Orlando

Now that 2014 is upon us, it’s time for us to start promoting the Spring show of our SQLintersection conference. This year the Spring show will be in Orlando, FL at the JW Marriott Grand Lakes during the week of April 14th, with pre-con and post-con workshops.

Remember that this is *our* show – we actually own the conference company – so we control all aspects of the show (including the speakers, the sessions, the workshops, the format, the schedule, and everything else) to make sure you get the best possible conference experience and the best possible return on your time and budget investment.

It’s NOT Connections

There was a lot of confusion last year about SQLintersection/DevIntersection vs. SQL/Dev Connections. Connections is the Penton Media conference we used to run as the SQL Conference Chairs. We haven’t been affiliated with Connections since late 2011 – and neither have the management team and track chairs of the co-located conferences that made Connections great – we’ve all been building SQLintersection/DevIntersection to be the conference we’ve always wanted without any external requirements/demands.

Who’s going to be there?

In this show, we have the following people speaking:

  • Aaron Bertrand, Sr. Consultant, SQL Sentry, Inc. (blog | twitter)
  • Andrew J. Kelly, Mentor, SolidQ (blog | twitter)
  • Bob Beauchemin, Development Partner, (blog | twitter)
  • Bob Ward, Principal Architect Escalation Engineer, Microsoft (blog | twitter)
  • Brent Ozar, Brent Ozar Unlimited (blog | twitter)
  • David Pless, Senior Premier Field Engineer, Microsoft (blog | twitter)
  • Erin Stellato, Principal Consultant, (blog | twitter)
  • Glenn Berry, Principal Consultant, (blog | twitter)
  • Grant Fritchey, Product Evangelist, Red Gate Software (blog | twitter)
  • Jeremiah Peschka, Brent Ozar Unlimited (blog | twitter)
  • Jonathan Kehayias, Principal Consultant, (blog | twitter)
  • Jos de Bruin, Senior Program Manager, Microsoft
  • Kendra Little, Managing Director, Brent Ozar Unlimited (blog | twitter)
  • Kevin Kline, Director of Engineering Services, SQL Sentry (blog | twitter)
  • Kimberly L. Tripp, President/Founder, (blog | twitter)
  • Mike Zwilling, Principal Architect, Microsoft
  • Paul S. Randal, CEO / Owner, (blog | twitter)
  • Steve Jones, Editor, (blog | twitter)
  • Tim Chapman, Premier Field Engineer, Microsoft (blog | twitter)

We’re going to have an informal Ask the Experts area with some of our speakers, including our two Premier Field Engineer friends David Pless and Tim Chapman; it’s where you can bring your own problems, network with other attendees, and just generally intersect with like-minded techies!

We’re also going to have 4 sessions from Microsoft on Hekaton – very cool!

To top it all off, we have five pre-con workshops and two post-con workshops:

  • Practical Disaster Recovery Techniques with me on Sunday, April 13
  • Queries Gone Wild: Real-world Solutions with Kimberly on Sunday, April 13
  • What’s New in SQL Server 2014 with Bob Beauchemin on Saturday, April 12
  • 50 Things All SQL Server Developers Need To Know! with Kevin Kline and Aaron Bertrand on Saturday, April 12
  • Developer’s Guide to SQL Server Operations with Jeremiah Peschka and Kendra Little on Sunday, April 13
  • Make SQL Server Apps Go Faster with Brent Ozar, Jeremiah Peschka, and Kendra Little on Thursday, April 17
  • Windows Azure SQL Database from A to Z with Bob Beauchemin on Thursday, April 17

And, of course, you get to pick and choose from sessions from the five co-located conferences on SharePoint, ASP.NET, Azure, Visual Studio, and AngleBrackets.

We hope to see you there!

You can get details about:

  • Speakers here
  • Sessions here
  • Workshops here
  • REGISTRATION here (please be sure to choose SQLintersection as the show you’ll be attending)

It’s going to be a really great show and we’re looking forward to seeing you there.

Survey: restoring and rebuilding master

In the first post of the year I’d like to kick off a survey. I’m interested to know whether you’ve ever practiced restoring or rebuilding master, or whether you’ve had to do either for real in a disaster recovery situation.

I’ll editorialize the results next week.