PASS Board of Directors Applications

Yesterday, August 7, 2013, was the last day to submit applications for the PASS Board of Directors election.  I have the honor of serving on the Nomination Committee this year, and while there is very little I can disclose, I do want to take a moment to extend a heartfelt thank you to those who submitted an application for a Board of Directors position.

Thank you for taking the time to complete the application; I know it can be daunting.

Thank you for taking the time to think through what you want to see happen as a Board member.  Whether you are elected or not, I hope that you will still endeavor to make those things happen.

Thank you for talking to your friends and colleagues and asking them to be references for you (and thank you to those who agreed to do so).

And thank you, in advance, for committing additional time to meet with the NomCom to share your story and your goals.

When you run for any office, or any board, in any arena, you take a risk.  You become vulnerable, and you put your own name out there for discussion.  That takes courage, and I applaud all of you for stepping forward and taking a risk so that you may serve the SQL Server community as a member of the PASS Board.  I wish all of you the best, and I am confident that three very qualified individuals will fill the Board positions this year.  Good luck!

What I Know For Sure…After One Year at SQLskills

Today, August 1, 2013, marks my one year anniversary as a member of the SQLskills team.  Simply put, it’s been a great year.  Challenging in many ways, exhausting at times, but absolutely what I wanted (and expected) to be doing in this role.  Over the past year I’ve been asked many times, “How’s your new job?!”  It’s not-so-new now, but since I didn’t blog much about the non-technical side of life during the past year, I thought I’d use this post to tell you about my new job.  Specifically, the five most important things I learned during the past year.

Talking, out loud, is important

Working for SQLskills means I work remotely, therefore I work from home.  This was quite an adjustment.  I knew it would significantly change the rhythm of each day, but I had no idea what it would look like.  I’ve considered writing about it many times, but a few months ago Merrill Aldrich wrote a post, Telecommuting, Month 9, that explained – very well – many of my own thoughts and observations.  In the comments my friend Jes Borland, who also works from home, clearly articulates one challenge of working remotely.

I found out that what I miss is being able to say, out loud, “I have this idea. What do you think of it?” and getting immediate feedback.

Yes.  YES!  I love the solitude of my office…having the entire house to myself.  Some days I don’t even turn on music or anything for background noise.  But when I want to talk about something, I want to talk about it right now…out loud (funny sidebar, this video makes me laugh…let’s taco ‘bout it).  Trying to discuss ideas over email or chat isn’t the same.  It doesn’t create the same excitement, or the cross-pollination of ideas that occurs during a true conversation.  As Joe says, “it’s where the magic happens.”  It’s true.

Half the battle is realizing the problem.  The other half is figuring out what to do about it.  I make notes about what I want to discuss, and then fire off an email or set up a WebEx.  Jon and I have had numerous late night WebEx sessions where we talk through something, and suddenly at 1 AM I find myself with a litany of post-it notes spread across my desk and ideas churning in my head.  I love those moments.  They are not as organic or spontaneous as they were in an office setting, but I can still make them happen with a little effort.

When theory meets execution

SQL Server is a vast product, and many of us have seen and done a lot…but we haven’t seen and done everything.  As such, there are scenarios and tasks that we’ve read about, that make sense, but we haven’t actually walked through on our own.  We know what’s required to set up an availability group.  We have the checklist, the steps are logical, we can estimate how long it will take, and we’ve read every supporting blog post and technical article we can find.  But I’ve yet to find anything that replaces the actual execution of the task.  In some cases, what’s expected is actually what happens.  And that’s a wonderful thing.  But there are other times where what is planned is not what occurs.  I like this quote I just read in Bob Knight’s book, The Power of Negative Thinking:

Don’t be caught thinking something is going to work just because you think it’s going to work.

Planning beats repairing.

Theory and execution are not always the same – it’s certainly nice when they are and when the implementation goes as planned.  But don’t rely on it.  Ultimately, practice and preparation are required to consistently ensure success.

Nothing can replace experience

If you’ve worked in technology a while, you know that a core skill is troubleshooting.  And to be good at troubleshooting, you must have an approach, a methodology that you follow as you work through an issue.  But to be really good at troubleshooting, you also need to recognize patterns.

I came into this role with many years of experience troubleshooting database issues.  But I spent the majority of that time looking at the same database, across different customer installations (if you don’t know my background, I used to work for a software vendor and as part of my job I supported the application database).  I became familiar with the usual database-related problems, and knew how to quickly identify and fix them.  We typically call this pattern matching, and I found it well explained in this excerpt from The Sports Gene, where it’s defined as “chunking.”  From the article:

… rather than grappling with a large number of individual pieces, experts unconsciously group information into a smaller number of meaningful chunks based on patterns they have seen before.

In the past year I’ve seen a lot of new patterns.  And some days were extremely frustrating because I would look at a problem, get stuck, and then ask another member of the team to look at the issue with me.  It was usually Jon, who would often look at the issue for a couple minutes and then say, “Oh it’s this.”  It was infuriating.  And I would ask Jon how he knew that was the problem.  And the first time I asked him I think he thought I was questioning whether he was right.  But in fact, I just wanted to know how he figured it out so quickly.  His response?  “I’ve seen it before.  Well maybe not this exact thing, but something similar.”  It’s pattern matching.  It’s chunking.  It’s experience.  You cannot read about it.  You cannot talk about it.  You just have to go get it.  And be patient.

I have a great team

I actually have two great teams: my team at work and my team at home.  I work with individuals who are experts in the SQL Server Community.  Their support is unwavering.  Their willingness to help knows no limits.  I am always appreciative for the time and the knowledge they share, and I am proud to not just work with them, but to call them friends.  To the SQLskills team: thank you for a fantastic first year – I look forward to what’s ahead!  (And happy birthday Glenn!)

My team at home is Team Stellato: my husband Nick and my two kids.  The first year of any job is an adventure, and for me there’s a lot of overhead – a lot of thought around what I’m doing, what I need to finish, what’s next, etc.  And much of that continues when I’m not at my desk.  I haven’t always been 100% present this past year, and over the last 12 months I’ve said, I don’t know how many times, that I’m still figuring it out.  And I am still figuring it out.  It’s hard to balance everything.  It’s hard to stay in the moment all the time.  I firmly believe I can do it, but I also believe I can do it better than I’m doing it today.  Thank you Nick for just being you – being supportive, understanding, and patient, and for making me laugh.  We’ll get there.  And thank you to my kids for trying to understand that being at home and being available aren’t always the same thing.  This year I will do better at being present during our time.

Make time for the gym

The last item to mention is something I need to be successful, but it may not be necessary for everyone.  It’s exercise.  It seems pretty straightforward, right?  For some reason it’s a continual battle I fight in my head.  I don’t always have enough hours in the day to get done what I want to get done, so something has to give.  I’m very quick to sacrifice a run, a spin class, or a hot yoga session.  My thought process is: “I will need 30/60/90 minutes for that workout.  That’s time I could spend working/hanging out with my family/having lunch with a friend.”  But when I give up that workout multiple days in a row, my mental and emotional health suffer…more than my physical health.  A workout clears my head – solutions come faster, ideas flow easier, I am more focused when I need to be – and it reduces my stress.  It’s ironic if you think about it…making time to work out introduces this stress (“Can I do everything?!”) but the act of working out makes everything else I need to do so much easier.  And it’s not about how far I run, or how many classes I get to in a week.  It’s the workout itself – whether it’s an intense 50 minutes of spin, a 1.5-mile run while the kids bike, or an hour in the yoga studio.

Year 2 and beyond

So, how’s my new job?  It’s great.  In many ways it is exactly what I expected, and in other ways it’s not – and that’s not a bad thing.  I didn’t anticipate every challenge I would have in working from home, but I am not afraid of them, nor do I think they’re unconquerable.  I have learned how to step back and critically look at where I am in my career, and evaluate what’s working well and what isn’t.  And this is working well.  It’s hard – hard because I am learning a ton and juggling many things, and that can be exhausting.  But I wouldn’t want it any other way.  I hate to be bored!  I absolutely love working with people who know so much, because it reminds me how much there is to know and what I can learn.  It is a fantastic motivator for me.  And the SQLskills team is fun.  A little weird at times :) but very fun and extremely supportive.  I cannot explain the importance of that, for me, enough.  And so begins year 2, let’s see what adventures this brings…IE0 anyone?!!

The Accidental DBA (Day 26 of 30): Monitoring Disk I/O

This month the SQLskills team is presenting a series of blog posts aimed at helping Accidental/Junior DBAs ‘keep the SQL Server lights on’. It’s a little taster to let you know what we cover in our Immersion Event for The Accidental/Junior DBA, which we present several times each year. If you know someone who would benefit from this class, refer them and earn a $50 Amazon gift card – see class pages for details. You can find all the other posts in this series at Enjoy!

Database storage can seem like a black box. A DBA takes care of databases, and those databases often reside somewhere on a SAN – space simply presented to a DBA as a drive letter representing some amount of space. But storage is about more than a drive letter and a few hundred GBs. Yes, having enough space for your database files is important. But I often see clients plan for capacity and not performance, and this can become a problem down the road. As a DBA, you need to ask your storage admin not just for space, but for throughput, and the best way to back up your request is with data.

I/O Data in SQL Server

I’ve mentioned quite a few DMVs in these Accidental DBA posts, and today is no different. If you want to look at I/O from within SQL Server, you want to use the sys.dm_io_virtual_file_stats DMV. Prior to SQL Server 2005 you could get the same information using the fn_virtualfilestats function, so don’t despair if you’re still running SQL Server 2000!  Paul has a query that I often use to get file information in his post, How to examine IO subsystem latencies from within SQL Server. The sys.dm_io_virtual_file_stats DMV accepts database_id and file_id as inputs, but if you join over to sys.master_files, you can get information for all your database files. If I run this query against one of my instances, and order by write latency (desc) I get:

output from sys.dm_io_virtual_file_stats

This data makes it look like I have some serious disk issues – a write latency of over 1 second is disheartening, especially considering I have an SSD in my laptop! I include this screenshot because I want to point out that this data is cumulative. It only resets on a restart of the instance. You can initiate large I/O operations – such as index rebuilds – against a database that can greatly skew your data, and it may take time for the data to normalize again. Keep this in mind not only when you view the data at a point in time, but when you share findings with other teams. Joe has a great post that talks about this in more detail, Avoid false negatives when comparing sys.dm_io_virtual_file_stats data to perfmon counter data, and the same approach applies to data from storage devices that your SAN administrators may use.
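Paul’s post (linked above) has the full query; purely as a sketch of its shape – the aliases and divide-by-zero guards here are my own – it looks something like this:

```sql
-- Per-file I/O statistics with calculated average latencies,
-- ordered by write latency (a sketch; see Paul's post for the full query)
SELECT
    DB_NAME(vfs.database_id) AS [database],
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    CASE WHEN vfs.num_of_reads = 0 THEN 0
         ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
         ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
    AND vfs.file_id = mf.file_id
ORDER BY avg_write_latency_ms DESC;
```

Because the counters are cumulative, the numbers only become truly meaningful when you can compare two points in time.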

The information in the sys.dm_io_virtual_file_stats DMV is valuable not only because it shows latencies, but also because it tells you which files have the highest number of reads and writes, and the most MB read and written. You can determine which databases (and files) are your heavy hitters and trend that over time to see if it changes and how.
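To find those heavy hitters, a rough aggregation over the same DMV (the grouping and unit conversion are my own choices) might look like:

```sql
-- Total I/O volume per database, to identify the heavy hitters
SELECT
    DB_NAME(database_id) AS [database],
    SUM(num_of_reads) AS total_reads,
    SUM(num_of_writes) AS total_writes,
    SUM(num_of_bytes_read) / 1048576 AS total_mb_read,
    SUM(num_of_bytes_written) / 1048576 AS total_mb_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id
ORDER BY total_mb_written DESC;
```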

I/O Data in Windows

If you want to capture I/O data from Windows, Performance Monitor is your best bet. I like to look at the following counters for each disk:

  • Avg. Disk sec/Read
  • Avg. Disk Bytes/Read
  • Avg. Disk sec/Write
  • Avg. Disk Bytes/Write

Jon talked about PerfMon counters earlier in this series, and the counters above tell you about latency and throughput.  Latency is how long it takes for an I/O request to complete, but this can be measured at different points along the layers of a solution.  Normally we are concerned with latency as measured from SQL Server.  Within Windows, latency is the time from when Windows initiated the I/O request to the completion of the request.  As Joe mentioned in his post, you may see some variation between what you see for latency from SQL Server versus from Windows.

When we measure latency using Windows Performance Monitor, we look at Avg. Disk sec/Read and Avg. Disk sec/Write. Disk cache – whether on the disk, a controller card, or the storage system – impacts read and write values. Writes are typically written to cache and should complete very quickly. Reads, when not in cache, have to be pulled from disk, and that can take longer.

While it’s easy to think of latency as being entirely related to disk, it’s not. Remember that we’re really talking about the I/O subsystem, and that includes the entire path from the server itself all the way to the disks and back. That path includes things like HBAs in the server, switches, controllers in the SAN, cache in the SAN, and the disks themselves. You can never assume that latency is high because the disks can’t keep up. Sometimes the queue depth setting for the HBAs is too low, or perhaps you have an intermittently bad connection with a failing component like a GBIC (gigabit interface converter), or maybe a bad port card. You have to take the information you have (latency), share it with your storage team, and ask them to investigate. And hopefully you have a savvy storage team that knows to investigate all parts of the path.

A picture is worth a thousand words in more complex environments. It is often best to draw out, with the storage administrator, the mapping from the OS partition to the SAN LUN or volume. This should generate a discussion about the server, the paths to the SAN, and the SAN itself. Remember, what matters is getting the I/O to the application. If the I/O leaves the disk but gets stuck along the way, that adds to latency. There could be an alternate path available (multi-pathing), but maybe not.

Our throughput, measured by Avg. Disk Bytes/Read and Avg. Disk Bytes/Write, tells us how much data is moving between the server and storage. This is valuable to understand, and often more useful than counting I/Os, because we can use it to understand how much data our disks will need to be able to read and write to keep up with demand. Ideally you capture this information when the system is optimized – simple things like adding indexes to reduce full table scans can affect the amount of I/O – but often you will need to just work within the current configuration.

Capturing Baselines

I alluded to baselines when discussing the sys.dm_io_virtual_file_stats DMV, and if you thought I was going to leave it at that then you must not be aware of my love for baselines!

You will want to capture data from SQL Server and Windows to provide throughput data to your storage administrator. You need this data to procure storage on the SAN that will not only give you enough space to accommodate expected database growth, but that will also give you the IOPs and MB/sec your databases require.

Beyond a one-time review of I/O and latency numbers, you should set up a process to capture the data on a regular basis so you can identify if things change and when. You will want to know if a database suddenly starts issuing more I/Os (did someone drop an index?) or if the change in I/Os is gradual. And you need to make sure that I/Os are completing in the timeframe that you expect. Remember that a SAN is shared storage, and you don’t always know with whom you’re sharing that storage. If another application with high I/O requirements is placed on the same set of disks, and your latency goes up, you want to be able to pinpoint that change and provide metrics to your SAN administrator that support the change in performance in your databases.
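A minimal sketch of such a capture process – the table name and schedule are hypothetical – is to snapshot the cumulative counters into a table (for example, from a SQL Agent job) and difference consecutive snapshots when you analyze:

```sql
-- Hypothetical baseline table; snapshot the cumulative DMV counters on a
-- schedule, then diff consecutive rows to get activity per interval
CREATE TABLE dbo.FileStatsBaseline
(
    capture_date         DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    database_id          INT       NOT NULL,
    [file_id]            INT       NOT NULL,
    num_of_reads         BIGINT    NOT NULL,
    num_of_bytes_read    BIGINT    NOT NULL,
    io_stall_read_ms     BIGINT    NOT NULL,
    num_of_writes        BIGINT    NOT NULL,
    num_of_bytes_written BIGINT    NOT NULL,
    io_stall_write_ms    BIGINT    NOT NULL
);

-- Run this on a schedule (e.g., every 15 minutes via SQL Agent)
INSERT INTO dbo.FileStatsBaseline
    (database_id, [file_id], num_of_reads, num_of_bytes_read, io_stall_read_ms,
     num_of_writes, num_of_bytes_written, io_stall_write_ms)
SELECT database_id, [file_id], num_of_reads, num_of_bytes_read, io_stall_read_ms,
       num_of_writes, num_of_bytes_written, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);
```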


As a DBA you need to know how your databases perform when it comes to reads and writes, and it’s a great idea to get to know your storage team. It’s also a good idea to understand where your databases really “live” and what other applications share the same storage. When a performance issue comes up, use your baseline data as a starting point, and don’t hesitate to pull in your SAN administrators to get more information. While there’s a lot of data readily available for DBAs to use, you cannot get the entire picture on your own. It may not hurt to buy your storage team some pizza or donuts and make some new friends :) Finally, if you’re interested in digging deeper into the details of SQL Server I/O, I recommend starting with Bob Dorr’s work.

The Accidental DBA (Day 25 of 30): Wait Statistics Analysis


For the last set of posts in our Accidental DBA series we’re going to focus on troubleshooting, and I want to start with Wait Statistics.  When SQL Server executes a task, if it has to wait for anything – a lock to be released from a page, a page to be read from disk into memory, a write to the transaction log to complete – then SQL Server records that wait and the time it had to wait.  This information accumulates, and can be queried using the sys.dm_os_wait_stats DMV, which was first available in SQL Server 2005.  Since then, the waits and queues troubleshooting methodology has been a technique DBAs can use to identify problems, and areas for optimizations, within an environment.

If you haven’t worked with wait statistics, I recommend starting with Paul’s wait stats post, and then working through Tom Davidson’s SQL Server 2005 Waits and Queues whitepaper.

Viewing Wait Statistics

If you run the following query:

SELECT *
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

You will get back output that isn’t that helpful, as you can see below:

sys.dm_os_wait_stats output

It looks like FT_IFTS_SCHEDULER_IDLE_WAIT is the biggest wait, and SQL Server’s waited for 1930299679 ms total.  This is kind of interesting, but not what I really need to know.  How do I really use this data?  It needs some filtering and aggregation.  There are some waits that aren’t going to be of interest because they occur all the time and are irrelevant for our purposes; we can filter out those wait types.  To make the most of our wait stats output, I really want to know the highest wait based on the percentage of time spent waiting overall, and the average wait time for that wait.  The query that I use to get this information is the one from Paul’s post (mentioned above).  I won’t paste it here (you can get it from his post) but if I run that query against my instance, now I get only three rows in my output:

sys.dm_os_wait_stats output with wait_types filtered out

If we reference the various wait types listed in the MSDN entry for sys.dm_os_wait_stats, we see that the SQLTRACE_WAIT_ENTRIES wait type, “Occurs while a SQL Trace event queue waits for packets to arrive on the queue.”

Well, this instance is on my local machine and isn’t very active, so that wait is likely due to the default trace that’s always running.  In a production environment, I probably wouldn’t see that wait, and if I did, I’d check to see how many SQL Traces were running.  But for our purposes, I’m going to add that as a wait type to filter out, and then re-run the query.  Now there are more rows in my output, and the percentage for the PAGEIOLATCH_SH and LCK_M_X waits has changed:

sys.dm_os_wait_stats output with SQLTRACE_WAIT_ENTRIES also filtered out

If you review the original query, you will see that the percentage calculation for each wait type uses the wait_time_ms for the wait divided by the SUM of wait_time_ms for all waits.  But “all waits” are those wait types not filtered by the query.  Therefore, as you change what wait types you do not consider, the calculations will change.  Keep this in mind when you compare data over time or with other DBAs in your company – it’s a good idea to make sure you’re always running the same query that filters out the same wait types.
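To illustrate the mechanics (this is not a replacement for Paul’s query – the exclusion list here is deliberately abbreviated, and the wait types named are just examples), the percentage and average-wait calculations look roughly like this:

```sql
-- Sketch of the filtering + aggregation described above; the real query
-- excludes many more benign wait types than the handful listed here
WITH waits AS
(
    SELECT wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN  -- abbreviated exclusion list, for illustration only
        (N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'SQLTRACE_WAIT_ENTRIES',
         N'LAZYWRITER_SLEEP', N'SLEEP_TASK')
)
SELECT
    wait_type,
    wait_time_ms,
    -- percentage of total wait time for the non-filtered wait types
    CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER() AS DECIMAL(5,2)) AS pct,
    CASE WHEN waiting_tasks_count = 0 THEN 0
         ELSE wait_time_ms / waiting_tasks_count END AS avg_wait_ms
FROM waits
ORDER BY wait_time_ms DESC;
```

Note that the percentage is computed over only the wait types that survive the filter, which is exactly why changing the exclusion list changes the numbers.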

Capturing Wait Statistics

So far I’ve talked about looking at wait statistics at a point in time.  As a DBA, you want to know what waits are normal for each instance.  And there will be waits for every instance; even if it’s highly tuned or incredibly low volume, there will be waits.  You need to know what’s normal, and then use those values when the system is not performing well.

The easiest way to capture wait statistics is to snapshot the data to a table on a regular basis, and you can find queries for this process in my Capturing Baselines for SQL Server: Wait Statistics article on  Once you have your methodology in place to capture the data, review it on a regular basis to understand your typical waits, and identify potential issues before they escalate.  When you do discover a problem, then you can use wait statistics to aid in your troubleshooting.
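At its simplest, the snapshot process is just an INSERT…SELECT into a baseline table on a schedule (the table name here is hypothetical; the article referenced above has a more complete version):

```sql
-- Hypothetical table for periodic wait stats snapshots
CREATE TABLE dbo.WaitStatsBaseline
(
    capture_date        DATETIME2    NOT NULL DEFAULT SYSDATETIME(),
    wait_type           NVARCHAR(60) NOT NULL,
    waiting_tasks_count BIGINT       NOT NULL,
    wait_time_ms        BIGINT       NOT NULL,
    signal_wait_time_ms BIGINT       NOT NULL
);

-- Run on a schedule; diff consecutive snapshots to see waits per interval
INSERT INTO dbo.WaitStatsBaseline
    (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats;
```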

Using the Data

At the time that you identify a problem in your environment, a good first step is to run your wait statistics query and compare the output to your baseline numbers.  If you see something out of the ordinary, you have an idea where to begin your investigation.  But that’s it; wait statistics simply tell you where to start searching for your answer.  Do not assume that your highest wait is the problem, or even that it’s a problem at all.  For example, a common top wait is CXPACKET, and CXPACKET waits indicate that parallelism is used, which is expected in a SQL Server environment.  If that’s your top wait, does that mean you should immediately change the MAXDOP setting for the instance?  No.  You may end up changing it down the road, but a better direction is to understand why that’s the highest wait.  You may have CXPACKET waits because you’re missing some indexes and there are tons of table scans occurring.  You don’t need to change MAXDOP, you need to start tuning.

Another good example is the WRITELOG wait type.  WRITELOG waits occur when SQL Server is waiting for a log flush to complete.  A log flush occurs when information needs to be written to the database’s transaction log.  A log flush should complete quickly, because when there is a delay in a log write, then the task that initiated the modification has to wait, and tasks may be waiting behind that.  But a log flush doesn’t happen instantaneously every single time, so you will have WRITELOG waits.  If you see WRITELOG as your top wait, don’t immediately assume you need new storage.  You should only assume that you need to investigate further.  A good place to start would be looking at read and write latencies, and since I’ll be discussing monitoring IO more in tomorrow’s post we’ll shelve that discussion until then.
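As a preview of that discussion, one quick follow-up to WRITELOG waits is to check write latency for the transaction log files specifically – a sketch using sys.dm_io_virtual_file_stats (aliases are my own):

```sql
-- Average write latency for transaction log files only
SELECT
    DB_NAME(vfs.database_id) AS [database],
    mf.physical_name,
    vfs.num_of_writes,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
         ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
    AND vfs.file_id = mf.file_id
WHERE mf.type_desc = 'LOG'        -- log files only
ORDER BY avg_write_latency_ms DESC;
```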

As you can see from these two examples, wait statistics are a starting point.  They are very valuable – it’s easy to think of them as “the answer”, but they’re not.  Wait statistics do not tell you the entire story about a SQL Server implementation.  There is no one “thing” that tells you the entire story, which is why troubleshooting can be incredibly frustrating, yet wonderfully satisfying when you find the root of a problem.  Successfully troubleshooting performance issues in SQL Server requires an understanding of all the data available to aid in your discovery and investigation, understanding where to start, and what information to capture to correlate with other findings.

The Accidental DBA (Day 23 of 30): SQL Server HA/DR Features


Two of the most important responsibilities for any DBA are protecting the data in a database and keeping that data available.  As such, a DBA may be responsible for creating and testing a disaster recovery plan, and creating and supporting a high availability solution.  Before you create either, you have to know your RPO and RTO, as Paul talked about a couple weeks ago.  Paul also discussed what you need to consider when developing a recovery strategy, and yesterday Jon covered considerations for implementing a high availability solution.

In today’s post, I want to provide some basic information about disaster recovery and high availability solutions used most often.  This overview will give you an idea of what options might be a fit for your database(s), but you’ll want to understand each technology in more detail before you make a final decision.


No matter what type of implementation you support, you need a disaster recovery plan.  Your database may not need to be highly available, and you may not have the budget to create a HA solution even if the business wants one.  But you must have a method to recover from a disaster.  Every version, and every edition, of SQL Server supports backup and restore.  A bare bones DR plan requires a restore of the most recent database backups available – this is where backup retention comes into play.  Ideally you have a location to which you can restore.  You may have a server and storage ready to go, 500 miles away, just waiting for you to restore the files.  Or you may have to purchase that server, install it from the ground up, and then restore the backups.  While the plan itself is important, what matters most is that you have a plan.
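That bare-bones plan really is just backup and restore; in miniature (the database name and paths here are placeholders):

```sql
-- On the production server: full backup to an off-server location
BACKUP DATABASE SalesDB
TO DISK = N'\\backupserver\sql\SalesDB_full.bak'
WITH CHECKSUM, INIT;

-- On the recovery server: restore the most recent backup
RESTORE DATABASE SalesDB
FROM DISK = N'\\backupserver\sql\SalesDB_full.bak'
WITH CHECKSUM, RECOVERY;
```

The hard part isn’t the syntax, it’s having current backups in a surviving location and a server to restore them to.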

Log Shipping

Log shipping exists on a per-user-database level and requires the database recovery model to use either full or bulk-logged recovery (see Paul’s post for a primer on the differences).  Log shipping is easy to understand – it’s backup from one server and restore on another – but the process is automated through jobs.  Log shipping is fairly straightforward to configure and you can use the UI or script it out (prior to SQL Server 2000 there was no UI).  Log shipping is available in all currently supported versions of SQL Server, and all editions.
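Under the covers, the log shipping jobs automate a simple backup/copy/restore loop; in miniature (database name and paths are placeholders):

```sql
-- On the primary: back up the transaction log
BACKUP LOG SalesDB
TO DISK = N'\\share\logship\SalesDB_log.trn';

-- On the secondary, after the file is copied over:
-- STANDBY allows read-only access between restores;
-- NORECOVERY would keep the database restoring-only
RESTORE LOG SalesDB
FROM DISK = N'\\share\logship\SalesDB_log.trn'
WITH STANDBY = N'D:\logship\SalesDB_undo.bak';
```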

You can log ship to multiple locations, creating additional redundancy, and you can configure a database for log shipping if it’s the primary database in a database mirroring or availability group configuration.  You can also use log shipping when replication is in use.

With log shipping you can allow limited read-only access on secondary databases for reporting purposes (make sure you understand the licensing impact), and you can take advantage of backup compression to reduce the size of the log backups and therefore decrease the amount of data sent between locations.  Note: backup compression was first available only in SQL Server 2008 Enterprise, but starting in SQL Server 2008 R2 it was available in Standard Edition.

While Log Shipping is often used for disaster recovery, you can use it as a high availability solution, as long as you can accept some amount of data loss and some amount of downtime.  Alternatively, in a DR scenario, if you implement a longer delay between backup and restore, then if data is changed or removed from the primary database – either purposefully or accidentally – you can possibly recover it from the secondary.

Failover Cluster Instance

A Failover Cluster Instance (also referred to as FCI or SQL FCI) exists at the instance level and can seem scary to newer DBAs because it requires a Windows Server Failover Cluster (WSFC).  A SQL FCI usually requires more coordination with other teams (e.g. server, storage) than other configurations.  But clustering is not incredibly difficult once you understand the different parts involved.  A Cluster Validation Tool was made available in Windows Server 2008, and you should ensure the supporting hardware successfully passes its configuration tests before you install SQL Server, otherwise you may not be able to get your instance up and running.

SQL FCIs are available in all currently supported versions of SQL Server, and can be used with Standard Edition (2 nodes only), Business Intelligence Edition in SQL Server 2012 (2 nodes only), and Enterprise Edition.  The nodes in the cluster share the same storage, so there is only one copy of the data.  If a failure occurs for a node, SQL Server fails over to another available node.

If you have a two-node WSFC with only one instance of SQL Server, one of the nodes is always unused, basically sitting idle.  Management may view this as a waste of resources, but understand that it is there as insurance (that second node is there to keep SQL Server available if the first node fails).  You can install a second SQL Server instance and use log shipping or mirroring with snapshots to create a secondary copy of the database for reporting (again, pay attention to licensing costs).  Or, those two instances can both support production databases, creating a better use of the hardware.  However, be aware of resource utilization when a node fails and both instances run on the same node.

Finally, a SQL FCI can provide intra-data center high availability, but because it uses shared storage, you do have a single point of failure.  A SQL FCI can be used for cross-data center disaster recovery if you use multi-site SQL FCIs in conjunction with storage replication.  This does require a bit more work and configuration, because you have more moving parts, and it can become quite costly.

Database Mirroring

Database mirroring is configured on a per-user-database basis and the database must use the Full recovery model.  Database mirroring was introduced in SQL Server 2005 SP1 and is available in Standard Edition (synchronous only) and Enterprise Edition (synchronous and asynchronous).  A database can be mirrored to only one secondary server, unlike log shipping.

Database mirroring is extremely easy to configure using the UI or scripting.  A third instance of SQL Server, configured as a witness, can detect the availability of the primary and mirror servers.  In synchronous mode with automatic failover, if the primary server becomes unavailable and the witness can still see the mirror, failover will occur automatically if the database is synchronized.
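To give you a feel for it, here's a minimal T-SQL sketch of establishing the partnership.  The server, endpoint, and database names are made up, and it assumes the mirroring endpoints already exist and the database has been restored WITH NORECOVERY on the mirror:

```sql
-- On the mirror instance, point the restored database at the principal:
ALTER DATABASE [Sales]
    SET PARTNER = N'TCP://PrincipalServer.domain.com:5022';

-- On the principal instance, point the database at the mirror:
ALTER DATABASE [Sales]
    SET PARTNER = N'TCP://MirrorServer.domain.com:5022';

-- Optionally (on the principal), add a witness to enable automatic failover:
ALTER DATABASE [Sales]
    SET WITNESS = N'TCP://WitnessServer.domain.com:5022';
```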

Note that you cannot mirror a database that contains FILESTREAM data, and mirroring is not appropriate if you need multiple databases to failover simultaneously, or if you use cross-database transactions or distributed transactions.  Database mirroring is considered a high availability solution, but it can also be used for disaster recovery, assuming the lag between the primary and mirror sites is not so great that the mirror database is too far behind the primary for RPO to be met.  If you’re running Enterprise Edition, snapshots can be used on the mirror server for point-in-time reporting, but there’s a licensing cost that comes with reading off the mirror server (as opposed to if it’s used only when a failover occurs).
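For illustration, creating one of those snapshots on the mirror server looks something like this.  The database name, snapshot name, and file path are hypothetical, and the NAME value must match the logical file name of the source database:

```sql
-- Create a point-in-time, read-only snapshot of the mirror database [Sales]
CREATE DATABASE [Sales_Snapshot_20130801]
ON ( NAME = N'Sales_Data',
     FILENAME = N'M:\Snapshots\Sales_Snapshot_20130801.ss' )
AS SNAPSHOT OF [Sales];
```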

Availability Groups

Availability groups (AGs) were introduced in SQL Server 2012 and require Enterprise Edition.  AGs are configured for one or more databases, and if a failover occurs, the databases in a group fail over together.  An AG supports up to four secondary replicas, and up to two of those secondaries can be synchronous – giving three synchronous replicas in total, counting the primary, where database mirroring allowed only one synchronous secondary.  Failover in an Availability Group can be automatic or manual.  Availability Groups do require a Windows Server Failover Cluster (WSFC), but do not require a SQL FCI.  An AG can be hosted on SQL FCIs, or on standalone servers within the WSFC.
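To give you an idea of what's involved, here's a simplified sketch of creating a two-replica AG.  The server, endpoint, and database names are hypothetical, and it assumes the WSFC is configured, the endpoints exist, and the database has been restored WITH NORECOVERY on the secondary:

```sql
-- Run on the primary replica
CREATE AVAILABILITY GROUP [SalesAG]
FOR DATABASE [Sales]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE1.domain.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC ),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE2.domain.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE ( ALLOW_CONNECTIONS = READ_ONLY ) );

-- Then, on the secondary replica:
-- ALTER AVAILABILITY GROUP [SalesAG] JOIN;
-- ALTER DATABASE [Sales] SET HADR AVAILABILITY GROUP = [SalesAG];
```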

Availability Groups allow read-only replicas that receive low-latency streaming updates, so you can offload reporting to another server and have it be near real-time.  Availability Groups offer some fantastic functionality, but just as with a SQL FCI, there are many moving parts; the DBA cannot work in a vacuum for this solution – it requires a group effort.  Make friends with the server team, the storage team, the network folks, and the application team.

Transactional Replication

Transactional Replication gets a shout out here, even though it is not always considered a high availability solution, as Paul discusses in his post, In defense of transaction replication as an HA technology.  But it can work as a high availability solution provided you can accept its limitations.  For example, there is no easy way to fail back to the primary site…however, I would argue this is true for log shipping as well, because log shipping requires you to back up and restore (easy but time consuming).  In addition, with transactional replication you don't have a byte-for-byte copy of the publisher database, as you do with log shipping, database mirroring, or availability groups.  This may be a deal-breaker for some, but it may be quite acceptable for your database(s).

Transactional Replication is available in all currently supported versions and in Standard and Enterprise Editions, and may also be a viable option for you for disaster recovery.  It’s important that you clearly understand what it can do, and what it cannot, before you decide to use it.  Finally, replication in general isn’t for the faint of heart.  It has many moving parts and can be overwhelming for an Accidental DBA.  Joe has a great article on SQL Server Pro that covers how to get started with transactional replication.


As we’ve seen, there are many options available that a DBA can use to create a highly available solution and/or a system that can be recovered in the event of a disaster.  It all starts with understanding how much data you can lose (RPO) and how long the system can be unavailable (RTO), and you work from there.  Remember that the business needs to provide RPO and RTO to you, and then you create the solution based on that information.  When you present the solution back to the business, or to management, make sure it is a solution that YOU can support.  As an Accidental DBA, whatever technology you choose must be one with which you’re comfortable, because when a problem occurs, you will be the one to respond and that’s not a responsibility to ignore.  For more information on HA and DR solutions I recommend the following:

The Accidental DBA (Day 19 of 30): Tools for On-Going Monitoring

This month the SQLskills team is presenting a series of blog posts aimed at helping Accidental/Junior DBAs ‘keep the SQL Server lights on’. It’s a little taster to let you know what we cover in our Immersion Event for The Accidental/Junior DBA, which we present several times each year. If you know someone who would benefit from this class, refer them and earn a $50 Amazon gift card – see class pages for details. You can find all the other posts in this series at Enjoy!

In yesterday’s post I covered the basics of baselines and how to get started.  In addition to setting up baselines, it’s a good idea to get familiar with some of the free tools available to DBAs that help with continued monitoring of a SQL Server environment.

Performance Monitor and PAL

I want to start with Performance Monitor (PerfMon).  I’ve been using PerfMon since I started working with computers and it is still one of my go-to tools.  Beginning in SQL Server 2005, Dynamic Management Views and Functions (DMVs and DMFs) were all the rage, as they exposed so much more information than had been available to DBAs before.  (If you don’t believe me, try troubleshooting a parameter sniffing issue in SQL Server 2000.)  But PerfMon is still a viable option because it provides information about Windows as well as SQL Server.  There are times that it’s valuable to look at that data side-by-side.  PerfMon is on every Windows machine, it’s reliable, and it’s flexible.  It provides numerous configuration options, not to mention all the different counters that you can collect.  You have the ability to tweak it for different servers if needed, or just use the same template every time.  It allows you to generate a comprehensive performance profile of a system for a specified time period, and you can look at performance in real time.

If you’re going to use PerfMon regularly, take some time to get familiar with it. When viewing live data, I like to use config files to quickly view counters of interest.  If I’ve captured data over a period of time and I want to quickly view and analyze it, I use PAL.  PAL stands for Performance Analysis of Logs and it’s written and managed by some folks at Microsoft.  You can download PAL from CodePlex, and if you don’t already have it installed, I recommend you do it now.

Ok, once PAL is installed, set up PerfMon to capture some data for you.  If you don’t know which counters to capture, don’t worry.  PAL comes with default templates that you can export and then import into PerfMon and use immediately.  That’s a good start, but to get a better idea of what counters are relevant for your SQL Server solution, plan to read Jonathan’s post on essential PerfMon counters (it goes live this Friday, the 21st).  Once you’ve captured your data, you can then run it through PAL, which will do all the analysis for you and create pretty graphs.  For step-by-step instructions on how to use PAL, and to view some of those lovely graphs, check out this post from Jonathan, Free Tools for the DBA: PAL Tool.  Did you have any plans for this afternoon?  Cancel them; you’ll probably have more fun playing with data.

SQL Trace and Trace Analysis Tools

After PerfMon, my other go-to utility was SQL Trace.  Notice I said “was.”  As much as I love SQL Trace and its GUI Profiler, they’re deprecated in SQL Server 2012.  I’ve finally finished my mourning period and moved on to Extended Events.  However, many of you are still running SQL Server 2008R2 and earlier so I know you’re still using Trace.  How many of you are still doing analysis by pushing the data into a table and then querying it?  Ok, put your hands down, it’s time to change that.  Now you need to download ClearTrace and install it.

ClearTrace is a fantastic, light-weight utility that will parse and normalize trace files.  It uses a database to store the parsed information, then queries it to show aggregated information from one trace file, or a set of files.  The tool is very easy to use – you can sort queries based on reads, CPU, duration, etc.  And because the queries are normalized, if you group by the query text you can see the execution count for the queries.

A second utility, ReadTrace, provides the same functionality as ClearTrace, and more.  It’s part of RML Utilities, a set of tools developed and used by Microsoft.  ReadTrace provides the ability to dig a little deeper into the trace files, and one of the big benefits is that it allows you to compare two trace files.  ReadTrace also stores information in a database, and normalizes the data so you can group by query text, or sort by resource usage.  I recommend starting with ClearTrace because it’s very intuitive to use, but once you’re ready for more powerful analysis, start working with ReadTrace.  Both tools include well-written documentation.

Note: If you’re a newer DBA and haven’t done much with Trace, that’s ok.  Pretend you’ve never heard of it, embrace Extended Events.


If you’re already familiar with the tools I’ve mentioned above, and you want to up your game, then the next utility to conquer is SQLNexus.  SQLNexus analyzes data captured by SQLDiag and PSSDiag, utilities shipped with SQL Server that Microsoft Product Support uses when troubleshooting customer issues.  The default templates for SQLDiag and PSSDiag can be customized, by you, to capture any and all information that’s useful and relevant for your environment, and you can then run that data through SQLNexus for your analysis.  It’s pretty slick and can be a significant time-saver, but the start-up time is higher than with the other tools I’ve mentioned.  It’s powerful in that you can use it to quickly capture point-in-time representations of performance, either as a baseline or as a troubleshooting step.  Either way, you’re provided with a comprehensive set of information about the solution – and again, you can customize it as much as you want.

Essential DMVs for Monitoring

In SQL Server 2012 SP1 there are 178 Dynamic Management Views and Functions.  How do you know which ones are the most useful when you’re looking at performance?  Luckily, Glenn has a great set of diagnostic queries to use for monitoring and troubleshooting.  You can find the queries on Glenn’s blog, and he updates them as needed, so make sure you follow his blog or check back regularly to get the latest version.  And even though I rely on Glenn’s scripts, I wanted to call out a few of my own favorite DMVs:

  • sys.dm_os_wait_stats – I want to know what SQL Server is waiting on, when there is a problem and when there isn’t.  If you’re not familiar with wait statistics, read Paul’s post, Wait statistics, or please tell me where it hurts (I still chuckle at that title).
  • sys.dm_exec_requests – When I want to see what’s executing currently, this is where I start.
  • sys.dm_os_waiting_tasks – In addition to the overall waits, I want to know what tasks are waiting right now (and the wait_type).
  • sys.dm_exec_query_stats – I usually join to other DMVs such as sys.dm_exec_sql_text to get additional information, but there’s some great stuff in here including execution count and resource usage.
  • sys.dm_exec_query_plan – Very often you just want to see the plan. This DMV has cached plans as well as those for queries that are currently executing.
  • sys.dm_db_stats_properties – I always take a look at statistics in new systems, and when there’s a performance issue, initially just to check when they were last updated and the sample size.  This DMF lets me do that quickly for a table, or entire database (only for SQL 2008R2 SP2 and SQL 2012 SP1).
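To show how a few of these fit together, here's a simple query I might run to see what's executing right now, along with wait information and the statement text (a sketch – adjust the session_id filter and columns to taste):

```sql
SELECT
    [r].[session_id],
    [r].[status],
    [r].[wait_type],
    [r].[wait_time],
    [t].[text] AS [statement_text]
FROM [sys].[dm_exec_requests] AS [r]
CROSS APPLY [sys].[dm_exec_sql_text]([r].[sql_handle]) AS [t]
WHERE [r].[session_id] > 50  -- user sessions typically start above 50
ORDER BY [r].[wait_time] DESC;
```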

Kimberly will dive into a few of her favorite DMVs in tomorrow’s post.

Wrap Up

All of the utilities mentioned in this post are available for free.  But it’s worth mentioning that there are tools you can purchase that provide much of the same functionality and more.  As an Accidental DBA, you may not always have a budget to cover the cost of these products, which is why it’s important to know what’s readily available.  And while the free tools may require more effort on your part, using them to dig into your data and figure out what’s really going on in your system is one of the best ways to learn about SQL Server and how it works.

The Accidental DBA (Day 18 of 30): Baselines


Baselines are a part of our normal, daily life.  It usually takes 25 minutes to get to work?  Baseline.  You need 7 hours of sleep each night to feel human and be productive?  Baseline.  Your weight is…  Ok, we won’t go there, but you get my point.  Your database server is no different, it has baselines as well.  As a DBA it’s critical that you know what they are and how to use them.

The why…

“But wait,” you say, “why do I need baselines for my server?  It’s always working so there’s no commute, it hopefully never sleeps, and its weight never changes (so unfair).”  You need them; trust me.  A baseline of your database server:

  • Helps you find what’s changed before it becomes a problem
  • Allows you to proactively tune your databases
  • Allows you to use historical information when troubleshooting a problem
  • Provides data to use for trending of the environment and data
  • Captures data – actual numbers – to provide to management, and both server and storage administrators, for resource and capacity planning

There are many viable reasons to capture baselines.  The challenge is the time it takes to figure out where to store the information, what to capture, and when to capture it.  You also need to create methods to report on it and really use that data.

Where to store your baseline data

You’re a DBA or developer, and you know T-SQL, so the most obvious place to store your baseline information is in a database.  This is your chance to not only exercise your database design skills, but put your DBA skills to work for your own database.  Beyond design, you also need space for the database, you need to schedule regular backups, and you also want to verify integrity and perform index and statistics maintenance regularly.  Most of the posts that have appeared in this Accidental DBA series are applicable to this database, just as they are to your production databases.

To get you started, here’s a CREATE DATABASE script that you can use to create a database to hold your baseline data (adjust file locations as necessary, and file size and growth settings as you see fit):

USE [master];
GO

CREATE DATABASE [BaselineData]
ON PRIMARY
( NAME = N'BaselineData',
  FILENAME = N'M:\UserDBs\BaselineData.mdf',
  SIZE = 512MB,
  FILEGROWTH = 512MB )
LOG ON
( NAME = N'BaselineData_log',
  FILENAME = N'M:\UserDBs\BaselineData_log.ldf',
  SIZE = 128MB,
  FILEGROWTH = 128MB );
GO


What to capture

Now that you have a place to store your data, you need to decide what information to collect.  It’s very easy to start capturing baseline data with SQL Server, particularly in version 2005 and higher.  DMVs and catalog views provide a plethora of information to accumulate and mine.  Windows Performance Monitor is a built-in utility used to log metrics related to not just SQL Server but also the resources it uses such as CPU, memory, and disk.  Finally, SQL Trace and Extended Events can capture real-time query activity, which can be saved to a file and reviewed later for analysis or comparison.

It’s easy to get overwhelmed with all the options available, so I recommend starting with one or two data points and then adding on over time.  Data file sizes are a great place to start.  Acquiring more space for a database isn’t always a quick operation; it really depends on how your IT department is organized – and also depends on your company having unused storage available.  As a DBA, you want to avoid the situation where your drives are full, and you also want to make sure your data files aren’t auto-growing.

With the statements below, you can create a simple table that will list each drive and the amount of free space, as well as the snapshot date:

USE [BaselineData];
GO

IF EXISTS ( SELECT [name]
     FROM    [sys].[tables]
     WHERE   [name] = N'FreeSpace' )
  DROP TABLE [dbo].[FreeSpace];
GO

CREATE TABLE [dbo].[FreeSpace] (
   [LogicalVolume] NVARCHAR(256),
   [MBAvailable] BIGINT,
   [CaptureDate] SMALLDATETIME );
GO

Then you can set up a SQL Agent job to capture the data at a regular interval with the query below:

INSERT INTO [dbo].[FreeSpace] ([LogicalVolume], [MBAvailable], [CaptureDate])
SELECT DISTINCT [vs].[volume_mount_point],
   ([vs].[available_bytes] / 1048576),
   GETDATE()
FROM [sys].[master_files] AS [f]
CROSS APPLY [sys].[dm_os_volume_stats]([f].[database_id],[f].[file_id]) AS [vs];

There is a catch with the above query – it’s only applicable if you’re running SQL Server 2008 R2 SP1 and higher (including SQL Server 2012).  If you’re using a previous version, you can use xp_fixeddrives to capture the data:

INSERT INTO [dbo].[FreeSpace] ([LogicalVolume], [MBAvailable])
EXEC xp_fixeddrives;

UPDATE [dbo].[FreeSpace]
SET [CaptureDate] = GETDATE()
WHERE [CaptureDate] IS NULL;

Capturing free space is a great start, but if you’ve pre-sized your database files (which is recommended) the free space value probably won’t change for quite a while.  Therefore, it’s a good idea to capture file sizes and the available space within them as well.  You can find scripts to capture this information in my Capturing Baselines on SQL Server: Where’s My Space? article.

When to capture

Deciding when you will collect data will depend on the data itself.  For the file and disk information, the data doesn’t change often enough that you need to collect it hourly.  Daily is sufficient – perhaps even weekly if the systems are low volume.  If you’re capturing Performance Monitor data, however, then you would collect at shorter intervals, perhaps every 1 minute or every 5 minutes.  For any data collection, you have to find the right balance between capturing often enough to accumulate the interesting data points, and not gathering so much data that it becomes unwieldy and hard to find what’s really of value.

Separate from the interval at which to capture, for some data you also need to consider the timeframes.  Performance Monitor is a great example.  You may decide to collect counters every 5 minutes, but then you have to determine whether you want to sample 24×7, only on weekdays, or only during business hours.   Or perhaps you only want to capture metrics during peak usage times.  When in doubt, start small.  While you can always change your collection interval and timeframe later on, it’s much easier to start small to avoid getting overwhelmed, rather than collect everything and then try to figure out what to remove.

Using baseline data

Once you’ve set up your process for data capture, what’s next?  It’s very easy to sit back and let the data accumulate, but you need to be proactive.  You won’t want to keep data forever, so put a job in place that will delete data after a specified time.  For the free space example above, it might make sense to add a clustered index on the [CaptureDate] column, and then purge data older than three months (or six months – how long you keep the data will depend on how you’re using it).
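For the free space table, that might look something like this (the index name and three-month retention period are just examples):

```sql
-- Support date-based purges (and queries) with a clustered index
CREATE CLUSTERED INDEX [CI_FreeSpace_CaptureDate]
    ON [dbo].[FreeSpace] ([CaptureDate]);

-- Scheduled cleanup: remove rows older than three months
DELETE FROM [dbo].[FreeSpace]
WHERE [CaptureDate] < DATEADD(MONTH, -3, GETDATE());
```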

Finally, you need to use that data in some way.  You can simply report on it – the query below will give you free disk information for a selected volume for the past 30 days:

SELECT [LogicalVolume], [MBAvailable], [CaptureDate]
FROM [dbo].[FreeSpace]
WHERE [LogicalVolume] = 'C'
   AND [CaptureDate] > GETDATE() - 30
ORDER BY [CaptureDate];

This type of query is great for trending and analysis, but to take full advantage of the data, add a second step to your daily Agent job that queries the current day’s values and, if there is less than 10GB of free space, sends you an email to notify you that disk space is low.
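Here's a sketch of what that second job step could look like – the Database Mail profile and recipient are placeholders, and Database Mail must already be configured:

```sql
IF EXISTS ( SELECT 1
            FROM [dbo].[FreeSpace]
            WHERE [CaptureDate] > DATEADD(DAY, -1, GETDATE())
              AND [MBAvailable] < 10240 )  -- less than 10GB free
BEGIN
    EXEC [msdb].[dbo].[sp_send_dbmail]
        @profile_name = N'DBA Mail',
        @recipients = N'dba@yourcompany.com',
        @subject = N'Low disk space warning',
        @body = N'A volume has less than 10GB of free space.';
END
```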

Getting started

At this point you should have a basic understanding of baselines, and you have a few queries to get you started.  If you want to learn more you can peruse my Baselines series on, and for an in-depth review, you can head over to Pluralsight to view my SQL Server: Benchmarking and Baselines course.  Once you’ve set up your baselines, you will be ready to explore quick methods to review or process the data.  There are many free tools that a DBA can use to not only see what happens in real-time, but also review captured data for analysis and trending.  In tomorrow’s post, we’ll look at a few of those utilities in more detail.

The Accidental DBA (Day 13 of 30): Consistency Checking


If you’ve been following along with our Accidental DBA series, you’ll know that the posts for the last week covered topics related to one of the most important tasks (if not the most important) for a DBA: backups.  I consider consistency checks, often referred to as CHECKDB, as one of the next most important tasks for a DBA.  And if you’ve been a DBA for a while, and if you know how much I love statistics, you might wonder why fragmentation and statistics take third place.  Well, I can fix fragmentation and out-of-date/inaccurate statistics at any point.  I can’t always “fix” corruption.  But let’s take a step back and start at the beginning.

What are consistency checks?

A consistency check in SQL Server verifies the logical and physical integrity of the objects in a database. A check of the entire database is accomplished with the DBCC CHECKDB command, but there are other variations that can be used to selectively check objects in the database: DBCC CHECKALLOC, DBCC CHECKCATALOG, DBCC CHECKTABLE and DBCC CHECKFILEGROUP. Each command performs a specific set of validation commands, and it’s easy to think that in order to perform a complete check of the database you need to execute all of them. This is not correct.

When you execute CHECKDB, it runs CHECKALLOC, CHECKTABLE for every table and view (system and user) in the database, and CHECKCATALOG. It also includes some additional checks, such as those for Service Broker, which do not exist in any other command. CHECKDB is the most comprehensive check and is the easiest way to verify the integrity of the database in one shot. You can read an in-depth description of what it does from Paul, its author, here.

CHECKFILEGROUP runs CHECKALLOC and then CHECKTABLE for every table in the specified filegroup. If you have a VLDB (Very Large DataBase) you may opt to run CHECKFILEGROUP for different filegroups on different days, and run CHECKCATALOG another day, to break up the work.
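As a sketch, a weekly schedule for a VLDB might spread the work out like this (the filegroup names are hypothetical, and the commands run in the context of that database):

```sql
-- Sunday:
DBCC CHECKFILEGROUP (N'PRIMARY') WITH NO_INFOMSGS;
-- Monday:
DBCC CHECKFILEGROUP (N'FG_Sales') WITH NO_INFOMSGS;
-- Tuesday:
DBCC CHECKCATALOG;
```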

How often should I run Consistency Checks?

If you can run a consistency check every day for your database, I recommend that you do so. But it’s quite common that a daily execution of CHECKDB doesn’t fit into your maintenance window – see Paul’s post on how often most people do run checks. In that case, I recommend you run your checks once a week. And if CHECKDB for your entire database doesn’t complete in your weekly maintenance window, then you have to figure out what’s possible within the time-frame available. I mentioned VLDBs earlier, and Paul has a nice post on options for breaking up checks for large databases. You will have to figure out what works best for your system – there isn’t a one-size-fits-all solution. You may need to get creative, which is one of the fun aspects of being a DBA. But don’t avoid running consistency checks simply because you have a large database or a small maintenance window.

Why do I need to run consistency checks?

Consistency checks are critical because hardware fails and accidents happen. The majority of database corruption occurs because of issues with the I/O subsystem, as Paul mentions here. Most of the time these are events that are out of your control, and all you can do is be prepared. If you haven’t experienced database corruption yet in your career, consider yourself lucky, but don’t think you’re exempt. It’s much more common than many DBAs realize, and you should expect that it’s going to occur in one of your databases, on a day that you have meetings booked back-to-back, need to leave early, and while every other DBA is on vacation.

What if I find corruption?

If you encounter database corruption, the first thing to do is run DBCC CHECKDB and let it finish. Realize that a DBCC command isn’t the only way to find corruption – if a page checksum comes up as invalid as part of a normal operation, SQL Server will generate an error. If a page cannot be read from disk, SQL Server will generate an error. However it’s encountered, make sure that CHECKDB has completed and once you have the output from it, start to analyze it (it’s a good idea to save a copy of the output). Output from CHECKDB is not immediately intuitive. If you need help reviewing it, post to one of the MSDN or StackOverflow forums, or use the #sqlhelp hashtag on Twitter.
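When you do run that check, a couple of options keep the output complete and readable (the database name is a placeholder):

```sql
-- NO_INFOMSGS suppresses informational messages; ALL_ERRORMSGS ensures
-- the list of errors reported is not truncated
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```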

Understand exactly what you’re facing in terms of corruption before you take your next step, which is deciding whether you’re going to run repair or restore from backup. This decision depends on numerous factors, and this is where your disaster recovery run-book comes into play. Two important considerations are how much data you might lose (and CHECKDB won’t tell you what data you will lose if you run repair, you’ll have to go back and try to figure that out afterwards) and how long the system will be unavailable – either during repair or restore. This is not an easy decision. If you decide to repair, make certain you take a full backup of the database first. You always want a copy of the database, just in case. I would also recommend that if you decide to run repair, run it against a copy of the database first, so you can see what it does. This may also help you understand how much data you would lose. Finally, after you’ve either run repair or restored from backup, run CHECKDB again. You need to confirm that the database no longer has integrity issues.

Please understand that I have greatly simplified the steps to go through if you find corruption. For a deeper understanding of what you need to consider when you find corruption, and options for recovering, I recommend a session that Paul did a few years ago on Corruption Survival Techniques, as what he discussed still holds true today.


There are two additional DBCC validation commands: DBCC CHECKIDENT and DBCC CHECKCONSTRAINTS. These commands are not part of the normal check process. I blogged about CHECKIDENT here, and you use this command to check and re-seed values for an identity column. CHECKCONSTRAINTS is a command to verify that data in a column or table adheres to the defined constraints. This command should be run any time you run CHECKDB with the REPAIR_ALLOW_DATA_LOSS option. Repair in DBCC will fix corruption, but it doesn’t take constraints into consideration; it just alters data structures as needed so that data can be read and modified. As such, after running repair, constraint violations can exist, and you need to run CHECKCONSTRAINTS for the entire database to find them.
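Checking every constraint in the database after a repair is a one-liner:

```sql
-- Run in the context of the repaired database; ALL_CONSTRAINTS includes
-- disabled and untrusted constraints in the check
DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS;
```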

What’s next?

We’ve only scratched the surface of consistency checking. This is a topic worthy of hours of discussion – not just in the how and why, but also what to do when corruption exists. If you plan on attending our Immersion Event for the Accidental DBA, and want to get a jump on the material, I recommend reading through the posts to which I’ve linked throughout, and also going through Paul’s CHECKDB From Every Angle blog category, starting with the oldest post and working your way forward. Hopefully your experience with database corruption will be limited to testing and what you hear about from colleagues…but don’t bet on it :)

The Nuance of DBCC CHECKIDENT That Drives Me Crazy

When I put together my DBCC presentation a couple years ago I created a demo for the CHECKIDENT command.  I had used it a few times and figured it was a pretty straight-forward command.  In truth, it is, but there is one thing that I don’t find intuitive about it.  And maybe I’m the only one, but just in case, I figured I’d write a quick post about it.

CHECKIDENT is used to check the current value for an identity column in a table, and it can also be used to change the identity value.  The syntax is:

DBCC CHECKIDENT ( table_name
     [, { NORESEED | { RESEED [, new_reseed_value ] } } ] )

To see it in action, let’s connect to a copy of the AdventureWorks2012 database and run it against the SalesOrderHeader table:

USE [AdventureWorks2012];

DBCC CHECKIDENT ('Sales.SalesOrderHeader');

In the output we get:

Checking identity information: current identity value '75123', current column value '75123'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Hooray, seems pretty basic, right?  Well, did you know that running the command as I did above can change the identity seed if the identity value and column value do not match?  This is what I meant initially when I said it wasn’t intuitive.  I didn’t include any options with the command, therefore I do not expect it to make any changes.  In fact, you have to include an option to ensure you do not make a change.  Let’s take a look.

First we’ll create a table with an identity column and populate it with 1000 rows:

USE [AdventureWorks2012];

CREATE TABLE [dbo].[identity_test] (
   id INT IDENTITY (1,1),
   info VARCHAR(10));


INSERT INTO [dbo].[identity_test] ([info])
   VALUES ('test data');
GO 1000

Now we’ll run CHECKIDENT, as we did above:

DBCC CHECKIDENT ('dbo.identity_test');

Checking identity information: current identity value '1000', current column value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Our results are what we expect.  Now let’s reseed the identity value down to 10:

DBCC CHECKIDENT ('dbo.identity_test', RESEED, 10);

Checking identity information: current identity value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

The output doesn’t tell us specifically that the identity has been reseeded, so we’ll run CHECKIDENT again, but this time with the NORESEED option (different than what we ran initially):

DBCC CHECKIDENT ('dbo.identity_test', NORESEED);

Checking identity information: current identity value '10', current column value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Now we can see that the identity value and the current column value are different, and because we included the NORESEED option, nothing happened.  And this is my point: if you do not include the NORESEED option and the identity and column values do not match, the identity will reseed:

--first execution
DBCC CHECKIDENT ('dbo.identity_test');
PRINT ('first execution done');

--second execution
DBCC CHECKIDENT ('dbo.identity_test');
PRINT ('second execution done');

Checking identity information: current identity value '10', current column value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
first execution done

Checking identity information: current identity value '1000', current column value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
second execution done

So just in case I’m not the only one for whom this isn’t obvious: Make sure to include the NORESEED option when running DBCC CHECKIDENT.  Most of the time, the identity value probably matches the value for the column.  But the one time it doesn’t, you may not want to reseed it…at least not right away.
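To make that safer workflow concrete, here’s a minimal sketch (assuming the dbo.identity_test table from the example above): check the values with NORESEED first, and only reseed deliberately, to the column’s current maximum:

```sql
-- Check first: NORESEED reports the values without changing anything.
DBCC CHECKIDENT ('dbo.identity_test', NORESEED);

-- Then, only once you have decided the identity should be corrected,
-- reseed it explicitly to the current maximum column value.
DECLARE @maxid INT;
SELECT @maxid = MAX([id]) FROM [dbo].[identity_test];
DBCC CHECKIDENT ('dbo.identity_test', RESEED, @maxid);
```

This way the reseed is always an explicit decision, never a side effect of a check.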

SQL Server Training and Conferences for the Fall

There has been a lot of conversation this week in the Twitterverse related to training and conferences in the SQL Server community.  I wanted to share some details and my own thoughts related to a few specific events in which I am involved (and it’s all very exciting!).


First, Paul announced a new IE event that will kick off at the end of September: IE0: Immersion Event for the Accidental DBA.  I am thrilled to be an instructor for this course, and I’m really looking forward to teaching with Jonathan.  I worked with so many Accidental DBAs in my previous job – people who were application administrators and also had to manage the application database.  We had a fairly general class that talked about databases, and we ended up tweaking that content to create a class solely focused on teaching those application administrators what they needed to do to support their SQL Server database.  In the beginning it was a half-day class, but we kept coming up with more content we wanted to cover, and had expanded the training to a full day before I left.  How happy am I that Jon and I now have three days to help SQL Server application administrators, Accidental DBAs, and Junior DBAs learn the basics?!

If you’re interested in attending our class, or know someone who might like to attend, please check out the syllabus and registration page.  And if you have any questions about the course, please do not hesitate to contact me or Jon!


Second, I am speaking at the SQLintersection conference in Las Vegas this fall.  Kimberly blogged about it on Monday, and you can see the entire lineup of sessions here.  I’ll be presenting three sessions:

  • Making the Leap From Profiler to Extended Events
  • Free Tools for More Free Time
  • Key Considerations for Better Schema Design

SQLintersection is a unique conference because it pairs with DEVintersection and SharePointintersection, and attendees have access to sessions across multiple Windows technologies.  I have more detail about my Extended Events session below, and the Free Tools session will cover usage scenarios for some of the applications I’ve discussed before in my Baselines sessions (e.g., PAL and RML Utilities).  The last session, on schema design, is geared toward developers – but is also appropriate for DBAs – and I have a lot of great ideas for the content, as I’ve just finished recording my next Pluralsight course, Developing and Deploying SQL Server ISV Applications, which should go live next week!

And finally, I will be speaking at the PASS Summit this October in Charlotte, NC!  I am very honored to have had the following session selected:

Making the Leap From Profiler to Extended Events

You know how you discover something wonderful and you want everyone you meet to try it?  That’s this session.  I had my light bulb moment with Extended Events and believe that everyone else should use it too.  But I get that there’s some hesitation, for a whole host of reasons, so I created this session to help people understand Extended Events better, using what they already know about Profiler and SQL Trace.  Change is hard, I get that, and people have used Profiler and Trace for years…over a decade in some cases!  But both are deprecated as of SQL Server 2012, and Extended Events is here to stay.  You need to learn XEvents not just because it’s what you’ll use for tracing going forward, but also because it can help you troubleshoot issues in ways you’ve never been able to before.
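As a small taste of what making that leap looks like, here’s a minimal sketch of an Extended Events session that captures completed batches, roughly the equivalent of Profiler’s SQL:BatchCompleted event (the session name and file path here are just examples, not from the session itself):

```sql
-- Create an event session that writes completed batches to a file target.
CREATE EVENT SESSION [CaptureBatches] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.database_name))
ADD TARGET package0.event_file
    (SET filename = N'C:\Temp\CaptureBatches.xel');

-- Start the session; STATE = STOP ends the capture when you're done.
ALTER EVENT SESSION [CaptureBatches] ON SERVER STATE = START;
```

The concepts map cleanly from Trace: events are still events, the ACTION clause plays the role of Trace columns, and the file target replaces the trace file.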

I will also be part of a panel discussion:

How to Avoid Living at Work: Lessons from Working at Home

When I joined SQLskills last summer and started working from home, I had to make significant adjustments.  Some days, working at home was just as challenging as the work itself.  But 10 months in, I can’t imagine not working at home.  I’m really looking forward to being able to share my experiences, and also hear what my rock star colleagues have learned.  If you’re thinking of working from home, or even if you currently work from home, please join me, Tom LaRock, Aaron Bertrand, Andy Leonard, Steve Jones, Grant Fritchey, Karen Lopez, and Kevin Kline for what I’m sure will be an invaluable and engaging discussion.

Whew!  It’s going to be a busy fall filled with SQL Server events, but I wouldn’t have it any other way.  I am very much looking forward to all of these events – and I hope to see you at one of them!