SQLskills SQL101: Dealing with SQL Server corruption

As Kimberly mentioned last week, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

For my first SQL101 post, I’d like to touch on a subject that has been core to my work since I graduated in 1994: dealing with corruption. You may not know that before joining the SQL Server engineering team at Microsoft in early 1999, I worked for the file system group at DEC (Digital Equipment), where among other things I was responsible for the VMS equivalent of the Windows chkdsk (called ANAL/DISK). It was this expertise with corruption and repairing it that led me to work on DBCC, rewriting much of the DBCC CHECKDB check and repair code for SQL Server 2005.

All through my professional career I’ve seen people make mistakes when they encounter corruption, so here I’d like to offer some quick guidelines for how to approach SQL Server corruption.

Don’t panic

When corruption appears, it can be scary. Suddenly your main database has all these errors and you don’t know what to do. The absolute best thing you can do is to keep calm and make rational decisions about how to proceed. If you have a knee-jerk reaction, jump to conclusions, or let someone pressure you into making a snap decision, the odds are you’ll make a mistake and make the situation worse.

Make use of the run book

Check to see if your team or department has a disaster recovery handbook (often called a run book). This should give you useful information like:

  • How to access the backups
  • How to access Windows and SQL Server installation media and product keys
  • Who to call in various other departments for assistance with infrastructure
  • Who to call for help in your department
  • Who to notify of the problem (think CIO, CTO, I.T. Director)
  • How to proceed with various scenarios (e.g. restoring the main production database, or performing a bare-metal install of a new server)

Your run book might say to immediately fail over to a synchronous Availability Group replica, or some other redundant copy of the database, no matter what the problem is and then figure out the problem on the main production database afterwards. If that’s the case, that’s what you do.

And if you’re reading this and thinking ‘Hmm – we don’t have one of those…’, then that’s a big problem that should be addressed, along with making sure that even the most junior DBA can follow the various procedures in it.

Consult my comprehensive flow chart

A few years ago I wrote a large flow chart for SQL Server Magazine, and it’s available in PDF poster form here (archived on a friend’s blog).

This can also form the basis of a run book if you don’t have one.

Understand the extent of the corruption

It is a very good idea to run DBCC CHECKDB on the database (if you haven’t already) to determine the extent of the corruption.
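A minimal check might look like the following (the database name is a placeholder); NO_INFOMSGS and ALL_ERRORMSGS keep the output focused on just the corruption errors:

```sql
-- Report every corruption error and suppress informational messages
-- ('ProductionDB' is a placeholder name)
DBCC CHECKDB (N'ProductionDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
```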

Depending on where the corruption is, you may be able to restore in a lot less time than restoring the entire database. For instance, if only a single page is damaged, you might be able to do a single-page restore. If only a single filegroup is damaged, you might be able to do a single filegroup restore.
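As a sketch, a single-page restore sequence looks something like the following (the database name, page ID, and backup paths are all hypothetical; the damaged page ID comes from the DBCC CHECKDB errors, and the database must be using the FULL or BULK_LOGGED recovery model):

```sql
-- Restore just the damaged page from the most recent full backup
RESTORE DATABASE [ProductionDB]
    PAGE = '1:4567'   -- file_id:page_id from the CHECKDB errors
    FROM DISK = N'D:\Backups\ProductionDB_Full.bak'
    WITH NORECOVERY;

-- Apply subsequent log backups to bring the page up to date
RESTORE LOG [ProductionDB]
    FROM DISK = N'D:\Backups\ProductionDB_Log1.trn'
    WITH NORECOVERY;

-- Back up and restore the tail of the log to complete the sequence
BACKUP LOG [ProductionDB] TO DISK = N'D:\Backups\ProductionDB_Tail.trn';
RESTORE LOG [ProductionDB]
    FROM DISK = N'D:\Backups\ProductionDB_Tail.trn'
    WITH RECOVERY;
```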

Depending on what the corruption is, you may not even have to restore. For instance, if the corruption is confined to nonclustered indexes (all the corruption messages list index IDs higher than 1), you can rebuild the corrupt indexes manually with code like the following:

BEGIN TRANSACTION;
GO

-- Disabling the nonclustered index deallocates its corrupt pages
ALTER INDEX CorruptIndexName ON TableName DISABLE;
GO
-- The rebuild creates a clean copy of the index
-- (ONLINE = ON requires Enterprise Edition)
ALTER INDEX CorruptIndexName ON TableName REBUILD WITH (ONLINE = ON);
GO

COMMIT TRANSACTION;
GO

That means you don’t have to restore or use repair, both of which incur downtime.

Consider the ramifications of the actions you’re planning

If you’ve never dealt with corruption before and you’re not an experienced DBA, there are actions that might be tempting that could cause you bigger headaches than just having corruption.

Some examples:

  • If you have a corrupt database, don’t try to detach it from the instance as you likely won’t be able to attach it again because of the corruption. This especially applies if the database is marked as SUSPECT. If you ever have this scenario, you can reattach the database using the steps in my post Disaster recovery 101: hack-attach a damaged database.
  • If your SQL Server instance is damaged, and the database is corrupt, don’t try to attach it to a newer version of SQL Server, as the upgrade might fail and leave the database in a state where it can’t be attached to either the old or new versions of SQL Server.
  • If crash recovery is running, don’t ever be tempted to shut down SQL Server and delete the log file. That is guaranteed to cause at least data inconsistencies and at worst corruption. Crash recovery can sometimes take a long time, depending on how long the transactions that were open at the time of the crash take to roll back.

If you’re planning or have been told to do something, make sure you understand what the ramifications of that thing are.

Don’t just jump to repair

The repair option is called REPAIR_ALLOW_DATA_LOSS because you’re telling DBCC CHECKDB that it can lose data to perform repairs. The repairs (with a few exceptions) are written as ‘delete what’s broken and fix up all the links’. That’s because that’s usually the only way to write a repair algorithm for a certain corruption that fixes it in 100% of cases without making things worse. After running repair, you will likely have lost some data, and DBCC CHECKDB can’t tell you what it was. You really don’t want to run repair if you can avoid it.

Also, there are some cases of corruption that absolutely cannot be repaired (like corrupt table metadata) so then you *have* to have backups or a redundant copy to use.

There is a last resort that we made a documented feature back in SQL Server 2005 – EMERGENCY-mode repair – for when the transaction log is damaged. That will try to get as much data out of the transaction log as possible and then run a regular repair. Although that may get the database back online, you’ll likely have data loss and data inconsistencies. It really is a last resort, and it’s not infallible either.
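For reference, the EMERGENCY-mode repair sequence is along these lines (the database name is a placeholder, and again, this is only for when you have no backups or redundant copy):

```sql
-- Absolute last resort: data loss and inconsistencies are likely
ALTER DATABASE [ProductionDB] SET EMERGENCY;
GO
ALTER DATABASE [ProductionDB] SET SINGLE_USER;
GO
DBCC CHECKDB (N'ProductionDB', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;
GO
ALTER DATABASE [ProductionDB] SET MULTI_USER;
GO
```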

You really want to have backups to use or a redundant copy to fail over to instead.

But if you *have* to use repair, try to do it on a copy of the corrupt database. And then go fix your backup strategy so you aren’t forced to use repair again in future.

Be very careful with 3rd-party tools

There are some 3rd-party tools that will try to do repairs or extract data out. I’ve seen them work sometimes and I’ve seen them spectacularly fail and totally trash a database at other times. If you’re going to try one of these out, do it on a copy of the corrupt database in case something goes wrong.

Ask for help (but be careful)

If you don’t know what to do and you’re concerned that you’ll make things worse or make a wrong decision, try asking for help. For free, you could try using the #sqlhelp hashtag on Twitter, or posting to a forum like http://dba.stackexchange.com/ or the forums at https://www.sqlservercentral.com/Forums/. Sometimes I’ll have time to respond to a quick email giving some direction, and sometimes I’ll recommend that you get some consulting help to work on data recovery.

You can also call Microsoft Customer Support for assistance, but you’ll always need to pay for that unless the source of the corruption turns out to be a SQL Server bug.

Wherever you get the help from though, make sure the advice seems sound and that you can verify it against well-known and reputable sources.

Do root cause analysis

After you’ve recovered from the corruption, try to figure out why it happened in the first place as the odds are that it will happen again. The overwhelmingly vast majority of corruptions are caused by the I/O subsystem (including all the software under SQL Server), with a very small percentage being caused by memory chip problems, and a smaller percentage being caused by SQL Server bugs. Look in the SQL Server error log, Windows event logs, ask the Storage Admin if anything happened, and so on.
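As part of that investigation, the msdb.dbo.suspect_pages table records pages where SQL Server has encountered checksum, torn-page, and similar errors, which can help correlate the corruption with I/O subsystem events:

```sql
-- Pages SQL Server has flagged as damaged (event_type 1-3 are errors;
-- higher values record restores, repairs, and deallocations)
SELECT [database_id], [file_id], [page_id],
    [event_type], [error_count], [last_update_date]
FROM [msdb].[dbo].[suspect_pages];
GO
```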

Practice and research

It’s a *really* good idea to practice recovering from corruption before you have to do it for real. You’ll be more comfortable with the procedures involved and you’ll be more confident. I have some corrupt databases that you can download and practice with here.

There’s also a lot of instructional information on my blog, and there are two Pluralsight online training courses I’ve recorded which will give you an enormous boost in practical knowledge.

Summary

Ok – so it turned out to not be quite as quick as I thought! However, this is all 101-level information that will help you work through a corruption problem or exercise. I’ll be blogging a lot more of these 101-level posts this year. If there’s anything in particular you’d like to see us cover at that level, please leave a comment.

Enjoy!

Who is overriding MAXDOP 1 on the instance?

In Monday’s newsletter I discussed an email question I’d been sent and promised to blog about it.

The question was “how can latch waits for ACCESS_METHODS_DATASET_PARENT occur when the instance MAXDOP is set to 1?” This shouldn’t happen, as that latch is only used for coordination between threads performing a parallel table/index scan. However, just because the instance MAXDOP is set to 1, that doesn’t mean no parallelism can occur.

Anyone can override the instance MAXDOP setting using a query hint.

There are two ways to prevent this:

  • Set the instance cost threshold for parallelism setting extremely high, so no query plan is deemed expensive enough to run in parallel
  • Make everyone run within a Resource Governor (Enterprise Edition only) workload group that has the MAX_DOP set to 1

Or you can figure out where the query hint is being used and remove it.
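As a sketch, the Resource Governor approach looks something like this (the group and function names are placeholders; the classifier function must live in master, and Resource Governor requires Enterprise Edition):

```sql
USE [master];
GO
-- Workload group capping DOP at 1 for all classified sessions
CREATE WORKLOAD GROUP [LimitDOP] WITH (MAX_DOP = 1);
GO
-- Classifier function sending every user session to that group
CREATE FUNCTION [dbo].[RGClassifier] ()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN N'LimitDOP';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = [dbo].[RGClassifier]);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```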

In this post I’m going to show you a simple Extended Events session that will capture information about what’s causing CXPACKET waits (you can’t have ACCESS_METHODS_DATASET_PARENT latch waits without CXPACKET waits happening too) and then refine it to use a less expensive event.

First off I’ll set my MAXDOP to 1:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
GO

EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE WITH OVERRIDE;
GO

SELECT [name], [value_in_use] FROM sys.configurations WHERE [name] LIKE '%degree%';
GO
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'max degree of parallelism' changed from 0 to 1. Run the RECONFIGURE statement to install.
name                       value_in_use
-------------------------- -------------
max degree of parallelism  1

Then I’ll check for CXPACKET waits (using my waits query) after running the following query, that scans a 6.7 million row table (you can get the SalesDB database from here):

SELECT * FROM [SalesDB].[dbo].[Sales] WHERE [Quantity] > 10000;
GO

No CXPACKET waits.

But if I add a MAXDOP query hint and then check for waits:

SELECT * FROM [SalesDB].[dbo].[Sales] WHERE [Quantity] > 10000 OPTION (MAXDOP 8);
GO
WaitType            Wait_S  Resource_S Signal_S WaitCount Percentage AvgWait_S AvgRes_S AvgSig_S Help/Info URL
------------------- ------- ---------- -------- --------- ---------- --------- -------- -------- ---------------------------------------------
LATCH_EX            0.20    0.16       0.05     93        79.61      0.0022    0.0017   0.0005   https://www.sqlskills.com/help/waits/LATCH_EX
CXPACKET            0.05    0.05       0.00     16        20.00      0.0032    0.0032   0.0000   https://www.sqlskills.com/help/waits/CXPACKET

And the instance MAXDOP was successfully overridden.

Now I’ll set up a simple Extended Events session to track down the offending code (based on the query from here). It’s very important that you query the sys.dm_xe_map_values DMV to find the correct number to use in the query for the CXPACKET wait, as these numbers often change from release to release, and even in Service Packs. For instance, CXPACKET was 191 in SQL Server 2014 RTM, but is 190 in the 2014 build I’m using.
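You can look up the correct map number for your build with a query like this:

```sql
-- Find the wait_type map number for CXPACKET in this build
SELECT [map_key], [map_value]
FROM sys.dm_xe_map_values
WHERE [name] = N'wait_types'
    AND [map_value] = N'CXPACKET';
GO
```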

Be very careful about running this in production, as the event will fire for *every* wait that occurs and so will likely affect your workload throughput, even though it’ll short-circuit if the wait isn’t CXPACKET. I’ll show you a better event to use lower down.

IF EXISTS (
    SELECT * FROM sys.server_event_sessions
        WHERE [name] = N'InvestigateWaits')
    DROP EVENT SESSION [InvestigateWaits] ON SERVER
GO

CREATE EVENT SESSION [InvestigateWaits] ON SERVER
ADD EVENT [sqlos].[wait_info]
(
    ACTION (
        sqlserver.client_hostname,
        sqlserver.nt_username,
        sqlserver.sql_text)
    WHERE [wait_type] = 190 -- CXPACKET only
    AND [opcode] = 1 -- Just the end wait events
)
ADD TARGET [package0].[ring_buffer]
WITH
(
    MAX_MEMORY = 50 MB,
    MAX_DISPATCH_LATENCY = 5 SECONDS)
GO

ALTER EVENT SESSION [InvestigateWaits] ON SERVER STATE = START;
GO

And then when I run the select statement again I can look in the ring buffer and see the events. I put in a DISTINCT to minimize the number of lines of output. The code is:

SELECT 
    DISTINCT ([data1].[value] ('(./@timestamp)[1]', 'datetime')) AS [Time],
    [data1].[value] ('(./action[@name="client_hostname"]/value)[1]', 'VARCHAR(MAX)') AS [Host],
    [data1].[value] ('(./action[@name="nt_username"]/value)[1]', 'VARCHAR(MAX)') AS [User],
    [data1].[value] ('(./action[@name="sql_text"]/value)[1]','VARCHAR(MAX)') AS [Statement]
FROM (
    SELECT CONVERT (XML, [target_data]) AS data
    FROM sys.dm_xe_session_targets [xst]
    INNER JOIN sys.dm_xe_sessions [xs]
        ON [xst].[event_session_address] = [xs].[address]
    WHERE [xs].[name] = N'InvestigateWaits') AS t
CROSS APPLY data.nodes('//event') n (data1);
GO
Time                    Host       User            Statement
----------------------- ---------- --------------- ---------------------------------------------------------------------------------
2017-02-16 00:20:16.937 APPLECROSS APPLECROSS\Paul SELECT * FROM [SalesDB].[dbo].[Sales] WHERE [Quantity] > 10000 OPTION (MAXDOP 8);
2017-02-16 00:20:16.987 APPLECROSS APPLECROSS\Paul SELECT * FROM [SalesDB].[dbo].[Sales] WHERE [Quantity] > 10000 OPTION (MAXDOP 8);
2017-02-16 00:20:16.937 APPLECROSS                 SELECT * FROM [SalesDB].[dbo].[Sales] WHERE [Quantity] > 10000 OPTION (MAXDOP 8);

Cool – so that works, but the session will likely affect workload performance, as I mentioned above. A better event to use is degree_of_parallelism, which was introduced in SQL Server 2012 and only fires once per batch execution, rather than once for every wait that occurs.

The updated event session is:

CREATE EVENT SESSION [InvestigateWaits] ON SERVER
ADD EVENT [sqlserver].[degree_of_parallelism]
(
    ACTION (
        sqlserver.client_hostname,
        sqlserver.nt_username,
        sqlserver.sql_text)
    WHERE [dop] > 1 -- parallel plans only
)
ADD TARGET [package0].[ring_buffer]
WITH
(
    MAX_MEMORY = 50 MB,
    MAX_DISPATCH_LATENCY = 5 SECONDS)
GO

And the code to parse the XML, and sample output from my query is:

SELECT 
    [data1].[value] ('(./@timestamp)[1]', 'datetime') AS [Time],
    [data1].[value] ('(./data[@name="dop"]/value)[1]', 'INT') AS [DOP],
    [data1].[value] ('(./action[@name="client_hostname"]/value)[1]', 'VARCHAR(MAX)') AS [Host],
    [data1].[value] ('(./action[@name="nt_username"]/value)[1]', 'VARCHAR(MAX)') AS [User],
    [data1].[value] ('(./action[@name="sql_text"]/value)[1]','VARCHAR(MAX)') AS [Statement]
FROM (
    SELECT CONVERT (XML, [target_data]) AS data
    FROM sys.dm_xe_session_targets [xst]
    INNER JOIN sys.dm_xe_sessions [xs]
        ON [xst].[event_session_address] = [xs].[address]
    WHERE [xs].[name] = N'InvestigateWaits') AS t
CROSS APPLY data.nodes('//event') n (data1);
GO
Time                    DOP Host       User            Statement
----------------------- --- ---------- --------------- ---------------------------------------------------------------------------------
2017-02-16 00:36:37.347 8   APPLECROSS APPLECROSS\Paul SELECT * FROM [SalesDB].[dbo].[Sales] WHERE [Quantity] > 10000 OPTION (MAXDOP 8);

Pretty cool, eh?

You can mess around with more complex code that does things like grabbing the tsql_stack action and parsing it out, or capturing more information to identify the culprit, like the application name – however complex you want to get – but now you know the base event session to capture when the query hint is being used.

Enjoy!

Presenting at SQLSaturday Dublin in June

 SQLSaturday #620 - Dublin 2017

Although we won’t be presenting any Immersion Events in Europe in 2017, Kimberly and I will both be presenting workshops and sessions at SQLSaturday Dublin in June – our first SQLSaturday ever!

 

My workshop details:

  • Performance Troubleshooting Using Waits and Latches
  • One of the first things you should check when investigating performance issues are wait and latch statistics, as these can often point you in the right direction for further analysis. Unfortunately many people misinterpret what SQL Server is telling them and jump to conclusions about how to solve the problem – what is often called ‘knee-jerk performance tuning’.
    In this full-day workshop, you’ll learn how to investigate and interpret wait and latch statistics – practical techniques you can take home and start using immediately. You’ll also learn what a myriad of wait and latch types actually mean and how you can investigate lightly-documented types to gain insight into what causes them. Don’t waste time when it comes to performance troubleshooting; wait statistics are the fastest route to understanding your problems and this workshop will help you get there faster.
  • Thursday, June 15th
  • Eventbrite registration link

Kimberly’s workshop details:

  • Queries Gone Wrong: Statistics, Cardinality, Solutions
  • Have you ever wondered why SQL Server did what it did when processing your query? Have you wondered if it could have done better? Query estimates/statistics are often the key to understanding; this session will explain the what, why, and how of estimates!
    Most of the time SQL Server returns data quickly – except when it doesn’t. Ultimately, what you see in the plan just doesn’t seem to make sense. Why? Where did it come up with this plan? From one side, Transact-SQL is a declarative language that details what data you need, but without information about how SQL Server should get it. Join order, predicate analysis – this is what SQL Server has to decide based on your query’s input. But what should be processed first? Which table should be used to “drive” the join? Usually, it’s the table with the smallest number of rows that match your query’s predicates. But how does SQL Server know which has the smallest set before it processes the data? Statistics!
    There are numerous reasons why query performance can suffer and in this full-day workshop, Kimberly will cover a number of critical areas and for each – show you the behaviors, execution plans, troubleshooting techniques, and most importantly, possible solutions. This full-day workshop is about solving your query performance problems. Each problem has a different way of approaching it and you’ll walk away with a plethora of strategies to troubleshoot and tackle even gnarly query problems. Stop with the “sledgehammer” approaches (updating statistics, rebuilding indexes, recompiling plans, clearing cache, restarting SQL Server) and solve the problem. In this full-day workshop, you’ll learn much more finessed ways to solve query plan quality problems.
    Topics covered include understanding / maintaining statistics, handing VLTs / skewed data, distribution problems, troubleshooting common and advanced scenarios, and how to best utilize the cardinality estimation models (and trace flags) available in SQL Server versions 2008-2016.
  • Friday, June 16th
  • Eventbrite registration link

My Saturday session details:

  • Advanced Data Recovery Techniques
  • Disasters happen – plain and simple. When disaster strikes a database you’re responsible for, and backups and repair fail, how can you salvage data, and possibly your company and your job? This is where advanced data recovery techniques come in. Using undocumented tools and deep knowledge of database structures, you can manually patch up the database enough to extract critical data. This demo-heavy session will show you never-seen-before methods I’ve used extensively in the last year to salvage data for real-life clients after catastrophic corruption. You won’t believe what it’s possible to do!

Kimberly’s Saturday session details:

  • Plan Cache Pollution: Dealing with Ad Hoc Madness
  • How you execute your data requests can have a profound effect on performance and plan reuse. Did you ever wonder where that estimate (in the showplan) comes from? Is it correct or, is it way off? Why? You’ve probably heard that many of your problems are related to statistics. Potentially, you’ve even rebuilt statistics only to find that it fixes the problem. However, what you might be seeing is a false positive. And, it doesn’t always work. Come to this session to find out how you should be executing your statements for the best performance, caching, and reuse! We’ll look at ad hoc statements, dynamically constructed statements, and sp_executesql (e.g. forced statement caching) and I’ll debunk the mysteries around estimates so that you can solve performance problems the RIGHT way! If you want to demystify SQL Server’s decisions used for statement execution, query plans, and plan caching – this is the place to be!

The link to the main SQLSaturday Dublin web page is here.

There are a ton of excellent speakers attending from around the world, so it’s going to be a really great weekend – we hope to see you there!