New online course: Why Physical Database Design Matters

Kimberly’s first Pluralsight online training course went live today: SQL Server: Why Physical Database Design Matters

It’s 4 hours long and is the first in a multi-part series on performance.

The modules in this course are:

  • Introduction
  • Data Types and Row Size
  • Data Types and Index Size
  • Data Types and Query Performance

You can get to the course here.

Pluralsight subscriptions start at US$29/month, which gets you more than 80 hours of SQLskills SQL Server training (growing all the time) and more than 770 total developer and IT Pro courses (also growing all the time).

Enjoy!

Real world story of DBCC PAGE saving the day

Last week I answered a question on Twitter about DBCC PAGE and asked if I could be sent more details about the problem. The person was gracious enough to send me a detailed explanation, and it’s such an interesting scenario that I asked for and received permission to share this cut-down, anonymous version of the story with you.

Basically, it’s a real-world application of using DBCC PAGE to solve a nasty business problem that couldn’t be solved any other way.

The company involved produces government-regulated medical products, where tracking products and their ingredients is absolutely critical: any problem with materials must lead to the affected products being withdrawn, to prevent possible loss of life.

The company has an electronic document management application that stores all the tracking documents about all the ingredients, processes, and products. Earlier this year the application started to suffer performance problems, which turned out to be caused by the connection pool running out of threads. The vendor fix was to reboot. This cleared the problem, but then it started again. Rinse and repeat. After a few cycles of this, the performance problems started appearing more and more frequently after each reboot, leading to lots of complaints.

The DBA got involved and noticed a bunch of blocked sessions, each blocked by an unresolved DTC transaction holding locks on table records. As more records became locked after each reboot, the blocking got worse and worse.
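As a sketch of how you might find these (the exact query the DBA used isn't in the story I was sent): orphaned DTC transactions show up in the DMVs with a session ID of -2, so you can list the locks they're holding, along with the unit-of-work (UOW) GUID that identifies each DTC transaction:

```sql
-- Locks held by orphaned DTC transactions (these show up as session ID -2)
SELECT
    [resource_type],
    DB_NAME ([resource_database_id]) AS [Database],
    [resource_description],            -- identifies the page/key involved
    [request_mode],
    [request_owner_guid]               -- the DTC unit-of-work (UOW) GUID
FROM sys.dm_tran_locks
WHERE [request_session_id] = -2;
GO

-- Once a UOW is confirmed safe to terminate, it can be killed by GUID:
-- KILL '<UOW GUID from the query above>';
```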

The solution? Roll back the DTC transactions.

The problem? Some of these DTC transactions were days old, and some documents tied to them had already been destroyed. Rolling back the DTC transactions would have left the integrity of large amounts of product inventory in question, endangering patients. It might have been possible to check through everything by hand, but that would have been prohibitively expensive.

DBCC PAGE to the rescue. Using the DMVs, the DBA figured out all the X-locked records and dumped all the associated pages (about 50) with DBCC PAGE dump style 3. By working with the application vendor and the application admin, the information in the records allowed the exact documents involved to be determined, and further querying of the application database confirmed that the documents had indeed been successfully imported. This meant all the hung transactions could be safely terminated.
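For reference, the mechanics look like this (with an illustrative database name and page ID — the real ones come from the resource_description values of the locks in sys.dm_tran_locks):

```sql
-- Send DBCC PAGE output to the console instead of the error log
DBCC TRACEON (3604);
GO

-- Dump a page with style 3: page header plus a per-row interpretation of
-- every column value - syntax is DBCC PAGE (db, file ID, page ID, style)
DBCC PAGE (N'MyDatabase', 1, 23456, 3);  -- hypothetical database/file/page
GO
```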

The ability to look into the pages and figure out which documents were involved saved the company from having to write off a ton of inventory.

Bottom line: undocumented commands are *NOT* just for the intellectual interest of poking about in the internals or for dealing with corruption.

How to tell who changed a log file characteristic?

My good friend Orson Weston (@weston12) posted a question to #sqlhelp on Twitter earlier: Is there a way out of the box to find when and who changed the max file size for a log file?

You can’t tell this from the default trace as that just logs that the database was altered, not the log file.

But yes, I can tell you who made a change to the configuration of the log file for a database, using transaction log analysis.

I just ran the following code to set up the demo:

CREATE DATABASE foo2;
GO
ALTER DATABASE foo2
MODIFY FILE (NAME = N'foo2_log', MAXSIZE = 100GB);
GO

If I want to know who changed the log file properties, I can rely on two facts: the log file is file ID 2, and the log file characteristics are stored in the hidden sysprufiles system table, where slot 0 is the MDF and slot 1 is the LDF. Using fn_dblog to look in the transaction log for changes to that slot in that table, I can run the following code:

SELECT
	SUSER_SNAME ([Transaction SID]) AS [User],
	[Begin Time] AS [Change Time]
FROM
	fn_dblog (NULL, NULL)
WHERE
	[Transaction ID] =
		(SELECT
			[Transaction ID]
		FROM
			fn_dblog (NULL, NULL)
		WHERE
			[Operation] = N'LOP_MODIFY_ROW'
			AND [AllocUnitName] = N'sys.sysprufiles.clst'
			AND [Slot ID] = 1)
	AND [Operation] = N'LOP_BEGIN_XACT';
GO


User                                        Change Time
------------------------------------------- ------------------------
APPLECROSS\paul                             2013/09/18 14:01:25:310

It’s harder to tell exactly what changed, but you can work that out by looking at the structure of the sysprufiles records and matching it against the offset information in the LOP_MODIFY_ROW log record in the fn_dblog output. Not impossible, but tedious. In most cases it’s enough to know that someone changed the log file characteristics at all.
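A sketch of pulling the relevant columns from the same log record (column names as they appear in the fn_dblog output; interpreting the before/after bytes still requires knowing the sysprufiles record layout):

```sql
-- Pull the modified-bytes details for the sysprufiles change, to match
-- against the sysprufiles record structure
SELECT
	[Offset in Row],        -- where in the record the change was made
	[Modify Size],          -- how many bytes changed
	[RowLog Contents 0],    -- before image of the changed bytes
	[RowLog Contents 1]     -- after image of the changed bytes
FROM
	fn_dblog (NULL, NULL)
WHERE
	[Operation] = N'LOP_MODIFY_ROW'
	AND [AllocUnitName] = N'sys.sysprufiles.clst'
	AND [Slot ID] = 1;
GO
```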

If you’re interested in a different log file (why do you have more than one?) then figure out which slot ID it is in sysprufiles and substitute that slot ID in the code.

And if you don’t have any information in the current transaction log, you can scan your log backups using the same code and using fn_dump_dblog instead (see here for details, syntax, and a usage warning).

Hope this helps!