New SSIS Immersion Events with Andy Leonard and Tim Mitchell

We’re very excited to announce that we’re partnering with our friends at Linchpin People to offer two new Immersion Events on SSIS, both debuting in 2016 alongside our existing classes.

IESSIS1: Immersion Event on Learning SQL Server Integration Services

This is a 5-day course taught by Andy Leonard, with the following modules:

  1. Introduction
  2. Creating Your First SSIS package
  3. Introduction to the Data Flow Task
  4. Data Flow Task 201
  5. Data Flow Task 202
  6. The Control Flow
  7. Loop Containers
  8. Data Flow Task 301
  9. Data Flow Task 302
  10. Data Flow Task 303
  11. Event Handlers, Logging, and Configurations
  12. Security, Deployment, and Execution
  13. ETL Design Patterns
  14. Enterprise Execution Patterns

You can read through the detailed curriculum here.

The first offering of this new class will be in Chicago, at our usual location, from May 2-6, 2016 – all the details are here.

IESSIS2: Immersion Event on Advanced SQL Server Integration Services

This is a 4-day course taught by Andy Leonard and Tim Mitchell, with the following modules:

  1. SSIS Catalog
  2. SSIS Security
  3. Load Performance
  4. Data Flow Internals
  5. Testing Patterns
  6. SSIS Scripting
  7. Data Quality and SSIS
  8. Hacking the SSIS Catalog
  9. Advanced Logging
  10. Advanced SSIS Load Patterns
  11. Error Handling
  12. Data Edge Cases
  13. Business Intelligence Markup Language (Biml)

You can read through the detailed curriculum here.

The first offering of this new class will be in Bellevue, at our usual location, from September 19-22, 2016 – all the details are here.

Cool stuff – hope to see you there!

2016 Chicago and Bellevue classes open for registration

I’ve just released our first sets of 2016 US classes for registration!

Our classes in April/May will be in Chicago, IL:

  • IE0: Immersion Event for Junior/Accidental DBAs
    • April 18-20 (US$120 discount for registering in 2015)
  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • April 18-22 (US$200 discount for registering in 2015)
  • IEPTO2: Immersion Event on Performance Tuning and Optimization – Part 2
    • April 25-29 (US$200 discount for registering in 2015)
  • IEBI: Immersion Event on Business Intelligence
    • April 25-29 (US$200 discount for registering in 2015)
  • IEHADR: Immersion Event on High Availability and Disaster Recovery
    • May 2-6 (US$200 discount for registering in 2015)
  • **NEW** IESSIS1: Immersion Event on Learning SQL Server Integration Services
    • May 2-6 (US$200 discount for registering in 2015)

Our classes in September will be in Bellevue, WA:

  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • September 12-16 (US$200 discount for registering in 2015)
  • IEPTO2: Immersion Event on Performance Tuning and Optimization – Part 2
    • September 19-23 (US$200 discount for registering in 2015)
  • **NEW** IESSIS2: Immersion Event on Advanced SQL Server Integration Services
    • September 19-22 (US$160 discount for registering in 2015)

We’ll likely have some US classes in the second half of 2016 (details in Spring 2016), and there will be classes in London in June that I’ll announce in the next few days.

As you can see, we’re offering discounts off the early-bird price for all our 2016 classes if you register before the end of this year. The regular early-bird prices will apply from January 1st, 2016. If you’ve previously attended an Immersion Event, there’s a larger discount that’s always available to you whenever you register.

You can get all the logistical, registration, and curriculum details by drilling down from our main schedule page.

We hope to see you there!

“Attempt to read or write protected memory” error from SSMS for System.Data

I just spent a couple of hours fruitlessly trying to solve a problem and thought I’d blog the solution in case anyone else hits the issue.

A few months back I got a new laptop and installed SQL Server 2014 on it and everything worked fine. A few weeks ago I installed SQL Server 2008 R2, SQL Server 2012, and Visual Studio 2013. After that, any time I tried to use the Dedicated Admin Connection (DAC) through SSMS, I got this error:

[Screenshot: the “Attempt to read or write protected memory” error dialog, referencing System.Data]

I did some research online and couldn’t find any solutions that worked. So I tried repairing the installation, installing 2014 SP1 CU2 (latest build at time of writing), and a bunch of other things – to no avail. I was just about to give up and post a request for help when I thought I’d do one more search online.

When I did that, I noticed that all the solutions I’d tried had revolved around the assembly name in the error being something to do with SQL Server. In my case, it was System.Data, which is a .NET assembly. I added that into my search and found a bunch of new hits. Lo and behold, buried in the comments on a Stack Overflow question, I found the solution.

Turns out the problem was caused by an upgraded .NET version, and the solution was to run the following from a command line and then reboot:

netsh winsock reset

And after that SSMS worked perfectly.

Hope this post helps others find the answer quickly in future!

My upcoming sessions at PASS and SQLintersection

The two major Fall conferences (PASS Summit and SQLintersection) are coming up in October, so I wanted to give you a heads-up about what I’ll be presenting.

The workshop I’ll be doing at both conferences is based on one of my favorite topics – wait statistics and using them for performance troubleshooting. I’m very passionate about helping people learn about wait statistics and how to use them *correctly* as an invaluable aid when troubleshooting performance problems in SQL Server. You may have seen one of my user group sessions over the years where I’ve talked about wait stats for 60-90 minutes, but here I’ll have an entire day to expand into techniques for further troubleshooting, rather than just evaluating the wait and latch statistics themselves.

The no-frills title of the workshop is: Performance Troubleshooting Using Waits and Latches

Here’s the abstract:

One of the first things you should check when investigating performance issues is wait and latch statistics, as these can often point you in the right direction for further analysis. Unfortunately, many people misinterpret what SQL Server is telling them and jump to conclusions about how to solve the problem – what is often called ‘knee-jerk performance tuning’. In this full-day workshop, you’ll learn how to investigate and interpret wait and latch statistics – practical techniques you can take home and start using immediately. You’ll also learn what a myriad of wait and latch types actually mean and how you can investigate lightly-documented types to gain insight into what causes them. Don’t waste time when it comes to performance troubleshooting; wait statistics are the fastest route to understanding your problems, and this workshop will help you get there faster.
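If you’ve never looked at wait statistics before, the starting point is the sys.dm_os_wait_stats DMV. Here’s a deliberately minimal example query to whet your appetite (the handful of wait types filtered out below are just a few of the common benign ones – the workshop goes much deeper than this):

-- A quick look at the top waits on the instance, excluding a few benign types
SELECT TOP (10)
    [wait_type],
    [wait_time_ms] / 1000.0 AS [Wait_S],
    [signal_wait_time_ms] / 1000.0 AS [Signal_S],
    [waiting_tasks_count] AS [WaitCount]
FROM sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
    N'CHECKPOINT_QUEUE', N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
    N'REQUEST_FOR_DEADLOCK_SEARCH', N'XE_TIMER_EVENT', N'BROKER_TO_FLUSH')
ORDER BY [wait_time_ms] DESC;
GO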

If you come along, I promise you’ll learn a ton of really, really useful information and techniques – I’m tremendously looking forward to teaching it.

You can get more details about this year’s PASS Summit in Seattle, October 27-30 at this website.

You can get more details about this year’s Fall SQLintersection in Las Vegas, October 26-29 at this website.

Yes, unfortunately both conferences are unavoidably in the same week, which means some creative flying around the country on our part.

Whichever conference you go to, I’m looking forward to meeting you and I hope to see you in my workshop!

Cheers

Disaster recovery 101: fixing a broken boot page

One of the corruptions that can stymie all efforts at disaster recovery is a broken boot page. If the boot page can’t be processed, the database can’t be brought online or even put into emergency mode. I first demonstrated how to work around this in my session on Advanced Data Recovery Techniques at PASS in 2014, and here I’d like to walk through the steps of what to do.

First of all, I’ll create a broken boot page in a test database:

-- Drop old database
USE [master];
GO

IF DATABASEPROPERTYEX (N'Company', N'Version') > 0
BEGIN
	ALTER DATABASE [Company] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
	DROP DATABASE [Company];
END

-- Create database and table
CREATE DATABASE [Company] ON PRIMARY (
    NAME = N'Company',
    FILENAME = N'D:\SQLskills\Company.mdf')
LOG ON (
    NAME = N'Company_log',
    FILENAME = N'D:\SQLskills\Company_log.ldf');
GO

USE [Company];
GO

CREATE TABLE [Sales] (
	[SalesID] INT IDENTITY,
	[CustomerID] INT DEFAULT CONVERT (INT, 100000 * RAND ()),
	[SalesDate] DATETIME DEFAULT GETDATE (),
	[SalesAmount] MONEY DEFAULT CONVERT (MONEY, 100 * RAND ()));

CREATE CLUSTERED INDEX [SalesCI] ON [Sales] ([SalesID]);
GO

-- Populate the table
SET NOCOUNT ON;
GO

INSERT INTO [Sales] DEFAULT VALUES;
GO 5000

-- Create some nonclustered indexes
CREATE NONCLUSTERED INDEX [SalesNCI_CustomerID] ON [Sales] ([CustomerID]);

CREATE NONCLUSTERED INDEX [SalesNCI_SalesDate_SalesAmount] ON [Sales] ([SalesDate]) INCLUDE ([SalesAmount]);
GO

-- Create a good backup
BACKUP DATABASE [Company] TO DISK = N'C:\SQLskills\OldCompany.bck'
WITH INIT;

-- And detach it
USE [master]
GO

EXEC sp_detach_db N'Company';
GO
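As a quick aside: before the detach, you can also see what an intact boot page contains from inside SQL Server using the (undocumented but safe) DBCC PAGE command – useful for comparing against what the hex editor shows:

-- Dump the boot page (file 1, page 9) of the Company database.
-- Trace flag 3604 sends DBCC PAGE output to the console.
DBCC TRACEON (3604);
GO
DBCC PAGE (N'Company', 1, 9, 3);
GO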

Now I’ll corrupt it using a hex editor. The one I like to use is called HxD and it’s a freeware tool you can download from here.

[Screenshot: the Company.mdf data file open in HxD]

And then go to the offset of the boot page. The boot page is always page 9, so the offset is 8192 x 9 = 73728 bytes.

[Screenshot: navigating to offset 73728 in HxD]

And make sure to select the ‘dec’ option so the number is interpreted as decimal, as an offset from the beginning of the file:

[Screenshot: entering the offset in decimal, relative to the beginning of the file]

You’ll see the boot page contents, including the name of the database:

[Screenshot: the boot page contents in HxD, showing the database name]

Highlight all the lines down to the database name, and then right-click and select Fill selection…

[Screenshot: selecting Fill selection… from the right-click menu]

And then select the default to fill that area with zeroes:

[Screenshot: the Fill selection dialog, filling the selection with zeroes]

Which will make it look like this:

[Screenshot: the zeroed-out boot page]

Then hit the disk icon to save the file. Ignore any security errors you get about the ownership of the backup file.

Throughout the rest of these steps, if you get “Access is denied” from SQL Server, you need to change the security in the directory you’re using so the SQL Server service account has the correct file permissions.

You will also see different messages through some of these steps depending on which version of SQL Server you’re using – I’m using SQL Server 2014.

Exit from HxD.

Now we’re ready to try to salvage this database.

First I’ll try to attach it:

USE [master];
GO

-- Try attaching it again
EXEC sp_attach_db @dbname = N'Company', 
    @filename1 = N'D:\SQLskills\Company.mdf', 
    @filename2 = N'D:\SQLskills\Company_log.ldf';
GO
Msg 1813, Level 16, State 2, Line 5
Could not open new database 'Company'. CREATE DATABASE is aborted.
Msg 824, Level 24, State 2, Line 5
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:9; actual 0:0). It occurred during a read of page (1:9) in database ID 6 at offset 0x00000000012000 in file 'D:\SQLskills\Company.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

The attach failed. You can verify this yourself by looking in sys.databases for the database – it’s not there.
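For example, a trivial check:

-- No row comes back, so the attach really did fail
SELECT [name], [state_desc]
FROM sys.databases
WHERE [name] = N'Company';
GO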

I’ll try a hack-attach. First I copy off the corrupt database files, then create a dummy database and set it offline (to release SQL Server’s locks on the files):

CREATE DATABASE [Company] ON PRIMARY (
    NAME = N'Company',
    FILENAME = N'D:\SQLskills\Company.mdf')
LOG ON (
    NAME = N'Company_log',
    FILENAME = N'D:\SQLskills\Company_log.ldf');
GO

ALTER DATABASE [Company] SET OFFLINE;
GO

Then I delete the dummy database files, copy in the original corrupt database files, and try to bring the database online again to complete the hack-attach:

ALTER DATABASE [Company] SET ONLINE;
GO
Msg 5181, Level 16, State 5, Line 33
Could not restart database "Company". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 33
ALTER DATABASE statement failed.
Msg 824, Level 24, State 2, Line 33
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:9; actual 0:0). It occurred during a read of page (1:9) in database ID 6 at offset 0x00000000012000 in file 'D:\SQLskills\Company.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

Hmmm… but did it work?

SELECT DATABASEPROPERTYEX (N'Company', N'STATUS');
GO
SUSPECT

Yes! Now let’s try doing an emergency-mode repair:

ALTER DATABASE [Company] SET EMERGENCY;
GO
ALTER DATABASE [Company] SET SINGLE_USER;
GO
Msg 824, Level 24, State 2, Line 43
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:9; actual 0:0). It occurred during a read of page (1:9) in database ID 6 at offset 0x00000000012000 in file 'D:\SQLskills\Company.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

Nope – it’s not going to work because the broken boot page won’t allow the database to be accessed at all.

Now we’ll fix it, again using a hex editor.

First off I’ll set the database offline again, copy off the broken files, and drop the database so it’s gone from SQL Server.

ALTER DATABASE [Company] SET OFFLINE;
GO

-- ***** Copy off the corrupt files

DROP DATABASE [Company];
GO

Now I’ll restore an older copy of the database and set it offline so I can open the files with HxD:

RESTORE DATABASE [Company] FROM
DISK = N'C:\SQLskills\OldCompany.bck'
WITH REPLACE;
GO

ALTER DATABASE [Company] SET OFFLINE;
GO

In HxD, I then open the restored copy of the database AND the corrupt database, and go to the boot page offset in both, just as I did in the setup phase above:

[Screenshot: the restored copy and the corrupt file open in HxD, both at the boot page offset]

And you can see that the boot page is intact in the restored copy.

Next, I’ll highlight from offset 12000 (hexadecimal – that’s 73728 in decimal) down to, but not including, 14000 (81920 in decimal), and then right-click and hit Copy to copy the whole 8,192 bytes of the boot page. These offsets are the same in every database.

[Screenshot: the intact boot page highlighted in the restored copy]

Then go to offset 12000 in the corrupt file, right-click and select Paste Write to overwrite the broken boot page:

[Screenshot: selecting Paste Write at offset 12000 in the corrupt file]

And you’ll see everything between 12000 and 14000 turn red:

[Screenshot: the overwritten boot page bytes shown in red]

Now save the file and exit HxD.

Delete the restored files, and rename the corrupt files to their correct names. You’ll notice that HxD created a backup copy of the file we just changed – you can ignore it.

ALTER DATABASE [Company] SET ONLINE;
GO
Msg 5181, Level 16, State 5, Line 79
Could not restart database "Company". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 79
ALTER DATABASE statement failed.
Msg 9003, Level 20, State 9, Line 79
The log scan number (45:16:0) passed to log scan in database 'Company' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.
Msg 3414, Level 21, State 1, Line 79
An error occurred during recovery, preventing the database 'Company' (6:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.

Good – the 824 error is gone, but now we have another issue: the LSNs in the boot page are incorrect, because we’re using an older boot page that doesn’t match the more recent transaction log. Emergency mode and/or emergency-mode repair is necessary to either access the data or repair the corrupt database.

ALTER DATABASE [Company] SET EMERGENCY;
GO
ALTER DATABASE [Company] SET SINGLE_USER;
GO
DBCC CHECKDB (N'Company', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;
GO
Warning: The log for database 'Company' has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.

In this case there were no other corruptions, so all emergency-mode repair had to do was rebuild the transaction log.

Now we can get into the database and access the data.
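As the warning message says, it’s worth resetting the database options and running DBCC CHECKDB again to validate the database – for instance:

-- Take the database out of dbo-only mode and re-check it
ALTER DATABASE [Company] SET MULTI_USER;
GO
DBCC CHECKDB (N'Company') WITH NO_INFOMSGS;
GO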

Note: the data is likely to be transactionally inconsistent. If you continue to use the database in production following this procedure, you do so entirely at your own risk.

If you don’t have a backup of the original database, you can use any database as a source for a good boot page – just make sure it has the same name as the one you’re trying to fix. Bear in mind that the further the substitute boot page is from matching the original database, the more trouble you’ll have getting crash recovery to work.
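As an illustration (the file names and paths here are just examples), a donor database could be as simple as:

-- Create an empty database with the SAME name as the broken one (on another
-- instance, or after moving the broken files out of the way), then take it
-- offline so its data file can be opened in the hex editor and the intact
-- boot page copied out of it.
CREATE DATABASE [Company] ON PRIMARY (
    NAME = N'Company',
    FILENAME = N'D:\SQLskills\CompanyDonor.mdf')
LOG ON (
    NAME = N'Company_log',
    FILENAME = N'D:\SQLskills\CompanyDonor_log.ldf');
GO

ALTER DATABASE [Company] SET OFFLINE;
GO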

And there you have it – no longer do broken boot pages have to curtail data recovery efforts.

Enjoy!

T-SQL Tuesday #67 – monitoring log activity with Extended Events


On the second Tuesday of each month, many people in the SQL Server community join together to all blog on the same topic – a cool idea from Adam Machanic many years ago.

This month’s topic is Extended Events, hosted by Jes Borland (b | t) – you can see her original post here.

Yesterday in class a student and I were discussing how to monitor transaction log activity, to show that when a transaction commits, there is a write to the transaction log file. This is easy to do with Extended Events.

I’m going to use the file_write_completed event to track writes occurring and the transaction_log event to watch log records being generated so we can see transactions committing.

First of all I’ll set up my simple scenario:

USE [master];
GO

IF DATABASEPROPERTYEX (N'Test', N'Version') > 0
BEGIN
	ALTER DATABASE [Test] SET SINGLE_USER
		WITH ROLLBACK IMMEDIATE;
	DROP DATABASE [Test];
END
GO

CREATE DATABASE [Test] ON PRIMARY (
    NAME = N'Test_data',
    FILENAME = N'D:\SQLskills\Test_data.mdf')
LOG ON (
    NAME = N'Test_log',
    FILENAME = N'C:\SQLskills\Test_log.ldf',
    SIZE = 1MB,
    FILEGROWTH = 0MB);
GO

USE [Test];
GO

CREATE TABLE TestTable (
	c1 INT IDENTITY,
	c2 CHAR (1000) DEFAULT 'a');
GO

INSERT INTO [TestTable] DEFAULT VALUES;
GO

And here’s the simple Extended Event session:

-- Drop the session if it exists. 
IF EXISTS (
	SELECT * FROM sys.server_event_sessions
		WHERE [name] = N'MonitorLog')
    DROP EVENT SESSION [MonitorLog] ON SERVER
GO

-- Create the event session
CREATE EVENT SESSION [MonitorLog] ON SERVER
	ADD EVENT [sqlserver].[file_write_completed],
	ADD EVENT [sqlserver].[transaction_log]
	ADD TARGET [package0].[ring_buffer]
		WITH (MAX_MEMORY = 50 MB, MAX_DISPATCH_LATENCY = 1 SECONDS);
GO

-- Start the session
ALTER EVENT SESSION [MonitorLog] ON SERVER
STATE = START;
GO

I ran both of those scripts, then executed the INSERT a few times.

Now if I go to Object Explorer, I can use the Live Data Viewer (in SQL Server 2012 onwards). Find the session, right-click on it and select Watch Live Data.

[Screenshot: selecting Watch Live Data for the MonitorLog session in Object Explorer]

Choose the following columns to view (right-click on any column name and select Choose Columns…): name, timestamp, database_id, file_id, size, operation.

And then run another INSERT (you might have to run two to make the first set of data show in the Viewer) and you’ll see data like below.

[Screenshot: the live data viewer showing file_write_completed and transaction_log events]

And then you can play around and watch stuff happening.
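If you’d rather pull the data with T-SQL instead of the viewer, here’s a minimal sketch that shreds the ring_buffer target of the session above (just the event name, timestamp, and database_id – add more value() calls for the other fields):

-- Read the MonitorLog session's ring_buffer target and shred the event XML
SELECT
    [xed].[event_data].value ('(@name)[1]', 'varchar(50)') AS [event_name],
    [xed].[event_data].value ('(@timestamp)[1]', 'datetime2') AS [timestamp],
    [xed].[event_data].value ('(data[@name="database_id"]/value)[1]', 'int') AS [database_id]
FROM (
    SELECT CAST ([t].[target_data] AS XML) AS [target_data]
    FROM sys.dm_xe_sessions AS [s]
    JOIN sys.dm_xe_session_targets AS [t]
        ON [t].[event_session_address] = [s].[address]
    WHERE [s].[name] = N'MonitorLog'
    AND [t].[target_name] = N'ring_buffer') AS [src]
CROSS APPLY [src].[target_data].nodes ('//RingBufferTarget/event') AS [xed] ([event_data]);
GO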

Enjoy! (and check out the other posts from this T-SQL Tuesday)

November classes in Chicago

We’ve released our Fall lineup of classes for registration!

Our November classes in Chicago, IL will be:

  • IE0: Immersion Event for Junior/Accidental DBAs
    • November 16 – 18
  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • November 16 – 20
  • IEPDS: Immersion Event on Practical Data Science
    • November 16 – 19

You can get all the logistical, registration, and curriculum details by drilling down from our main schedule page.

We hope to see you there!

New course: Index Fragmentation Internals, Analysis, and Solutions

My latest Pluralsight course has been published!

It’s called SQL Server: Index Fragmentation Internals, Analysis, and Solutions and is just over 2.5 hours long.

The modules are:

  • Introduction
  • Index Structure and Index Uses
  • Types of Index Fragmentation
  • Causes of Index Fragmentation
  • Detecting Index Fragmentation
  • Avoiding Index Fragmentation
  • Removing Index Fragmentation

Check it out here.

We now have more than 130 hours of SQLskills online training available, all for as little as $29/month through Pluralsight (including more than four thousand other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Finding a transaction in the log for a particular user

In the IEHADR class we just had in Chicago, I was doing a demo of looking in the transaction log to find the point at which a table was dropped, so that a restore could be performed (as described in this blog post). One of the students asked how to find a transaction for a particular user, so I demo’d that and thought it would make a good little post.

This can be done using fn_dblog, or if the relevant log isn’t available on the system any more, using fn_dump_dblog, albeit more slowly.

The two pieces of information needed are the user and the rough time that the transaction occurred.

The username can’t be used to search the log directly, but every LOP_BEGIN_XACT log record contains the SID of the login that ran the transaction. The SID can be obtained from the username using the SUSER_SID function (remember that it’s more complicated if someone’s used EXECUTE AS, as that can mask who they really are – details in this post).

For instance, on my laptop:

SELECT SUSER_SID ('APPLECROSS\PAUL') AS [SID];
GO
SID
-----------------------------------------------------------------
0x0105000000000005150000008E888D4129BB677CAA278B76E8030000

Then you can use that SID as a filter for fn_dblog (or fn_dump_dblog), like so:

SELECT
	[Current LSN],
	[Operation],
	[Transaction ID],
	[Begin Time],
	LEFT ([Description], 40) AS [Description]
FROM
	fn_dblog (NULL, NULL)
WHERE
	[Transaction SID] = SUSER_SID ('APPLECROSS\PAUL');
GO
Current LSN             Operation                       Transaction ID Begin Time               Description
----------------------- ------------------------------- -------------- ------------------------ ----------------------------------------
00000021:0000049d:0001  LOP_BEGIN_XACT                  0000:000006c8  2015/06/03 11:18:13:790  Backup:CommitDifferentialBase;0x01050000
00000021:000004a5:0001  LOP_BEGIN_XACT                  0000:000006c9  2015/06/03 11:18:13:810  Backup:CommitLogArchivePoint;0x010500000
00000021:000004a5:0002  LOP_BEGIN_XACT                  0000:000006ca  2015/06/03 11:18:13:810  Backup:CommitLogArchivePoint;0x010500000
00000021:000004a7:0003  LOP_BEGIN_XACT                  0000:000006cb  2015/06/03 11:18:13:820  INSERT;0x0105000000000005150000008e888d4
00000021:000004a7:0004  LOP_BEGIN_XACT                  0000:000006cc  2015/06/03 11:18:13:820  AllocHeapPageSimpleXactDML;0x01050000000
00000021:000004a7:0007  LOP_BEGIN_XACT                  0000:000006cd  2015/06/03 11:18:13:820  AllocFirstPage;0x01050000000000051500000
00000021:000004ad:0002  LOP_BEGIN_XACT                  0000:000006ce  2015/06/03 11:18:13:820  INSERT;0x0105000000000005150000008e888d4
00000021:000004ae:0001  LOP_BEGIN_XACT                  0000:000006cf  2015/06/03 11:18:16:112  INSERT;0x0105000000000005150000008e888d4
00000021:000004af:0001  LOP_BEGIN_XACT                  0000:000006d0  2015/06/03 11:19:17:306  INSERT;0x0105000000000005150000008e888d4
00000021:000004b0:0001  LOP_BEGIN_XACT                  0000:000006d1  2015/06/03 11:22:35:451  DELETE;0x0105000000000005150000008e888d4
00000021:000004b1:0001  LOP_BEGIN_XACT                  0000:000006d2  2015/06/03 11:27:42:998  INSERT;0x0105000000000005150000008e888d4
00000021:000004b2:0001  LOP_BEGIN_XACT                  0000:000006d3  2015/06/03 11:29:56:044  DELETE;0x0105000000000005150000008e888d4

.
.
.

Obviously the transactions above are a contrived example. You can imagine the case of lots of transactions spread out over a few hours (or even over a day, investigated through log backups with fn_dump_dblog). To narrow things down to the transaction you want, you could look through the list manually for the rough time, or specify a time range on the SELECT using predicates on the [Begin Time] column in the fn_dblog output.

For example:

SELECT
	[Current LSN],
	[Operation],
	[Transaction ID],
	[Begin Time],
	LEFT ([Description], 40) AS [Description]
FROM
	fn_dblog (NULL, NULL)
WHERE
	[Transaction SID] = SUSER_SID ('APPLECROSS\PAUL')
AND ([Begin Time] > '2015/06/03 11:18:15' AND [Begin Time] < '2015/06/03 11:25:00');
GO
Current LSN             Operation                       Transaction ID Begin Time               Description
----------------------- ------------------------------- -------------- ------------------------ ----------------------------------------
00000021:000004ae:0001  LOP_BEGIN_XACT                  0000:000006cf  2015/06/03 11:18:16:112  INSERT;0x0105000000000005150000008e888d4
00000021:000004af:0001  LOP_BEGIN_XACT                  0000:000006d0  2015/06/03 11:19:17:306  INSERT;0x0105000000000005150000008e888d4
00000021:000004b0:0001  LOP_BEGIN_XACT                  0000:000006d1  2015/06/03 11:22:35:451  DELETE;0x0105000000000005150000008e888d4

And if you knew what the operation was, you could narrow it down by the Description too.
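For example, building on the same query (a simple sketch – adjust the pattern to whatever operation you’re looking for):

SELECT
	[Current LSN],
	[Operation],
	[Transaction ID],
	[Begin Time],
	LEFT ([Description], 40) AS [Description]
FROM
	fn_dblog (NULL, NULL)
WHERE
	[Transaction SID] = SUSER_SID ('APPLECROSS\PAUL')
AND [Description] LIKE N'DELETE%';
GO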

Then it’s a simple case of taking the Current LSN of the LOP_BEGIN_XACT log record of the transaction you’re interested in, and restoring a copy of the database using the STOPBEFOREMARK trick (which I showed in my previous post) to stop just before that transaction.
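As a rough sketch of that final step (all the database and backup file names here are placeholders, and the LSN shown is the example DELETE transaction above – 00000021:000004b0:0001 – converted into the decimal form that STOPBEFOREMARK expects, as described in that previous post):

-- Restore a copy of the database elsewhere, then stop the log restore just
-- before the transaction identified above. Names and paths are placeholders.
RESTORE DATABASE [MyDB_Copy]
FROM DISK = N'D:\SQLskills\MyDB_Full.bck'
WITH
    MOVE N'MyDB' TO N'D:\SQLskills\MyDB_Copy.mdf',
    MOVE N'MyDB_log' TO N'D:\SQLskills\MyDB_Copy_log.ldf',
    NORECOVERY;
GO

RESTORE LOG [MyDB_Copy]
FROM DISK = N'D:\SQLskills\MyDB_Log.bck'
WITH
    STOPBEFOREMARK = 'lsn:33000000120000001',
    RECOVERY;
GO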

Enjoy!

New course: part 2 of Kimberly’s Optimizing Stored Procedure Performance

Kimberly’s latest Pluralsight course is live!

This is part 2 of her very popular course: SQL Server: Optimizing Stored Procedure Performance. The course is more than 3 hours long and has the following modules:

  • Introduction
  • Session Settings
  • Session Settings (section 2)
  • Special Considerations
  • Troubleshooting Techniques

Check it out here.

We now have more than 130 hours of SQLskills online training available, all for as little as $29/month through Pluralsight (including more than four thousand other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!