DBCC CHECKDB Parallel Checks and SQL Server Edition

It’s been a few weeks since the PASS Summit but I’m still getting the occasional email from people who attended one of my sessions. I consider this a good thing – I believe that if someone follows up with me a few weeks after they attended a session, then something I said stuck with them and it was worth their time to follow up. Some people have had questions about something I said during a session, and others are following up on discussions we had during the week.

I had one question about minimizing the resources used by DBCC CHECKDB, and one of the suggestions I provided was to reduce MAXDOP when CHECKDB was running, to limit the number of processors used by CHECKDB. However, I want to point out that CHECKDB only runs in parallel on Enterprise Edition. You may be aware of this already, as Paul mentions it in one of his posts, and also in the SQL Server 2008 Internals book. But, as I was perusing the DBCC CHECKDB entry online, I discovered that it contains incorrect information. From BOL (emphasis mine):

“Checking Objects in Parallel

By default, DBCC CHECKDB performs parallel checking of objects. The degree of parallelism is automatically determined by the query processor. The maximum degree of parallelism is configured just like parallel queries. To restrict the maximum number of processors available for DBCC checking, use sp_configure. For more information, see Configure the max degree of parallelism Server Configuration Option. Parallel checking can be disabled by using trace flag 2528. For more information, see Trace Flags (Transact-SQL).”

The first sentence is incorrect. DBCC CHECKDB does not perform parallel checking of objects in Standard Edition. At all. CHECKDB is always single-threaded in Standard Edition and I confirmed this with the SQL Server development team.

For Enterprise Edition, CHECKDB respects the maximum degree of parallelism setting configured for the instance. If you have MAXDOP set to 0 for the instance, but want to restrict CHECKDB to only use 2 processors, you need to either change MAXDOP for the instance to 2, or you need to use Resource Governor (also an Enterprise-only feature).
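As a sketch of both approaches on Enterprise Edition (note that changing MAXDOP via sp_configure affects all parallel queries on the instance, not just CHECKDB, and trace flag 2528 is the option from the BOL excerpt above):

```sql
-- Option 1: cap instance-level MAXDOP at 2; on Enterprise Edition,
-- DBCC CHECKDB honors this setting (as do all other parallel queries).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;

-- Option 2: disable parallel checking entirely with trace flag 2528
-- for the duration of the check.
DBCC TRACEON (2528, -1);
DBCC CHECKDB (N'AdventureWorks2012') WITH NO_INFOMSGS;
DBCC TRACEOFF (2528, -1);
```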

I did enter a Connect item for this, and hopefully the documentation will be fixed soon. I hope this clears up any confusion that may exist.

What Checks Update dbccLastKnownGood?

One of the methods I’ve used to check when a client last ran DBCC CHECKDB on their database is the dbccLastKnownGood value on the boot page (page 9 in the PRIMARY file).  When working with a client a couple of weeks ago, the value for dbccLastKnownGood was from 2006, but the customer stated they were running integrity checks.  When I investigated their method for checks, I discovered that due to the database size they were running DBCC CHECKALLOC and DBCC CHECKCATALOG weekly, and checking different user tables with DBCC CHECKTABLE each night.  Breaking out the checks between the different commands does not update dbccLastKnownGood.  So which commands do update it?  Let’s have a look…

I have a copy of the AdventureWorks2012 database installed, and I can look at the boot page using DBCC PAGE or DBCC DBINFO.  Both commands are undocumented, but as Paul notes in the links provided, they are safe to run.  I’ll use DBINFO:

DBCC TRACEON (3604);
DBCC DBINFO ('AdventureWorks2012');

The text below is a snippet of the output to show only what’s relevant:

dbi_dbmOldestXactLsn = 0:0:0 (0x00000000:00000000:0000)                  dbi_CloneCpuCount = 0
dbi_CloneMemorySize = 0                                                  dbi_updSysCatalog = 1900-01-01
dbi_LogBackupChainOrigin = 0:0:0
dbi_dbccLastKnownGood = 1900-01-01 00:00:00.000                          dbi_roleSequence = 0
dbi_dbmHardenedLsn = 0:0:0 (0x00000000:00000000:0000)                    dbi_localState = 0
dbi_safety = 0                                                           dbi_modDate = 2012-11-15
dbi_verRDB = 184552376                                                   dbi_lazyCommitOption = 0
dbi_svcBrokerGUID = 10048a46-646c-4c44-8037-8ea6c83f4e2c                 dbi_svcBrokerOptions =

If we run DBCC CHECKALLOC, DBCC CHECKCATALOG, and DBCC CHECKTABLE for all system and user tables, you can see that dbccLastKnownGood does not change (please don’t holler because I’m using a cursor…it does what I need for this example!):


USE [AdventureWorks2012];
GO

DBCC CHECKALLOC ('AdventureWorks2012');
DBCC CHECKCATALOG ('AdventureWorks2012');

DECLARE @schemaname varchar(500);
DECLARE @tablename varchar(500);
DECLARE @fulltable varchar(1000);
DECLARE @string varchar(5000);

DECLARE TableList CURSOR FOR
SELECT [ss].[name], [so].[name]
FROM [sys].[objects] AS so
JOIN [sys].[schemas] AS ss ON [so].[schema_id] = [ss].[schema_id]
WHERE [so].[type] IN ('S','U')
ORDER BY [so].[name];

OPEN TableList;

FETCH NEXT FROM TableList INTO @schemaname, @tablename;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fulltable = @schemaname + '.' + @tablename;
    SET @string = N'DBCC CHECKTABLE (''' + @fulltable + ''') WITH ALL_ERRORMSGS, NO_INFOMSGS;';
    EXEC (@string);
    FETCH NEXT FROM TableList INTO @schemaname, @tablename;
END

CLOSE TableList;
DEALLOCATE TableList;

DBCC DBINFO ('AdventureWorks2012');


dbi_dbmOldestXactLsn = 0:0:0 (0x00000000:00000000:0000)                  dbi_CloneCpuCount = 0
dbi_CloneMemorySize = 0                                                  dbi_updSysCatalog = 1900-01-01
dbi_LogBackupChainOrigin = 0:0:0
dbi_dbccLastKnownGood = 1900-01-01 00:00:00.000                          dbi_roleSequence = 0
dbi_dbmHardenedLsn = 0:0:0 (0x00000000:00000000:0000)                    dbi_localState = 0
dbi_safety = 0                                                           dbi_modDate = 2012-11-15
dbi_verRDB = 184552376                                                   dbi_lazyCommitOption = 0
dbi_svcBrokerGUID = 10048a46-646c-4c44-8037-8ea6c83f4e2c                 dbi_svcBrokerOptions =

No change in dbccLastKnownGood.  What happens if we run DBCC CHECKFILEGROUP?  I created a second filegroup in the AdventureWorks2012 database and created one table in the filegroup, then ran the check on that filegroup:

ALTER DATABASE [AdventureWorks2012]
ADD FILEGROUP [TestFG];
GO

ALTER DATABASE [AdventureWorks2012]
ADD FILE ( NAME = N'TestFile', FILENAME = N'D:\Databases\SQL2012\AdventureWorks2012\TestFile.ndf' ,
SIZE = 4096KB , FILEGROWTH = 1024KB )
TO FILEGROUP [TestFG];
GO

CREATE TABLE [dbo].[test] (
    [col1] INT,
    [col2] INT
) ON [TestFG];
GO

INSERT INTO [dbo].[test] ([col1], [col2]) VALUES (1, 1);  -- sample row
GO

DBCC CHECKFILEGROUP ('TestFG');
GO

DBCC DBINFO ('AdventureWorks2012');


dbi_dbmOldestXactLsn = 0:0:0 (0x00000000:00000000:0000)                  dbi_CloneCpuCount = 0
dbi_CloneMemorySize = 0                                                  dbi_updSysCatalog = 1900-01-01
dbi_LogBackupChainOrigin = 0:0:0
dbi_dbccLastKnownGood = 2012-11-15                                       dbi_roleSequence = 0
dbi_dbmHardenedLsn = 0:0:0 (0x00000000:00000000:0000)                    dbi_localState = 0
dbi_safety = 0                                                           dbi_modDate = 2012-11-15
dbi_verRDB = 184552376                                                   dbi_lazyCommitOption = 0
dbi_svcBrokerGUID = 10048a46-646c-4c44-8037-8ea6c83f4e2c                 dbi_svcBrokerOptions =

Look at that, it updated.  And finally, we know that CHECKDB will update the value, but what happens if we include the WITH PHYSICAL_ONLY option, where the checks are not quite as involved? (See Paul’s post on check options for VLDBs for additional notes on WITH PHYSICAL_ONLY.)

DBCC CHECKDB ('AdventureWorks2012') WITH PHYSICAL_ONLY;
GO

DBCC DBINFO ('AdventureWorks2012');


dbi_dbmOldestXactLsn = 0:0:0 (0x00000000:00000000:0000)                  dbi_CloneCpuCount = 0
dbi_CloneMemorySize = 0                                                  dbi_updSysCatalog = 1900-01-01
dbi_LogBackupChainOrigin = 0:0:0
dbi_dbccLastKnownGood = 2012-11-15                                       dbi_roleSequence = 0
dbi_dbmHardenedLsn = 0:0:0 (0x00000000:00000000:0000)                    dbi_localState = 0
dbi_safety = 0                                                           dbi_modDate = 2012-11-15
dbi_verRDB = 184552376                                                   dbi_lazyCommitOption = 0
dbi_svcBrokerGUID = 10048a46-646c-4c44-8037-8ea6c83f4e2c                 dbi_svcBrokerOptions =

The dbccLastKnownGood value does update, as it does with CHECKDB without the PHYSICAL_ONLY.  Therefore, if you’re relying on dbccLastKnownGood to tell you when CHECKDB was last run, realize it may not give you the entire story.
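If you want to read this value programmatically rather than eyeballing the full DBINFO output, one approach is a quick sketch like the one below, which captures the output of DBCC DBINFO WITH TABLERESULTS into a temp table (the column names match the TABLERESULTS output format):

```sql
-- Sketch: pull dbi_dbccLastKnownGood out of the DBCC DBINFO output.
CREATE TABLE #dbinfo (
    [ParentObject] varchar(255),
    [Object]       varchar(255),
    [Field]        varchar(255),
    [Value]        varchar(255)
);

INSERT INTO #dbinfo
EXEC ('DBCC DBINFO (''AdventureWorks2012'') WITH TABLERESULTS;');

SELECT [Value] AS [LastKnownGood]
FROM #dbinfo
WHERE [Field] = 'dbi_dbccLastKnownGood';

DROP TABLE #dbinfo;
```

Just remember the caveat above: this date moves for CHECKFILEGROUP and CHECKDB WITH PHYSICAL_ONLY too, so it does not prove a full CHECKDB ran.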

Note: post edited 11/16/2012 to replace screen shots of DBCC DBINFO output with text, as the images did not render well in all browsers.

T-SQL Tuesday #36: What Does Community Mean?


I find this month’s T-SQL Tuesday topic quite appropriate, as it follows the 2012 PASS Summit, a time when I get to see so many members of the SQL Server Community.  Chris Yates (@YatesSQL) is hosting this month and asks, “What Does Community Mean to You?”  My short answer? #sqlfamily, #sqlhelp and opportunity.

When I think of the SQL Community, it is the people that come to mind – so many people that I have met and become friends with, and so much of it is due to Twitter.  When I think of the SQL Community, I think of Twitter.  I do.  It may seem crazy, and I know that there is more to the community than conversations on Twitter – there is the Summit, there are User Groups, there are SQLSaturdays, there are blogs – the community encompasses so much.  But how do I communicate with everyone?  So frequently, so quickly?  How have I met so many people from all over the US?  All over the world?  Twitter.

Our community is incredibly unique.  None of my friends or family have this kind of cohort – they don’t have the friendships that span the globe.  They don’t regularly converse with colleagues all around the world, working together to answer questions and solve problems.  I’ve lost so many hours of sleep due to chats with Rob Farley (I curse that 17.5 hour time difference!) and Jonathan Kehayias (I’m not convinced that he sleeps) but it’s all worth it.

I have found some really close friends in the Community.  Friends that I would spend time with even if they didn’t work with SQL Server.  Friends that I know I can call or email at any time for anything.  Just over a year ago I blogged about #sqlfamily, and what I wrote then still holds true today.

The SQL Community is also about helping others.  The generosity of individuals continually astounds me.  So many members of the community volunteer for PASS, helping to make the organization so successful.  Many people help others solve technical problems via the #sqlhelp hashtag on Twitter.  And when it’s too much for 140 characters, blog posts are written, questions posted to forums or emails exchanged.  I’ve seen it happen to others, it’s happened to me.  The people of this community share information and provide help, and then celebrate success.

And finally, the SQL Community can provide significant opportunities for people.  The opportunity to meet a favorite blogger at Summit, the opportunity to talk to someone at Microsoft to understand how it really works, the opportunity to meet someone who will become that friend you call when you have life changing news.  And yes, opportunity can be a job opening.  How many jobs have been found through the community?  Too many to count.  I found my team through the SQL Community.  If I were not involved, I do not believe I would have the job I have today.  I love what I do.  I work with five amazing individuals.  People that I respect personally and professionally.  My job, just like the Community, is exhausting, rewarding, and a whole lot of fun.

If you’re not involved, I recommend you think about it.  You never know where it will take you.  And for those of you in this community – I thank you for making it what it is today, and I cannot wait to see where we go in the future.

PASS Summit 2012: Day 2

Happy second day of Summit!  Holy buckets, yesterday was a busy day.  But it was great.  After the keynote I headed off to present my DBCC session which went well.  Thank you to everyone that attended and to all of those who had questions and provided feedback.  I am presenting again today, same time (10:15 AM PST) and same room (618-620), but this time it’s Demystifying Database Statistics.  Statistics is another of my favorite topics, I expect the time will fly by just as it did yesterday.

I just finished my PASS TV interview which was short but fun and I am at the blogger’s table again this morning, quite curious to see what the keynote brings.  I know we will hear from Douglas McDowell, Thomas LaRock and Quentin Clark.  Douglas and Thomas are Vice Presidents on the PASS Board of Directors and Quentin is Corporate Vice President, SQL Program Management at Microsoft.

And we’re off…

8:22 AM Douglas takes the stage first and mentions that he ran into many people yesterday who could not decide what session to attend for a time slot, because there were multiple that were of interest.  Douglas’ suggestion?  Buy the DVDs.  It’s a good recommendation, although remember that if you’ve attended the Summit, you’ll also have access to the sessions online.  If you’ll want to watch the sessions when you’re not on the internet, you’ll need to buy the DVDs.

Douglas is talking about finance, as that’s his role, and points out that this year they’ve hired an additional Community Evangelist.  Karla Landrum was the first Community Evangelist, but she has been so busy that another person was needed.  This past quarter Niko Neugebauer was hired and I know he is thrilled about joining the team.  Karla and Niko provide amazing support to the SQL Server community – if you see one of them this week please say thank you!

Douglas has spent some time talking about the revenue and expenses for PASS. The Summit is the largest revenue generator for PASS – not surprising with over 3000 attendees here this week.

8:35 AM Tom LaRock takes the stage and he gets to announce the PASSion Award Winner, who is Jen Stirrup.  Congratulations Jen!!

Next year’s Summit will be in Charlotte, NC, from October 15-18. Early registration is already available at a very discounted rate.

Tonight is the Community Appreciation Party at Seattle’s Experience Music Project and it starts at 7 PM (ends at 10 PM).  I have never been to EMP and look forward to checking it out!

8:55 AM Quentin Clark has taken the stage and now we’re getting a demo of some new PDW functionality.  I know that big data is a hot topic and of interest right now…but I’m still an engine girl at heart.  Julie Strauss from Microsoft is on stage for a demo, and she’s talking about movie data (I want that data).  There’s a side conversation with Quentin – there’s obviously an ongoing joke about how it’s hard for her to pronounce some words (she’s a Viking).  And then she says, “We don’t talk we just do.”  Love it.

Ok, I need to head off to get ready for my session.  Did I mention that I went to Bob Ward’s session on the SQLOS yesterday?  It was three hours of fantastic technical knowledge!

PASS Summit 2012: Day 1

Happy first day of Summit everyone!  Ok, if you attended any pre-conference sessions then today probably does not feel like the first day, but it’s the first day with a keynote and general sessions.  My day started with a 3 mile-ish #sqlrun down along the waterfront.  It was a great way to kick off the day, huge thanks to my roommate Jes Borland for organizing!  Today I present my DBCC Commands: The Quick and the Dangerous session at 10:15 AM PST in room 618-620 and I hope to see you there!

I was fortunate to have been selected to sit at the blogger’s table this week, so I will post through the keynote sessions today and tomorrow.  There is no keynote session on Friday, but I plan to attend Dr. DeWitt’s session, Big Data Meets SQL Server, and will try to live blog that as long as I have decent connectivity.

For those of you not in Seattle, don’t despair!  You can watch this morning’s keynote, and catch many other sessions (yes, actual sessions!) and events from the Summit on PASS TV.  I kid you not: PASS TV.  Open up your favorite browser and go to the main PASS TV page and you can connect from there.  I will be on PASS TV tomorrow morning before the keynote so make sure to tune in!  I wonder if they will have someone to do my hair and make up?  Can I get a rider that includes coffee and M&Ms?

And if you find the PASS site a little busy, try the Microsoft SQL Server site, which will also live stream today and tomorrow’s keynotes, and Dr. DeWitt’s session on Friday.  If you’re reading this later in the day, check out the main page for the Summit to catch up on what you missed.

Finally…before we start, check out what I noticed on the back of my Summit Program Guide:

[Photo: the back of my Summit Program Guide]
Interesting stuff!  A Business Analytics conference hosted by PASS in April – the PASS teams are definitely keeping busy.  I will post more details as I have them.

What’s on tap for today?  I know we will hear from PASS President Bill Graziano, as well as Ted Kummert, Corporate Vice President, Data Platform Group, Microsoft.  Now, I have also been told that there is a huge announcement coming about SQL Server.  This is not a shocker, there is always a huge announcement during the keynote.  Any guesses?  It will probably be up first, so stay tuned! 

8:15 AM – lights have dimmed, and here we go!  Testimonials from various members of the BoD are played first (I am a sucker for video montages, BTW)…

8:18 AM – Bill Graziano takes the stage to welcome everyone.  There are 3894 Summit attendees this year, at the 14th Summit, with 57 different countries represented.  Bill’s highlighting PASS TV and the fact that it will stream the keynotes, sessions and interviews with different members of the community.  There are 120,000 members of PASS, and Bill has challenged everyone to speak in the next year.  If that’s something that interests you – I encourage you to reach out to local speakers (whether you know them personally or not) and ask for help getting started.  As a speaker, I would be more than willing to work with someone to help them start presenting within the SQL Server community.  Bill mentions that on Friday there will be a Board of Directors Q&A in room 401 at 9:45 AM.

8:24 AM – Bill mentions the Virtual Chapters – woohoo!  We are up to 20 different Virtual Chapters which is fantastic.  They are a great resource for members of the community and I recommend checking them out.  And of course, in addition to Local Chapters, there are SQLSaturday opportunities for training.  There have been 79 SQLSaturdays this year, with many of them held overseas.  PASS is growing – in just the couple years I’ve been involved, I am amazed at the increased number of events and members.

Bill confirms the Business Analytics Conference in April!  Registration is now open…but I cannot find the site…yet…

Hm, Bill mentions an amendment to the by-laws that would designate certain spots on the board based on location around the world.  Look for a proposal for these changes in the next 60 days. 

Bill thanks the sponsors for Summit.  As I mentioned before, definitely stop by the Exhibit Hall if you can.  It’s a great place to try out technology and software that might help your SQL Server solution, and again, the sponsors help make the Summit happen.  Bill highlights Microsoft’s involvement at Summit, which includes the SQL Server Clinic, hosted by the Customer Advisory Team (CAT), Developer Chalk Talks, Hands on Labs, Certification Training, Focus Groups and Solutions Theatre.  There are over 300 engineers from Microsoft here at Summit.  If you’re having technical issues in your solution, I highly recommend taking advantage of Microsoft’s presence here at Summit.

8:39 AM – Ted Kummert takes the stage.  I’m going to guess the big announcement is coming soon, and I’m thinking it’s going to be about Hekaton, and you can find more information here as well.  Ted mentions that SP1 for SQL Server 2012 is now available.  There was no applause from the crowd.  Interesting.  Ted’s talking about big data and approaching the tipping point (which of course reminds me of Kimberly Tripp and statistics…stay focused on big data, no stats!).

Ted’s talking about how, if everything fits in memory and you design around that assumption, huge transformations in business process are possible.  It’s one thing to accelerate queries, but if you want to accelerate the business process, you have to consider all workloads involved – desktop all the way to the data warehouse.  You have to think about the entire architecture as it relates to the business process.  Today they’re announcing an in-memory transactional capability for SQL Server – and there you go, Ted says Hekaton and that it will ship in the next major release of SQL Server.  There’s the big announcement, kids.

8:54 AM – Let’s see some demos!  Let’s see what Hekaton does when you have a highly transactional, in-memory database.  We’re starting with OLTP…the demo processes a bunch of sales orders.  It starts around 2,000 transactions/sec, with CPU around 33%, but there’s a ton of latches.  Switching over to Hekaton (note, what we’re seeing is NOT the final experience), you can optimize on a per-table basis…SalesOrderDetail gets converted so it can run in memory with no application changes, but all the data for the SalesOrderDetail table and its indexes are in memory.  Running the tool again and looking at performance, it’s a 10x improvement (with no change in the application or hardware): more CPU (near 90%) but no latches, at 10,000 transactions/sec.

Hekaton can also look at code (like stored procedures) to see if it can be optimized.  The tool picks a specific stored procedure to optimize, so we migrate it over to Hekaton, which recompiles it so the stored procedure runs natively in memory (no code changes).  And now…performance is over 50,000 transactions/sec (but we don’t get to see CPU).

Oh, here comes information about Columnstore Indexes…  Before SQL 2012 there were lots of data warehouse queries that would result in full table scans.  Columnstore Indexes provide huge performance gains (taking a one-plus-minute query down to a second).  And in the next major release of SQL Server these indexes will be UPDATABLE (woohoo, this is HUGE!) AND they can be CLUSTERED.  Ohhh, shiny.  Ohhhh, the opportunity to incorrectly architect.  Just because you can create updatable Columnstore Indexes and use Hekaton doesn’t mean you no longer have to consider schema design.  You still need to architect!

9:03 AM – Customer testimonial from BWIN, who has been working with Hekaton – they are the largest online gaming site in the world and require the best performance to stay at the top of their game.  The first test run took transactions/sec from 15,000 to over 250,000.  Holy cow.  The next release of PDW will be available in the first half of 2013.  Time for a demo of PDW from Christian Kleinerman…and he logs in as sa.  *sigh*  But he’s funny.  The changes in PDW take a query with a terabyte of data from 2 minutes to seconds.  The technologies used here are Columnstore and PDW.  Now Christian is talking about PolyBase, the ability to query Hadoop data through an interface in PDW.  Interesting.

9:24 AM – PolyBase is also going to be in the next release of PDW – it is built for big data – along with xVelocity and Columnstore improvements.

9:32 AM – Amir Netz takes the stage for some BI demos.  I love Amir’s demos because he uses movie data.  You all know how much I love movies.  I really, really want a copy of the database he uses…  Oh, and Amir is funny, and he talks fast, and he is dynamic, and has a great accent.  He is fun to watch on stage (I know that’s not technical, but I appreciate speakers who keep the audience engaged).  I do love Amir’s demos, though I know so little about PowerView, BI and so on.

9:45 AM – ok friends, I have a session in a half hour so I am ducking out to get ready.  I’ll try to update a bit more later, but otherwise, catch you tomorrow morning!  And a quick reminder that tonight is the Exhibitor Reception, which starts at 6:15 PM and goes until 9:15 PM.  Again, please take time to visit the sponsors throughout the week – they provide amazing support for the PASS Summit and for numerous other events in the SQL Server Community throughout the year.


RML Utilities and SQL Server 2012

RML Utilities is a free tool created by Microsoft and used by the SQL Server support team.  It’s available for download, and if you work with SQL Trace data at all I’d recommend giving it a look.  I know, I know, Trace is deprecated in SQL Server 2012.  But not everyone is running 2012, so Trace files will still exist for a while.

I first heard about RML Utilities years ago from Andrew Kelly ( b | t ) and have used it to analyze trace data from customers, which is really just a fraction of what it can do.  Today I’ve been working with it to compare two trace files.  Yes, that’s right…compare two trace files.  Imagine you run a load test in your production environment, then run that same test in your development environment which might have different hardware or updated code.  You can capture PerfMon metrics to compare Memory, CPU and I/O…but what about the performance of individual queries?  You can get that from the DMVs, but it’s really nice if you can look at the information side by side.  And I don’t mean two instances of Profiler open, I mean really compare the data.  RML Utilities will do that for you!
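Processing a trace with readtrace.exe boils down to a single command.  As a sketch (the paths, instance, and database names here are just examples, matching the log output shown later in this post):

```shell
readtrace -IC:\PerfLogs\Trace\File_RML_2008R2.trc -oC:\SQLskills\RMLOutput -SWIN2008R2PS\SQL2012 -dBaselineData
```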

But, before I could get RML Utilities to do that, I had to get it to process a trace file.  This was my issue today.  I had two VMs, one with 2008R2 and 2012 installed, one with only 2012 installed.  The readtrace.exe file is used to process trace files, and the commands require the trace file (input), an output directory, a SQL Server instance and a database.  I had everything set up, but my process would fail with this set of messages:

09/27/12 15:49:44.262 [0X00000AA4] Readtrace a SQL Server trace processing utility.
Version 9.01.0109 built for x64.
Copyright (c) Microsoft Corporation 1997-2008. All rights reserved.
…extra text removed…
09/27/12 15:49:44.263 [0X00000AA4] -IC:\PerfLogs\Trace\File_RML_2008R2.trc
09/27/12 15:49:44.263 [0X00000AA4] -oC:\SQLskills\RMLOutput
09/27/12 15:49:44.263 [0X00000AA4] -SWIN2008R2PS\SQL2012
09/27/12 15:49:44.263 [0X00000AA4] -dBaselineData
09/27/12 15:49:44.263 [0X00000AA4] Using language id (LCID): 1024 [English_United States.1252] for character formatting with NLS: 0x00060101 and Defined: 0x00060101
09/27/12 15:49:44.263 [0X00000AA4] Attempting to cleanup existing RML files from previous execution
09/27/12 15:49:44.264 [0X00000AA4] Using extended RowsetFastload synchronization
09/27/12 15:49:44.264 [0X00000AA4] Establishing initial database connection:
09/27/12 15:49:44.264 [0X00000AA4] Server: WIN2008R2PS\SQL2012
09/27/12 15:49:44.264 [0X00000AA4] Database: BaselineData
09/27/12 15:49:44.265 [0X00000AA4] Authentication: Windows
09/27/12 15:49:44.266 [0X00000AA4] Unable to connect to the specified server.
09/27/12 15:49:44.266 [0X00000AA4] ERROR: Performance analysis failed to initialize.  See previous errors and correct the problem before retrying.
09/27/12 15:49:44.266 [0X00000AA4] *******************************************************************************
* ReadTrace encountered one or more ERRORS. An error condition typically      *
* stops processing early and the ReadTrace output may be unusable.            *
* Review the log file for details.                                            *
09/27/12 15:49:44.266 [0X00000AA4] ***** ReadTrace exit code: -9

My first thought was that it was having an issue with SQL Server 2012.  The documentation does not list SQL Server 2012 as a supported version.  So I tried it on my local machine against a 2012 instance, and it worked.  I reviewed the error message: “Unable to connect to the specified server.”  This made no sense, so I verified that I could connect with sqlcmd. Then I tried it against my other VM, thinking maybe it was something with VMWare.  That worked.  Then I started comparing the output.  From the successful process, I had the following:

09/27/12 15:59:07.349 [0X00000CD4] Readtrace a SQL Server trace processing utility.
Version 9.01.0109 built for x64.
Copyright (c) Microsoft Corporation 1997-2008. All rights reserved.
…extra text removed…
09/27/12 15:59:07.350 [0X00000CD4] -IC:\PerfLogs\Trace\File_RML_2008R2.trc
09/27/12 15:59:07.350 [0X00000CD4] -oC:\SQLskills\RMLOutput
09/27/12 15:59:07.350 [0X00000CD4] -SWIN2008R2-1\SQL2012
09/27/12 15:59:07.350 [0X00000CD4] -dBaselineData
09/27/12 15:59:07.350 [0X00000CD4] Using language id (LCID): 1024 [English_United States.1252] for character formatting with NLS: 0x00060101 and Defined: 0x00060101
09/27/12 15:59:07.350 [0X00000CD4] Attempting to cleanup existing RML files from previous execution
09/27/12 15:59:07.351 [0X00000CD4] Using extended RowsetFastload synchronization
09/27/12 15:59:07.351 [0X00000CD4] Establishing initial database connection:
09/27/12 15:59:07.351 [0X00000CD4] Server: WIN2008R2-1\SQL2012
09/27/12 15:59:07.351 [0X00000CD4] Database: BaselineData
09/27/12 15:59:07.351 [0X00000CD4] Authentication: Windows
09/27/12 15:59:07.390 [0X00000CD4] Using SQL Client version 10
09/27/12 15:59:07.390 [0X00000CD4] Creating or clearing the performance database
09/27/12 15:59:07.892 [0X00000CD4] Processing file: C:\PerfLogs\Trace\File_RML_2008R2.trc (SQL 2008)
09/27/12 15:59:07.892 [0X00000CD4] Validating core events exist
09/27/12 15:59:07.892 [0X00000CD4] Validating necessary events exist for analysis
…more extra text removed…

My issue was that RML Utilities uses version 10 of the Client.  On the VM I only had SQL 2012 installed, which is version 11.  Once I downloaded and installed the 2008R2 Client (from here, you just need sqlncli_amd64.msi for an x64 machine), everything ran fine.  Thus, even though SQL 2012 is not listed as a supported version for RML Utilities, it looks like it will run as long as you have version 10 of the Client installed.  And as for comparing those files?  That’s for another post!

A Poll: Recommendations for New Speakers

This morning I wrote a post on my original blog about presenting. I’m cross-posting here because I want to ask other speakers, and my SQLskills colleagues, for the number one piece of advice they provide to new speakers. Write a quick post, or leave a comment. New presenters want to hear from you, and I know that I can always get better. Don’t over think it; just share what comes to mind first. Go!

Customizing the Default Counters for Performance Monitor

I am a huge fan of Performance Monitor (PerfMon). Yes, I know, that’s a geeky statement, but I don’t care. There is such a wealth of information available from PerfMon; you can use it to look at performance real-time, or to capture metrics about performance over time.  And, the functionality is built in to Windows. It’s there no matter what Windows server you’re working on – and when you work on a lot of different servers having a tool you can consistently rely on is extremely useful.

But one challenge I had with PerfMon that took me a while to figure out was how to change the default settings. When I start PerfMon, it only shows the % Processor Time counter. Now that’s a useful counter, especially when there’s a performance issue going on, but I also want to look at other counters such as memory utilization and disk latency. When the system is having a problem, I dislike spending an extra minute or two to add all the counters I want to see. I want them to just be there when I open PerfMon.

For those of you running Windows 2008 and higher (and Windows Vista and higher for workstations), there is an easy solution.  If you’re on Windows XP or Windows 2003, I have a solution for you, too; it just takes a few extra steps.

Windows 2008+ and Windows Vista+

On your local machine or server, select Start | Run and then open Performance Monitor with the /sys command-line option: perfmon /sys

This opens Performance Monitor in a stand-alone mode (if you enter just perfmon, you get additional options such as Data Collector Sets and Reports).  Within PerfMon, add the counters you want to monitor.  You can either click on the green plus (+) to add counters, or right-click in the graph and select Add Counters…  Once you have added the counters, close PerfMon.  Trust me.  Just close it.

Go back to Start | Run and enter perfmon /sys again.  The counters you added should be selected.  Running Process Monitor on my machine showed that PerfMon saved a configuration file (Perfmon.PerfmonCfg) in Users\<username>\AppData\Local.  The location may vary depending on OS or roaming profiles (if you cannot find it, simply run Process Monitor and filter on perfmon.exe to find where it writes the file).

You can take this one step further by creating multiple .PerfmonCfg files – and they can be stored anywhere.  Once you have selected the counters in PerfMon, select File | Save Settings As… and create a new .PerfmonCfg file with the appropriate name, either locally or in a share.  Modify the counters as needed, then save the configuration as a different .PerfmonCfg file.  When you want to launch PerfMon for a specific .PerfmonCfg file, just double-click on the file.  A great benefit of the different files is that you can share them between servers.  However, take note of how you add the counters.  If you add a counter for a specific drive letter that doesn’t exist on every server, the counter will appear in the list, but no data will appear in the graph.

Windows 2003 and Windows XP

If you’re still running Windows XP or Windows 2003, don’t despair, I have another method.

On your local machine or server, open Performance Monitor (Start | Run | perfmon) and add the counters you want to monitor.  Once you have the counters added, right-click in the graphing area, select Save As…, and save the file as a .html file. Then close PerfMon.

Start up PerfMon again; you should see that you only have the % Processor Time counter. Open the .html file you just saved in a text editor. Highlight everything (CTRL + A) and then copy it (CTRL + C). Switch back to PerfMon, click in the graph area, and paste (CTRL + V); you can also click on the paste icon in the toolbar. The counters will immediately show up.  As with the configuration file, you can create multiple .html files to save locally or share.

Many routes to the same goal

Having the ability to open PerfMon with specific counters already selected is one of those little things that just makes my life easier.  Because there are usually many different ways to accomplish the same task in Windows (think of how you open Windows Explorer – do you right-click on the Start button and select Open Windows Explorer, or do you use CTRL + E, or something else?), I’m interested to know if there are other methods people have used to get counters to show up by default.  Feel free to leave a comment or send me an email if you know of other options.  I hope this helps!

Figuring Out When Wait Statistics Were Last Cleared

One of the topics covered during IE2 is Waits and Queues, and this week one of the attendees asked if there was a way to know when wait statistics were last cleared. It turns out there is a post about it on the SQLOS & Cloud Infrastructure Team Blog, and what you do is look for the SQLTRACE_INCREMENTAL_FLUSH_SLEEP wait_type in sys.dm_os_wait_stats. The value of wait_time_ms for that row is the time since wait stats were last cleared.

Since I talk about clearing wait stats in my DBCC session I decided this information would be useful to include, and then decided to just write a query to figure it out.  The query is below, as is a query to show the last time the instance was restarted.  Edit: Note that this query is only for SQL Server 2008R2 and higher.  The SQLTRACE_INCREMENTAL_FLUSH_SLEEP wait_type did not exist in previous versions of SQL Server.  If I can find an alternate wait_type for use, I will post back.

Just to be clear, I am not advocating that you immediately clear wait stats on your server.  In fact, I’d be more interested to know whether wait stats have been cleared since the instance was last restarted.  Let me back up a bit…  If you’re not familiar with wait statistics, I suggest starting with Paul’s post, Wait statistics, or please tell me where it hurts.  Whether you have performance issues on your server or not, I highly recommend looking at wait stats regularly to understand what your “normal” waits are.  If you are having issues, waits can be a starting point to sorting out what’s causing the problem.  But wait stats should not be viewed in a vacuum – always use other counters or information to correlate findings.  Once you have a basic understanding of your wait stats, you may decide to capture them on a regular basis.  But wait stats are cumulative, and will only reset when you restart the instance, or if you clear them manually.  They can be cleared using:

DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
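
If you do clear wait stats on a schedule, it’s worth snapshotting them to a table first so the history isn’t lost.  A minimal sketch, assuming a dbo.WaitStatsHistory table of my own invention (it is not a system object, so create it once first):

```sql
/* hypothetical history table - create once */
CREATE TABLE dbo.WaitStatsHistory (
      capture_date DATETIME NOT NULL DEFAULT GETDATE(),
      wait_type NVARCHAR(60) NOT NULL,
      waiting_tasks_count BIGINT NOT NULL,
      wait_time_ms BIGINT NOT NULL,
      signal_wait_time_ms BIGINT NOT NULL
);

/* snapshot the current waits, then clear */
INSERT INTO dbo.WaitStatsHistory
      (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats;

DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
```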

There are a variety of reasons to clear wait stats, but it shouldn’t be done arbitrarily.  You may clear them because you’ve made a change to the system and want to understand if and how that change affects wait statistics.  You may clear wait stats every morning at 5 AM because backups and other maintenance tasks run at night, but you want to understand performance during business hours when users are on the system, and therefore decide to filter out (or investigate separately) waits from those tasks.  In the latter case, you probably have a scheduled job to clear the wait stats, and other DBAs may or may not know about this job.  If another DBA decides to clear wait stats for some reason, without telling you, they may affect the wait information you’ve been capturing.  As such, I would always verify with colleagues whether or not wait stats are cleared regularly, and this script will give you an idea of whether they are being cleared or not.  However, even if they haven’t been cleared since the last restart, I would still perform some due diligence and verify that clearing them won’t alter anyone else’s data.  I am interested to hear different ideas for how this information could be used, so feel free to post back or email me.  And if you’re not currently looking at your wait stats regularly, please start!

/* when were wait stats last cleared? */
SELECT
      DATEADD(ms, -[wait_time_ms], GETDATE()) AS [Date/TimeCleared],
      CASE
            WHEN [wait_time_ms] < 1000 THEN CAST([wait_time_ms] AS VARCHAR(15)) + ' ms'
            WHEN [wait_time_ms] BETWEEN 1000 AND 60000 THEN CAST(([wait_time_ms]/1000) AS VARCHAR(15)) + ' seconds'
            WHEN [wait_time_ms] BETWEEN 60001 AND 3600000 THEN CAST(([wait_time_ms]/60000) AS VARCHAR(15)) + ' minutes'
            WHEN [wait_time_ms] BETWEEN 3600001 AND 86400000 THEN CAST(([wait_time_ms]/3600000) AS VARCHAR(15)) + ' hours'
            WHEN [wait_time_ms] > 86400000 THEN CAST(([wait_time_ms]/86400000) AS VARCHAR(15)) + ' days'
      END AS [TimeSinceCleared]
FROM [sys].[dm_os_wait_stats]
WHERE [wait_type] = 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP';

/* check SQL Server start time - 2008 and higher */
SELECT [sqlserver_start_time]
FROM [sys].[dm_os_sys_info];
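
To answer the “have wait stats been cleared since the last restart?” question in a single pass, the two queries above can be combined.  A sketch (the one-minute fudge factor is my own, to allow for the small gap between instance start and the first wait registering):

```sql
/* have wait stats been cleared since the instance last restarted? */
SELECT
      si.[sqlserver_start_time],
      DATEADD(ms, -ws.[wait_time_ms], GETDATE()) AS [wait_stats_cleared],
      CASE
            WHEN DATEADD(ms, -ws.[wait_time_ms], GETDATE()) >
                 DATEADD(mi, 1, si.[sqlserver_start_time])
                  THEN 'Wait stats cleared since restart'
            ELSE 'Wait stats NOT cleared since restart'
      END AS [cleared_status]
FROM [sys].[dm_os_sys_info] si
CROSS JOIN [sys].[dm_os_wait_stats] ws
WHERE ws.[wait_type] = 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP';
```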

New Statistics DMF in SQL Server 2008R2 SP2

There is a wonderful new DMF available in Service Pack 2 for SQL Server 2008R2: sys.dm_db_stats_properties.  There are many other DMFs and DMVs that include “stats” in their name – sys.dm_db_index_physical_stats, sys.dm_exec_query_stats and sys.dm_os_wait_stats to name a few.  But how many of those are about the statistics that describe data distribution in a column?  Up until now, not a one, and as someone who really likes statistics, you might start to understand why this is my new favorite DMF.

First, you will need SP2 to utilize this DMF, and note that it is not available in 2012 yet.  (Edit: After verifying with Microsoft, it will be available in SP1 of 2012.)  If you haven’t downloaded SP2 for 2008R2 yet, you can get it here.  Once you have that installed, we can review what information we can gather from sys.dm_db_stats_properties.  The basic syntax is as follows:

sys.dm_db_stats_properties (object_id, stats_id)

Ok, so we need the ID of the object and the statistic.  I’m using a copy of an AdventureWorks database, and for this example I’ll use the Sales.SalesOrderDetail table.  We can get the IDs we need with a query against sys.objects and sys.stats:

SELECT sch.name + '.' + so.name, so.object_id, ss.name, ss.stats_id
FROM sys.stats ss
JOIN sys.objects so ON ss.object_id = so.object_id
JOIN sys.schemas sch ON so.schema_id = sch.schema_id
WHERE so.name = N'SalesOrderDetail';

Let’s take object_id and stats_id and plug them into the DMF:


SELECT *
FROM sys.dm_db_stats_properties (642101328, 1);

And my output:

[Screenshot: sys.dm_db_stats_properties output for Sales.SalesOrderDetail]

You should recognize some of the information you could previously only find in DBCC SHOW_STATISTICS:

last_updated: the last time the statistic was updated

rows: the number of rows in the table at the time of the update; if there is a filter applied, this number will be the number of rows that meet the filter criteria

rows_sampled: the number of rows actually sampled

steps: the number of steps in the histogram

unfiltered_rows: the number of rows in the table at the time of the update

As with DBCC SHOW_STATISTICS, the only way you know whether a full scan was performed when the statistic was updated is to compare rows against rows_sampled. 

Then, there’s one more column in the output that could prove incredibly useful for some people:

modification_counter: the number of modifications for the column which leads the statistic, since the last update

This means you no longer have to go to sys.sysrscols to monitor changes!  This is excellent if you want to do targeted optimizations for volatile tables.  I’ve talked about this many times with clients, and it has always been a challenge to easily find the statistics most affected by inserts, updates and deletes.  You can monitor index usage using sys.dm_db_index_usage_stats, but the modification_counter in sys.dm_db_stats_properties really provides the ability to drill down into specific statistics and target updates appropriately.  In addition, sys.dm_db_stats_properties allows you to quickly find statistics that are outdated.  Previously, the only way to do this was to roll through DBCC SHOW_STATISTICS for every statistic, logging the output to a table.  With sys.dm_db_stats_properties we can capture this information with simpler methods.  Here’s a sample query you can use to find the statistics that haven’t been updated in over 30 days:


SELECT
      sch.name + '.' + so.name AS "Table",
      ss.name AS "Statistic",
      CASE
            WHEN ss.auto_created = 0 AND ss.user_created = 0 THEN 'Index Statistic'
            WHEN ss.auto_created = 0 AND ss.user_created = 1 THEN 'User Created'
            WHEN ss.auto_created = 1 AND ss.user_created = 0 THEN 'Auto Created'
            WHEN ss.auto_created = 1 AND ss.user_created = 1 THEN 'Not Possible?'
      END AS "Statistic Type",
      CASE
            WHEN ss.has_filter = 1 THEN 'Filtered Index'
            WHEN ss.has_filter = 0 THEN 'No Filter'
      END AS "Filtered?",
      CASE
            WHEN ss.filter_definition IS NULL THEN ''
            WHEN ss.filter_definition IS NOT NULL THEN ss.filter_definition
      END AS "Filter Definition",
      sp.last_updated AS "Stats Last Updated",
      sp.rows AS "Rows",
      sp.rows_sampled AS "Rows Sampled",
      sp.unfiltered_rows AS "Unfiltered Rows",
      sp.modification_counter AS "Row Modifications",
      sp.steps AS "Histogram Steps"
FROM sys.stats ss
JOIN sys.objects so ON ss.object_id = so.object_id
JOIN sys.schemas sch ON so.schema_id = sch.schema_id
OUTER APPLY sys.dm_db_stats_properties(so.object_id, ss.stats_id) AS sp
WHERE so.type = 'U'
AND sp.last_updated < GETDATE() - 30
ORDER BY sp.last_updated;

You can take this query one step further and modify it to show statistics that have a modification_counter value above a specific threshold (WHERE sp.modification_counter > 10000) or statistics that don’t exist at all (WHERE sp.last_updated IS NULL).  At its core, this DMF can make statistics management much easier, and in a future post I will share a script that can be used to update statistics based on different factors such as last updated date and number of modifications.  Until then, have fun playing with this in your environment!
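
As a starting point, here is one way the modification-counter variation might look.  This is a sketch only; the 10,000-row threshold is an arbitrary number of my choosing, and the right value depends on your table sizes:

```sql
/* statistics with more than 10,000 modifications since their last update */
SELECT
      sch.name + '.' + so.name AS "Table",
      ss.name AS "Statistic",
      sp.last_updated AS "Stats Last Updated",
      sp.rows AS "Rows",
      sp.modification_counter AS "Row Modifications"
FROM sys.stats ss
JOIN sys.objects so ON ss.object_id = so.object_id
JOIN sys.schemas sch ON so.schema_id = sch.schema_id
OUTER APPLY sys.dm_db_stats_properties(so.object_id, ss.stats_id) AS sp
WHERE so.type = 'U'
AND sp.modification_counter > 10000
ORDER BY sp.modification_counter DESC;
```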