Resources and Q&A from our SQLConnection [PRE] pre-conference workshop: Database Maintenance – from Planning to Practice to Post-Mortem

OK, well, the first day is over and we’re starting to relax… just had a nice meal in our room and we’re each off to do a blog post (or what might turn into a couple :) about the things we discussed in our full-day workshop today.


So, it was a great day and a great way to start the Connections event. We had roughly 170 people attend our Sunday workshop that started at 9am, which is especially impressive in Vegas!! And, it felt like everyone was ready with questions, and a few folks (especially those with jet lag) said that they didn’t think they’d make it through the day but were then surprised at how fast it went and that we managed to keep them awake :)…


As for the questions, we answered a lot of them during the session but, as always happens, we each remembered additional references, sites, and/or details that we wanted to post after the fact – this post is the result. I’ll try to put headers on the sections but I think this will be a long one!


Here’s a link to Paul’s Q&A blog post from today’s session:
https://www.sqlskills.com/blogs/paul/2007/11/05/ConferenceQuestionsPotPourri1IndexesStatsCorruptionAndEnterpriseonlyFeatures.aspx


So, have fun and we both look forward to seeing you tomorrow in the pre-conference workshop on Disaster Recovery – from Planning to Practice to Post-Mortem. The rest of this post contains random but helpful stuff (and in NO particular order at all). And, there’s at least one more post coming after this one as I still have more to add!


Resource Reference List



  • KB#909369: SQL Server 2000 bug where checkpoint does not appropriately clear the inactive portion of the log
  • KB#329526: File allocation extension in SQL Server 2000 (64-bit) and SQL Server 2005 (this allows 256KB block allocations)
  • KB#328551: PRB: Concurrency enhancements for the tempdb database (this refers to trace flag -T1118 for SQL Server 2000 AND how to create tempdb on multiple files on multi-processor machines – see the sketch after this list). A good and very complementary read is the “Working with tempdb” whitepaper. AND, the primary author of this whitepaper (Sunil) will be here at Connections this week, so there’s another session to visit.
  • KB#224071: How to move SQL Server databases to a new location by using Detach and Attach functions in SQL Server
  • Performance Dashboard Reports (only in SQL Server Management Studio for SQL Server 2005 SP2)
  • Sysinternals ZoomIt tool: this is what I was using to magnify various parts of the screen, etc.
  • SQL Server Customer Advisory Team blog and specifically the entry on DMVStats. Also, there’s an excellent whitepaper called Troubleshooting Performance Problems in SQL Server 2005, which has numerous DMV queries and helpful details on troubleshooting.
  • SQLskills Whitepapers list
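
As promised above, here’s a minimal sketch of the tempdb piece from KB#328551. All the names, paths, and sizes here are hypothetical placeholders – the general guidance in the KB is one tempdb data file per processor, all the same size:

  -- Hypothetical example: add a second tempdb data file
  -- (file name, path, and size are placeholders - adjust for your system).
  ALTER DATABASE tempdb
  ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\tempdb2.ndf', SIZE = 1GB, FILEGROWTH = 0);
  GO
  -- Trace flag 1118 is normally enabled as a SQL Server startup parameter (-T1118);
  -- to test it without a restart, it can be turned on globally:
  DBCC TRACEON (1118, -1);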

Progress Report: Online Index Operation
As for the details on this – you want to capture this event with these columns: EventClass, TextData, EventSubClass, ObjectID, ObjectName, and BigintData1.


And, for the EventSubClass, there are multiple values that can be returned. What I think is most interesting is that many of these events return a value for only one of ObjectID or ObjectName… so, beware of filtering!


  • 1 – Start (both ObjectID and ObjectName)
  • 2 – Stage 1 start (null for both ID/Name)
  • 6 – Insert row count… (ObjectID only)
  • 3 – Stage 1 end (null for both ID/Name)
  • 7 – Complete (both ObjectID and ObjectName)


EventSubClass 6 is definitely the most helpful. The BigintData1 column shows the current row number being processed (essentially), so you can gauge roughly how far you’ve gone as well as how long you have to go. Here’s a screen shot to give you some insight into what you will see: OnlineProgressReport.pdf (690.78 KB).
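
If you capture this with a server-side trace rather than watching live in Profiler, you can query the results afterwards. A rough sketch – the file path is a placeholder, and you should verify the event class ID against sys.trace_events on your build:

  -- Sketch: read a captured trace file and pull out the online index progress events.
  SELECT EventSubClass, ObjectID, ObjectName, BigintData1, StartTime
  FROM fn_trace_gettable(N'C:\Traces\OnlineIndex.trc', DEFAULT)
  WHERE EventClass = 190  -- assumption: Progress Report: Online Index Operation
                          -- (confirm the ID in sys.trace_events)
  ORDER BY StartTime;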


Some operations (e.g. DBCC CHECKDB (and related), DBCC SHRINKFILE, ALTER INDEX…REORGANIZE) produce a value for the percent_complete column in sys.dm_exec_requests. All of these are helpful for assessing where you’re at…
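
For example, while one of those operations is running, something like this shows its progress (estimated_completion_time is in milliseconds):

  -- Check progress of long-running operations that report percent_complete.
  SELECT session_id, command, percent_complete,
         estimated_completion_time / 60000.0 AS est_minutes_remaining
  FROM sys.dm_exec_requests
  WHERE percent_complete > 0;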


Index Creation/Rebuild Order
Paul’s written about this AND he’s linking to some relevant posts in his blog post from today’s session, but here’s a VERY quick highlight of a few things we chatted about:



Index CREATION order DOES matter: you should always create the clustered index first and then create the nonclustered indexes (as nonclustered indexes depend on the clustering key).
Index REBUILD order does NOT matter, as the physical location of the data isn’t interesting to the nonclustered indexes; they reference the data by the row’s clustering key.
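
As a quick illustration (the table and index names here are made up):

  -- Creation order matters: clustered index first, then nonclustered.
  CREATE CLUSTERED INDEX OrdersCL ON dbo.Orders (OrderID);
  CREATE NONCLUSTERED INDEX OrdersNCCustomer ON dbo.Orders (CustomerID);

  -- Rebuild order does not matter; nonclustered indexes reference rows
  -- by clustering key, not physical location, so this is fine in any order:
  ALTER INDEX ALL ON dbo.Orders REBUILD;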


VLFs – Virtual Log Files… too many (and typically very small) VLFs, or possibly VLFs that are too large…
We had some great discussions around VLFs and I know I’ve posted a bit on this in the past but I’m not sure I’ve posted this much detail. So, here are a few relevant points.


First, the size and number of VLFs are determined by the size of the “fragment” added to the transaction log. The “fragment” is added at the time the log is created or anytime the log grows (either manually or automatically). The number of VLFs can be predicted from the following quick guide:



  • If the fragment is less than 64MB, then up to 4 VLFs will be added (I say “up to” because really small fragments – those under 1MB – add even fewer VLFs, and I can’t remember (nor do I care :) what the exact cut-offs are below 1MB).
  • If the fragment is greater than or equal to 64MB and less than 1GB, then 8 VLFs will be added.
  • If the fragment is greater than or equal to 1GB, then 16 VLFs will be added.

So, if you create a database with a 100MB log (just as a simple example), you will get 8 VLFs of 12.5MB each… if you then increase the log to 150MB, the 50MB growth will add 4 more VLFs. This really isn’t all that bad and, in general, I recommend that you have fewer than 100 VLFs. Often I’ll find folks who have thousands, and this results in VLF fragmentation. I blogged about this originally here: https://www.sqlskills.com/blogs/kimberly/2005/06/25/8StepsToBetterTransactionLogThroughput.aspx and the steps to help minimize it are there as well. Also, as an update to that post – use DBCC SHRINKFILE with the NOTRUNCATE option, as it appears to have special meaning on the transaction log… but, I still need to investigate this one a bit more.
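
You can see this for yourself with a quick sketch like the one below (paths and names are placeholders). DBCC LOGINFO is undocumented but widely used; it returns one row per VLF:

  -- Create a database with a 100MB log (paths are placeholders).
  CREATE DATABASE VLFDemo
  ON (NAME = N'VLFDemo_data', FILENAME = N'C:\Data\VLFDemo.mdf', SIZE = 10MB)
  LOG ON (NAME = N'VLFDemo_log', FILENAME = N'C:\Data\VLFDemo.ldf', SIZE = 100MB);
  GO
  USE VLFDemo;
  GO
  DBCC LOGINFO;  -- expect 8 rows = 8 VLFs of 12.5MB each
  GO
  ALTER DATABASE VLFDemo MODIFY FILE (NAME = N'VLFDemo_log', SIZE = 150MB);
  GO
  DBCC LOGINFO;  -- the 50MB growth adds 4 more VLFs (12 rows total)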


Finally, if you pre-create too large a transaction log, you might have new/different problems… the most likely is that the log doesn’t clear as often (because you haven’t spilled into a new VLF, OR you have a transaction that happens to span two VERY large VLFs), and the effect can be that performance is slowed by the less frequent clearing that has to happen on a very large log… To minimize this, the best way to create larger logs (logs in the 10GB+ range) is to create them in multiple chunks (for example, 4-8GB chunks, so that you end up with VLFs of 256MB-512MB instead of the 3GB VLFs you’d get by creating a 48GB log in one step).
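
For example, here’s a sketch for a hypothetical database (MyDB) whose log was initially created at 8GB. Growing in 8GB steps means each step adds 16 VLFs of 512MB, ending at 48GB with 96 well-sized VLFs instead of 16 3GB ones:

  -- Grow the log in 8GB steps; each growth adds 16 VLFs of 512MB.
  ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 16GB);
  ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 24GB);
  ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 32GB);
  ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 40GB);
  ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 48GB);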


Installation Instructions for the post-conference workshop AND the HOLs DVD
And, for those of you who want to play with the DVD we handed out today… here are the “generic” setup instructions:
Generic HOLs DVD SETUP Instructions.pdf (20.31 KB)


OK, that’s it for today and we look forward to another information/question-packed session tomorrow!


Cheers,
kt
