Desert Island Disks

There's another DBA 'chain-blog' going around, this time started by Tim Ford, and I was tagged by Tom LaRock. The idea is that you're stuck on a desert island for six months with WiFi and you have to spend it doing something related to work. What would you spend the six months doing? If I break the chain then I won't get enough birthday cards to get in the Guinness Book of Records, or something else depressing, so I'd better join in.

At the moment, I'd probably say continue losing my life to Twitter, SQL forums, and blogging – but then I'd be blogging about blogging and there's the risk I'd get sucked into an infinitely-recursive blog post and never get anything done. And Kimberly already thinks I blog too much, so I'd steer clear of that. I'd really prefer to spend the six months diving on the reefs around the island and checking out the stars in a sky with no light pollution, but I think it would be hard to make either scuba gear or a telescope from just coconut shells and sand, and I'm pretty certain that even if I could, the coconut-shell-based air compressor for filling the scuba tanks wouldn't hold together when I powered it on. But hey, where would the power come from? And if there's no power, how would the WiFi work? Hmm – the scenario's falling apart quickly – best cut to the chase.

I spent 15 minutes looking for a clean desert island joke to include here, then gave up, sat watching a heron fly past the deck, and then figured out what I'd do. Now, you all know that I'm not a DBA, so I wouldn't have a set of systems to do anything on, but I do have a lot of things I'd like to do around SQL Server that I've never had the time for. Here are my top five:

Tool to explain corruption messages. The expert system for analyzing all the corruption error messages that CHECKDB produces – figuring out exactly what it's telling you, and what repair may delete – basically resides in my head. There's another copy that resides in Ryan Stonecipher's head (the guy who took over DBCC from me), but he's even less likely to have time to do this since he still works at MS. I'd love to program that expert system and make it available. Just like I'd like to put together the 'what exactly does each error message mean' PDF. I did it for 2000 and 2005 while at MS, and it took me 3 weeks. I have all that in my head too. I thought that writing the 2008 Internals book with Kalen would allow my in-head lazywriter to clean out some buffers, but it's all still there.

Write the book on maintenance and DR for involuntary DBAs. I keep getting asked where the good info is, and there's no one good place to go. The closest I've come to doing this is the first article I wrote for TechNet Magazine back in August 2008 – Top Tips for Effective Database Maintenance. There's a big need for a book aimed at non-SQL professionals, as I see the same mistakes being made over and over in forums. This is another thing where I've got all the details in my head and spread through my blog and Kimberly's blog, but just need time to pull it together into a coherent story.

Write a script that will figure out the size of the next log backup. This is on my list of cool tools to write and give away on my blog. I've already done the 'how big will the next differential backup be?' and 'how much data will the next log backup include?' scripts, but this one is a *lot* harder. I have a good idea how to do it, but I need a bunch of time to sit down and figure it all out in a solid script. And then test it lots.
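
As a rough illustration (a minimal sketch, not the actual script), the undocumented fn_dblog function can sum the active log records to estimate the log data the next log backup would include – which also shows why the exact backup size is the harder problem, since the real backup adds log-block padding and backup metadata on top:

    -- Minimal sketch: estimate the log data the next log backup would
    -- include by summing active log records via the undocumented
    -- fn_dblog function. Run in the database of interest.
    SELECT
        COUNT (*) AS [Log Records],
        SUM ([Log Record Length]) AS [Total Log Record Bytes],
        SUM ([Log Record Length]) / 1048576.0 AS [Approx MB]
    FROM fn_dblog (NULL, NULL);
    -- Caveat: the actual on-disk backup size also includes log-block
    -- padding and backup metadata, which this doesn't account for.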

Write a script that will produce a page checksum on every page. This has been on my list even longer. I know several ways to do this, but they all have undesirable side-effects, except one, which I haven't tried yet. This will be useful because once you enable page checksums after upgrading – nothing happens. A page doesn't get a page checksum until it's read into the buffer pool, altered, and then written out again. And there's no 'touch every page' tool.
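
As an aside, the brute-force workaround – with the obvious side-effect of generating a ton of transaction log – is to rebuild indexes after enabling checksums, since a rebuild rewrites every page it touches. Something like this sketch, with placeholder database and table names:

    -- Brute-force 'touch every page' sketch (placeholder names). Enabling
    -- CHECKSUM only affects a page when it's next written out, so rebuild
    -- indexes to force every index page to be rewritten with a checksum.
    -- Side-effect: generates a lot of transaction log.
    ALTER DATABASE [YourDatabase] SET PAGE_VERIFY CHECKSUM;
    GO
    USE [YourDatabase];
    GO
    ALTER INDEX ALL ON dbo.YourTable REBUILD;  -- repeat for each table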

Get hold of the SQL source code and add the following: online index rebuild of a partition, diff-based mirroring of FILESTREAM data, proper page-split monitoring, a new shrink algorithm that doesn't cause fragmentation, CHECKDB of a database within a backup without restoring the backup (my patent).

Ok – I guess I should tag some people too, otherwise the chain will break and our house will be devoured by giant military squirrels… or something equally unlikely but tangibly scary. I choose my good friends Ward Pond, Greg Linwood, and Adam Machanic.

PS The title is a play on the BBC Radio 4 show 'Desert Island Discs'. It's supposed to be a geeky play on words. Well, I thought it was funny, and Tom had already used the 'Lovely Bunch of Coconuts' line. Thanks Tom.

4 thoughts on “Desert Island Disks”

  1. Hey Paul, I liked your August 2008 article, Top Tips for Effective Database Maintenance.

    In the article you mentioned that "When a corruption or other disaster occurs, the most effective way to recover is to restore the database from backups". Not only have I found backup/restore to be effective, but also *faster* than DBCC CHECKDB(database_name, REPAIR_ALLOW_DATA_LOSS).

    I'm curious to know whether backup/restore is faster only in my particular environment, or whether it's faster than DBCC in all environments.

  2. It’s very much an ‘it depends’ answer. Running repair makes CHECKDB go single-threaded, and there are a ton of things that determine how long CHECKDB takes – see https://www.sqlskills.com/blogs/paul/post/CHECKDB-From-Every-Angle-How-long-will-CHECKDB-take-to-run.aspx.

    How long your restore takes depends on whether you're using compression, hardware speeds, how widely you striped the backup set, whether you have instant file initialization enabled, and so on.
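
    To make those variables concrete, here's an illustrative example (placeholder database name and paths) of a compressed backup striped across two files – both options that directly affect how long the matching restore takes:

        -- Illustrative only (placeholder names and paths): a backup
        -- striped across two files, with compression (SQL Server 2008+).
        -- Striping lets backup/restore use multiple devices in parallel;
        -- compression trades CPU for less I/O.
        BACKUP DATABASE [YourDatabase]
        TO DISK = N'D:\Backups\YourDatabase_1.bak',
           DISK = N'E:\Backups\YourDatabase_2.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;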

    So – it depends :-)

