That's a quote from one of my favorite comedy shows, Blackadder.

With Hurricane Sandy bearing down on the northeast coast of the US today, whether you're in its path or not, it's a good time to consider how you'll react when faced with a disaster that affects the data you're responsible for.

Will you be hiding under the table or calmly and confidently working through your disaster recovery plan to bring your company's data life-blood back online? Hopefully the latter.

For those who will be working through power outages and data loss today, and as food for thought for those who won't, I've brain-dumped a quick list of helpful questions, advice, and useful links around disaster recovery.

In no particular order:

  • Now is the time to move your on-site backups onto a different I/O subsystem from the databases. 
  • Make sure you start gathering off-site backups before the incoming disaster strikes and communications and travel become difficult. 
  • Make sure you have a disaster recovery plan (see this blog post), that's been tested (see this blog post), and that those on call have practiced. You shouldn't be learning the syntax for RESTORE during a real disaster.
  • Don't assume that everyone will be online and pulling for the company. People's lives and families will come first. See this blog post.
  • How will you all communicate if the land-lines and cell-phone towers are inoperable?
  • If you have to do a bare-metal install, where is the media? Do you have ISOs on a SAN somewhere? If not, download them now before communications drop out.
  • If your data center is damaged, what's the alternate location to spin up some servers? For a small business, someone's garage will do in a pinch.
  • Don't forget that if the SAN is down, you can get 1TB drives at electronics stores – and they're better than nothing if you just want to get your business online again, albeit slowly.
  • If your generators don't start, go to Home Depot and expense a few household generators to plug critical servers into. I've seen this done. Make sure you know someone with a big pickup truck.
  • Make sure to enable instant file initialization on new instances before restoring databases from backups to save time.
  • Make sure you know the order in which databases have to be restored, based on their importance to the business.
  • Do you have the contact info for the on-call personnel from the server, networking, SAN, customer service, and any other teams you need to liaise with?
  • Who is in control of the DR effort? Who decides whether to do one thing or another if there's disagreement?
  • Who decides when to escalate?
  • Who decides when to ask for third-party help?
  • Who needs to be kept informed of progress? Executives? Customers? Partners?
  • How to: rebuild system databases (2012, 2008R2, 2008)
  • How to: install 2005 (including system databases)
  • How to: restore master (2012 through 2005)
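On the point about not learning RESTORE syntax during a real disaster: here's a minimal sketch of a typical restore sequence — a full backup followed by a log backup chain. The database name, logical file names, and paths are all placeholders; substitute your own, and note that with instant file initialization enabled the data-file creation step of the full restore is much faster.

```sql
-- Hypothetical names and paths for illustration only.
-- 1. Restore the most recent full backup, leaving the database
--    in the RESTORING state so log backups can be applied.
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB_Full.bak'
WITH MOVE N'SalesDB_Data' TO N'E:\Data\SalesDB.mdf',
     MOVE N'SalesDB_Log'  TO N'F:\Logs\SalesDB_log.ldf',
     NORECOVERY, STATS = 10;

-- 2. Apply each log backup in sequence, oldest first.
RESTORE LOG SalesDB
FROM DISK = N'D:\Backups\SalesDB_Log1.trn'
WITH NORECOVERY;

-- 3. After the last log backup, bring the database online.
RESTORE DATABASE SalesDB WITH RECOVERY;
```

Practicing this against a test server, including restoring to a different drive layout with MOVE, is exactly the kind of drill that makes the real thing routine.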

Basically, the more prepared you are, and the more eventualities you can think through, the more likely you'll be able to get back up and running within your downtime and data-loss SLAs.

Whatever happens over the next few days for those of you in Sandy's path, conduct a post-mortem to see what went right and wrong, and rework your HA/DR plan accordingly.

And don't forget that #sqlhelp on Twitter can be invaluable for advice.

Good luck out there!