2013 review: the year by the numbers

The last post of the year! It’s been a really excellent year all round and time for my traditional post counting down some of the numbers that have been my life this year.

  • 9233243 (roughly): the number of page views across my blog according to Google (up around 15% from 2012)
  • 112289: the number of miles I flew on United
  • 29660: my current tweet total (up 3271 from 2012)
  • 12060: the number of emails I sent (up 1238 from 2012)
  • 10290: the number of subscribers to our Insider mailing list (up 1793 from 2012)
  • 7681: the number of people who follow my Twitter ramblings (up 2011 from 2012)
  • 1330: the number of books (real ones) that I own (up 144 from 2012)
  • 707: the number of books I own but haven’t read yet (up 90 from 2012)
  • 198: the number of nights away from home (all with Kimberly, so not *too* bad)
  • 142: miles per hour I hit driving a NASCAR car around Charlotte Motor Speedway after PASS in October
  • 122: the number of dives I did this year in Kona, Thailand, and Turks & Caicos, taking my total to 307
  • 109: the number of feet down on my deepest dive this year
  • 83: the number of minutes of my longest dive this year
  • 75: the number of days in Immersion Events and conferences
  • 66: the number of SQLskills blog posts, including this one
  • 56: the number of books I read (see this post)
  • 54.25: the percentage of time we were away from home (which is why we call it our vacation home!)
  • 42: the number of flights this year
  • 26: the number of different places we slept apart from our house and on planes
  • 20: the number of new bird species I saw, taking my total to 464
  • 19: the number of new Pluralsight courses we published this year, taking our total to 30
  • 17: the number of airports I flew through this year
  • 15: the number of monthly magazines I subscribe to
  • 6: the number of SQLskills full-time employees, all of whom are fabulous and indispensable
  • 5: the number of countries we visited this year
  • 4: the number of new airports I flew through, taking my total to 80
  • 2: the number of new countries I visited this year (Japan and Turks & Caicos), taking my total to 32
  • 2: the number of awesome daughters we have, who got Nitrox certified and passed their Advanced Open Water SCUBA certifications last week
  • 1: the ranking of my Waits, Latches, and Spinlocks session at the PASS Summit – woohoo!
  • 1: the ranking of my Practical Disaster Recovery Techniques pre-con workshop at the PASS Summit – woohoo again!
  • Finally, the one and only best person in my life: Kimberly, without whom I would be lost…

Thank you to everyone who reads our blogs, follows us on Twitter, sends us questions, watches our videos, comes to our classes, and generally makes being deeply involved in the SQL community a joy.

I sincerely wish you all a happy, healthy, and prosperous New Year!

Cheers!

(Diving with the girls over Christmas on the excellent Turks & Caicos Aggressor, courtesy of the boat crew)


(Swimming with Caribbean Reef Sharks, courtesy of Kimberly)


2013: the year in books

Back in 2009 I started posting a summary at the end of the year of what I read during the year (see my posts from 2009, 2010, 2011, and 2012) and people have been enjoying it, so I present the 2013 end-of-year post. I set a moderate goal of 60 books this year and I managed 56. Next year I want to get through some of the really large historical and biographical books I’ve been putting off so I’m setting myself a goal of 40.

For the record, I read ‘real’ books – i.e. not in electronic form – as I don’t like reading off a screen. Yes, I’ve seen electronic readers – we both have iPads – and I’m not interested in ever reading electronically. I also don’t ‘speed read’ – I read quickly and make lots of time for reading.

There were several strong candidates for my favorite book this year (including Cutting for Stone by Abraham Verghese, The Known World by Edward P. Jones, and Great North Road by Peter F. Hamilton) but I have to choose The Hydrogen Sonata by Iain M. Banks, in honor of my all-time favorite author for the last 25 years who died tragically early in June this year after a short fight with gallbladder cancer. I’ve read all of his sci-fi novels several times and most of his regular fiction (published as Iain Banks) and I wish I could have met him to thank him for his work. I actually cried when I heard he’d died and even writing this I have a lump in my throat. RIP Iain – you are sorely missed by your fans.

Now the details. I enjoy putting this together as it will also serve as a record for me many years from now. I hope you get inspired to try some of these books – push yourself with new authors and very often you’ll be surprisingly pleased.

Once again I leave you with a quote that describes a big part of my psychological make-up:

In omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro!

Analysis of What I Read

I read 24800 pages, or 68.0 pages a day, and a book every 6.5 days or so. The chart below shows the number of pages (y-axis) in each book I read (x-axis).

(Chart: the number of pages in each book I read in 2013)

(Chart: my 2013 reading broken down by genre)

The average book length was 443 pages, 26 pages shorter than last year. Quite interestingly, I read about half the amount of historical fiction this year compared to 2012, and made up for that with more non-fiction and history, and many more travel books.

The Top-15

Once again I really struggled to pick a top-10 this year as I read a lot of truly *superb* books in 2013. I didn’t want to leave any out of my short list and deny you the chance of having them suggested so I present you with my top-15! If you don’t read much, at least consider looking at some of these in 2014. It’s impossible to put them into a priority order so I’ve listed them in the order I read them, along with the short Facebook review I wrote at the time.

#1 The Hydrogen Sonata; Iain M. Banks; 517pp; Science Fiction; January 5; (True to Banks’ usual form, his latest Culture novel is excellent. Lots of ship and Mind conversation, clever action, and great characters. Strongly recommended, a must for Banks fans.)

#2 Empty Quarter: A Photographic Journey to the Heart of the Arabian Desert; George Steinmetz; 208pp; Photographic; January 8; (Stunning aerial photography of the Empty Quarter of Arabia in Oman, Yemen, Saudi Arabia, and the United Arab Emirates. I’ve been fascinated by this area of the world since reading books like Thesiger’s Arabian Sands and it’s on my list of places to see before I die. This is the next best thing – 2– plates shot from a microlight 1000 feet up – just stunning! I’ve got several more of Steinmetz’s photography books in my plans this year. Strongly recommended!)

#5 Great North Road; Peter F. Hamilton; 948pp; Science Fiction; January 31; (Just finished book #5 this year. Absolutely excellent, long, sci-fi romp from one of my favorite authors.)

#6 Stonemouth; Iain Banks; 356pp; Fiction; February 3; (Read most of it on the plane down from Seattle to Tampa – a real page turner. I haven’t read much of Banks’ non-science fiction since I was put off by Dead Air a while back. This book is really excellent though and I’d recommend it to anyone. Reminds me of some of my life in my early twenties.)

#7 Lionheart; Sharon Kay Penman; 608pp; Historical Fiction; March 5; (Finished this on the flight from Kushiro to Tokyo. Great part 4 of the series covering the Angevin empire in the 1100s. This book really covers the 3rd Crusade and Richard’s involvement in it – a topic I’ve read a lot on but never read a fictionalized account of. Penman doesn’t disappoint and I strongly recommend this book to historical fiction fans.)

#8 King Hereafter; Dorothy Dunnett; 736pp; Historical Fiction; March 17; (Brilliant fictionalization of King Macbeth in the 11th Century. A very long book (730pp) with very dense writing and small font that made it feel like 1500pp really. Dunnett does an awesome job of describing the people, places, and society and I’m looking forward to reading her Lymond and Niccolo series. Strongly recommended.)

#22 Secretum; Monaldi & Sorti; 812pp; Historical Fiction; June 18; (An excellent sequel to Imprimatur that I read last year, this is set 17 years later, in 1700 in Rome, with the same protagonists involved in intrigue, subterfuge, and adventure. Intermingling meticulously-researched history and real characters, the book deals with the wrangling over the Spanish Succession leading up to the death of Charles II of Spain. Highly recommended.)

#28 American Rust; Philipp Meyer; 384pp; Fiction; July 9; (Excellent drama set in recently declining Pennsylvania steel country. Hard to describe the story line except to say it’s told from the points of view of six people linked to a single act of violence that sets a bunch of stuff in motion. Great new author for me to discover – recommended!)

#31 The Known World; Edward P. Jones; 432pp; Historical Fiction; July 14; (Really excellent and powerful novel about slavery in mid-1800s Virginia, and one of the best books I’ve read this year, although I stopped midway to read #29 and #30. Barbarous to think of people as property, and that a husband could be freed and then have to work up to paying for the eventual freedom of his wife and children. Runaways, if they were caught, could expect to be ‘hobbled’ by severing one of their Achilles tendons – nasty. Strongly recommended.)

#34 The Orchardist; Amanda Coplin; 448pp; Historical Fiction; July 18; (Excellent novel, set just outside Leavenworth, WA around the end of the 19th Century. Complicated story, which I’ll spoil if I relate too much here. Based around a man with his own orchard, whose life is interrupted and changed forever when two runaway girls enter his land and his life. Definitely a candidate for best book so far this year and highly recommended.)

#36 Cutting for Stone; Abraham Verghese; 688pp; Fiction; July 23; (Wonderful book! Hard to describe the plot without giving away what happens in the book. Written by a surgeon, with lots of factually accurate descriptions of surgeries and medical issues. Follows the life of a family in Addis Ababa in the ’60s and ’70s, with drama and betrayals. Masterfully written, became a two-day page-turner for me after dithering about buying it since it came out. Very strongly recommended.)

#41 The Technologists; Matthew Pearl; 555pp; Historical Fiction; August 17; (I really enjoyed Pearl’s The Dante Club a few years ago so was looking forward to this book. The story is set in 1868 when the first class of M.I.T. is about to graduate. Boston is rocked by a series of large-scale terrorist acts and some of the M.I.T. students investigate and bring the perpetrator to justice. Excellent novel – recommended.)

#50 City of Djinns; William Dalrymple; 352pp; Travel; December 5; (This excellent book covers a year that Dalrymple and his wife lived in Delhi researching the city’s history and meeting people in it. He nicely weaves in the history of the city’s various rulers, conquests, and cultures, while also vividly portraying the people in his life there. I’m looking forward to reading the next book of his on my list, and visiting some of the shrines and forts he describes on our next trip to India.)

#52 I, Mona Lisa; Jeanne Kalogridis; 544pp; Historical Fiction; December 14; (The novel is set in Florence in the late 1400s, and covers the time around the downfall of the Medicis and the rise and fall of Fra Savonarola. It centers on Madonna Lisa (the subject of da Vinci’s painting), who becomes intricately involved in all the politics and subterfuge of the time. Excellent and recommended.)

#53 Walking the Amazon; Ed Stafford; 320pp; Travel; December 16; (What an incredible journey this guy did! 8000km over more than two years, going through some of the worst jungle on the planet, and all on foot. The book’s very well written and conveys the mental and physical adversities Stafford overcame to complete the journey. A great read and inspiring too.)

The Complete List

And the complete list, with links to Amazon so you can explore further.

  1. The Hydrogen Sonata; Iain M. Banks; 517pp; Science Fiction; January 5
  2. Empty Quarter: A Photographic Journey to the Heart of the Arabian Desert; George Steinmetz; 208pp; Photographic; January 8
  3. Red Mars; Kim Stanley Robinson; 592pp; Science Fiction; January 21
  4. America: Flying High; Jim Wark; 640pp; Photographic; January 30
  5. Great North Road; Peter F. Hamilton; 948pp; Science Fiction; January 31
  6. Stonemouth; Iain Banks; 356pp; Fiction; February 3
  7. Lionheart; Sharon Kay Penman; 608pp; Historical Fiction; March 5
  8. King Hereafter; Dorothy Dunnett; 736pp; Historical Fiction; March 17
  9. Headhunting in the Solomon Islands; Caroline Mytinger; 372pp; Travel; March 28
  10. Elizabeth’s Spymaster: Francis Walsingham and the Secret War that Saved England; Robert Hutchinson; 400pp; History; April 1
  11. An Empire of Wealth: The Epic History of American Economic Power; John Steele Gordon; 496pp; History; April 6
  12. Underwater Eden: 365 Days; Jeff Rotman; 744pp; Photographic; April 20
  13. Do Androids Dream of Electric Sheep?; Philip K. Dick; 244pp; Science Fiction; April 28
  14. The Mongoliad: Book 3; Neal Stephenson et al; 716pp; Historical Fiction; May 13
  15. Burghley: William Cecil at the Court of Elizabeth I; Stephen Alford; 432pp; History; May 21
  16. An American Life; Ronald Reagan; 752pp; Autobiography; May 25
  17. Oracle Bones: A Journey Through Time in China; Peter Hessler; 528pp; Travel; May 27
  18. Absolute Monarchs: A History of the Papacy; John Julius Norwich; 528pp; History; May 30
  19. Among the Russians; Colin Thubron; 224pp; Travel; June 1
  20. In Siberia; Colin Thubron; 304pp; Travel; June 6
  21. The Prince of the Marshes: And Other Occupational Hazards of a Year in Iraq; Rory Stewart; 432pp; Nonfiction; June 9
  22. Secretum; Monaldi & Sorti; 812pp; Historical Fiction; June 18
  23. Around India in 80 Trains; Monisha Rajesh; 288pp; Travel; June 25
  24. A Bend in The River; V. S. Naipaul; 288pp; Fiction; June 28
  25. Trespassers On the Roof Of the World: The Race for Lhasa; Peter Hopkirk; 288pp; History; June 29
  26. The Marsh Arabs; Wilfred Thesiger; 256pp; Travel; July 3
  27. To a Mountain in Tibet; Colin Thubron; 220pp; Travel; July 5
  28. American Rust; Philipp Meyer; 384pp; Fiction; July 9
  29. Howards End is On the Landing; Susan Hill; 240pp; Nonfiction; July 12
  30. The Yellow-Lighted Bookshop; Lewis Buzbee; 224pp; Nonfiction; July 13
  31. The Known World; Edward P. Jones; 432pp; Historical Fiction; July 14
  32. Used and Rare: Travels in the Book World; Lawrence & Nancy Goldstone; 224pp; Nonfiction; July 14
  33. Tolstoy and the Purple Chair; Nina Sankovitch; 256pp; Nonfiction; July 17
  34. The Orchardist; Amanda Coplin; 448pp; Historical Fiction; July 18
  35. The Victorian Internet; Tom Standage; 256pp; History; July 19
  36. Cutting for Stone; Abraham Verghese; 688pp; Fiction; July 23
  37. Wuthering Heights; Emily Bronte; 416pp; Fiction; July 25
  38. The Old Curiosity Shop; Charles Dickens; 608pp; Fiction; July 31
  39. South Sea Tales; Jack London; 384pp; Fiction; August 6
  40. Great Expectations; Charles Dickens; 544pp; Fiction; August 11
  41. The Technologists; Matthew Pearl; 555pp; Historical Fiction; August 17
  42. Tess of the d’Urbervilles; Thomas Hardy; 480pp; Historical Fiction; August 24
  43. American Gods; Neil Gaiman; 624pp; Fiction; September 11
  44. Lone Survivor; Marcus Luttrell; 448pp; Nonfiction; October 13
  45. Between the Woods and the Water; Patrick Leigh Fermor; 280pp; Travel; October 26
  46. The Joy Luck Club; Amy Tan; 288pp; Fiction; November 2
  47. Hyperbole and a Half; Allie Brosh; 384pp; Humor; November 6
  48. Seal Team Six; Howard E. Wasdin; 368pp; Nonfiction; November 25
  49. Terra Incognita; Sara Wheeler; 320pp; Travel; November 30
  50. City of Djinns; William Dalrymple; 352pp; Travel; December 5
  51. Agincourt; Bernard Cornwell; 380pp; Historical Fiction; December 10
  52. I, Mona Lisa; Jeanne Kalogridis; 544pp; Historical Fiction; December 14
  53. Walking the Amazon; Ed Stafford; 320pp; Travel; December 16
  54. Silent House; Orhan Pamuk; 334pp; Fiction; December 18
  55. Brass Man; Neal Asher; 528pp; Science Fiction; December 24
  56. Polity Agent; Neal Asher; 562pp; Science Fiction; December 30

SQLskills holiday gift to you: all 2012 Insider videos

As we all wind down for the 2013 holiday season, we want to give the SQL Server community a holiday gift to say ‘thank you’ for all your support during 2013, and what better gift than more free content?!

As many of you know, I publish a bi-weekly newsletter to more than 10,000 subscribers that contains an editorial on a SQL Server topic, a demo video, and a book review of my most recently completed book. We’ve decided to make all the 2012 demo videos available so everyone can watch them – 25 videos in all, WMV format only. I did the same thing last year with the 2011 videos.

Here are the details:

  1. January 2012: Troubleshooting common VM issues (video)
  2. January 2012: Lecture video on indexing strategies from the MCM series (video)
  3. February 2012: Using the free PAL tool (video)
  4. February 2012: PerfMon tips and tricks (video)
  5. March 2012: Using the free SQL Nexus tool (video | demo code)
  6. March 2012: Using the free Plan Explorer tool (video)
  7. April 2012: Lecture video on snapshot isolation from the MCM series (video)
  8. April 2012: Using the 2012 Extended Events SSMS UI (video)
  9. April 2012: Using the free Process Explorer tool to diagnose CPU usage (video | demo code)
  10. May 2012: Using wait statistics demo from the MCM series (video)
  11. May 2012: Using the SSMS Tools Pack (video)
  12. June 2012: Using semantic search (video | demo code)
  13. June 2012: Using Distributed Replay (part 1) (video)
  14. July 2012: Using Distributed Replay (part 2) (video)
  15. July 2012: Using Distributed Replay (part 3) (video)
  16. August 2012: DBCC CHECKDB and rolled-back transactions (video | demo code)
  17. August 2012: RECOVERY, NORECOVERY, and STANDBY restore options from the MCM series (video)
  18. September 2012: SQL Agent cluster failover (video)
  19. September 2012: Cardinality estimation issues (video | demo code)
  20. October 2012: Setting PerfMon default counters (video)
  21. October 2012: EXECUTE AS and SQL injection from the MCM series (video)
  22. October 2012: Regaining instance access without the sysadmin role (video | demo code)
  23. November 2012: Regaining instance access without downtime (video | demo code)
  24. November 2012: Using Windows Server 2012 Task Manager (video)
  25. December 2012: Indexing strategies from the MCM series (video)

If you want to see the 2013 videos before next December, get all the newsletter back-issues, and follow along as the newsletters come out, just sign up at http://www.SQLskills.com/Insider. No strings attached, no marketing or advertising, just free content.

Happy Holidays and enjoy the videos!

New course on Optimizing Ad Hoc Statement Performance

Kimberly’s second Pluralsight online training course has just gone live and is available for watching!

Her course is SQL Server: Optimizing Ad Hoc Statement Performance and is a mammoth 7.5 hours long and full of extensive demos. The modules are:

  • Introduction (21 mins)
  • Statement Execution Methods (90 mins)
  • Estimates and Selectivity (98 mins)
  • Statement Caching (92 mins)
  • Plan Cache Pollution (101 mins)
  • Summary (32 mins)

Check it out here.

We now have almost 100 hours of SQLskills online training available, all for as little as $29/month through Pluralsight (including more than one thousand other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!

Two new courses on Advanced Corruption Recovery Techniques and Maintenance Plans

I’ve just heard from our good friends at Pluralsight that our two latest online training courses are now live and available for watching!

My latest course is SQL Server: Advanced Corruption Recovery Techniques, which follows on from my earlier SQL Server: Detecting and Recovering from Database Corruption course.

The course is 4 hours long and has the following modules:

  • Introduction
  • DBCC CHECKDB Internals and Performance
  • Useful Undocumented DBCC Commands
  • Dealing with Transaction Log Problems
  • Advanced Restore Techniques
  • Advanced Repair Techniques

Check it out here.

Jonathan’s latest course is SQL Server: Maintenance Plans. The course is two and a half hours long and has the following modules:

  • Introduction
  • Approaches to Database Maintenance
  • Configuring SQL Server Agent Settings
  • Common Maintenance Tasks
  • Other Tasks
  • Creating Maintenance Plans
  • Maintenance Plan Challenges and Alternatives

Check it out here.

Now you have something to do this weekend :-) Enjoy!

Are I/O latencies killing your performance?

In this post I explain some methods for investigating and reducing high tempdb and transaction log I/O latencies that can severely hamper the performance of your workload.

Back at the end of August I kicked off a survey asking you to run some code to calculate average I/O latencies and send me the results. I have to apologize for taking two months to get to this editorial as I kept putting off spending the time to collate all the results I was sent. I made up for it by putting in lots of useful information :-)

I received results from 1,094 random servers around the world (thank you all!) hosting 25,445 databases, and it took me a while to pull all the results into SQL Server so I could aggregate the data. Here it is.

What is Good or Bad?

For everything we’re talking about, you have to consider two things:

  • What is good or bad for I/O latency?
  • Even if you have “bad” I/O latency, do you care?

Everyone has their idea of what constitutes good or bad I/O latency, and here’s my take:

  • Excellent: < 1ms
  • Very good: < 5ms
  • Good: 5 – 10ms
  • Poor: 10 – 20ms
  • Bad: 20 – 100ms
  • Shockingly bad: 100 – 500ms
  • WOW!: > 500ms

You may want to argue that my numbers are too high, too low, or wrong, based on what you’ve seen/experienced, and that’s fine, but this is my blog and these are my numbers :-)

Even if your I/O latency is in what I consider the “bad” category, it could be that your workload is performing within the acceptable bounds for your organization and your customers, and so you’re comfortable with that. I’m OK with this, as long as you are aware of your I/O latencies and you’ve consciously decided to accept them as they are. Being ignorant of your I/O latencies and just accepting the workload performance as it is, because that’s just what the performance is, is not acceptable to me.

On to the survey data…

Tempdb Data Files

For this data I worked out the average read and write latency over all tempdb data files for each instance. I didn’t see any instances where one tempdb data file had a huge latency compared to the others so I believe this is a valid approach.

(Chart: average tempdb data file read latency across the surveyed servers)

The tempdb data file read latencies aren’t too bad, to be honest, with more than 93% of all the servers in the survey having read latencies less than 20ms.

(Chart: average tempdb data file write latency across the surveyed servers)

This is very interesting – almost 42% of all the servers in the survey had average tempdb data file write latency of more than 20ms, and just over 12% of all servers had average tempdb data file write latency of more than half a second per write – that’s ridiculously high!

To be honest, I’m pretty shocked by these results, especially the relatively high number of servers with multi-second average write latencies for tempdb.

So if you check your average I/O latency for tempdb (using a script such as the one I blogged here, using sys.dm_io_virtual_file_stats) and find that it’s really high on my (or your) scale, what can you do?
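
As a quick illustration, a minimal sketch of such a check for just the tempdb data files might look like the following (the script in that post does more; this is a simplified version). Remember these counters are cumulative since the instance started, so they give long-term averages rather than a snapshot of current behavior:

SELECT
	[file_id],
	[io_stall_read_ms] / NULLIF ([num_of_reads], 0) AS [AvgReadLatency_ms],
	[io_stall_write_ms] / NULLIF ([num_of_writes], 0) AS [AvgWriteLatency_ms]
FROM sys.dm_io_virtual_file_stats (DB_ID (N'tempdb'), NULL)
WHERE [file_id] != 2; -- file_id 2 is the log file; we only want the data files here
GO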

Well, there are four approaches I can think of:

  1. Don’t do any investigation and just immediately move tempdb to a faster I/O subsystem, such as two SSD cards in a RAID-1 configuration (remember, one SSD is RAID-0, and that’s not good enough because if tempdb is corrupt or unavailable, your instance shuts down!). You might think that this is the lazy approach, but if you *know* you can’t make any changes to workload, this may be your only option. I bumped into this scenario while working with a client that supplies casino software. The optimal solution was to change some SPs to reduce tempdb usage, but that was going to take 18 months to be approved by the Gaming Commission in the state where the client’s client was located. The only solution in the meantime? Pony up for a faster tempdb I/O subsystem.
  2. Investigate the I/O subsystem where tempdb is located, looking for things like (non-exhaustive list):
    • Pathing/network issues, such as having a 1Gb switch in the middle of a 4Gb path, or having mismatched jumbo frame settings in your iSCSI configuration, or where the network to the SAN is being saturated by something other than SQL Server I/O traffic.
    • Incorrect SAN settings, such as not having write-caching enabled for a write-heavy workload, incorrect queue depth compared to your SAN vendor’s recommendations, or tempdb stuck on slow storage on an auto-tiering SAN because of incorrect SAN training.
    • Multiple users of the portion of the I/O subsystem where tempdb is located, such as having tempdb lumped in with other volatile databases, or even having LUNs shared between SQL Server and other applications like Exchange or IIS.
    • Only having a single tempdb data file so missing out on the usual higher I/O subsystem performance gained from having multiple data files in a filegroup (note: don’t confuse this with adding more tempdb data files to reduce PAGELATCH_XX contention on in-memory allocation bitmaps.)
  3. Try to reduce the usage of tempdb – see the space-usage sketch after this list – by looking for (non-exhaustive list):
    • Spill warnings in query plans (hash, sort, or exchange) indicating that there was not enough query execution memory and an operator had to spill results to tempdb. See these blog posts for more information: here, here, here, and a blog post of mine explaining how to understand the data vs. log usage for a memory spill to tempdb here.
    • Incorrect, excessive usage of temp tables, such as *always* using a temp table when it may be better sometimes to not do so, pulling more columns or rows into a temp table than are actually required (e.g. SELECT * into the temp table from a user table, with no WHERE clause), creating nonclustered indexes on a temp table that are not used. I’ve seen this over and over with client code.
    • Index rebuilds that use SORT_IN_TEMPDB.
    • Using one of the flavors of snapshot isolation and allowing long-running queries that cause the version store to grow very large.
    • There are lots of useful queries and other information in the whitepaper Working with tempdb in SQL Server 2005 (which is still applicable to all current versions.)
  4. A combination of #2 and #3, and then maybe you just have to move to a faster I/O subsystem, as in #1.
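
As a starting point for the space-usage sketch mentioned in #3, sys.dm_db_file_space_usage shows what’s actually consuming tempdb right now – a large version store points at snapshot isolation and long-running queries, and large internal objects point at spills, sorts, and hash joins:

-- What's consuming space in tempdb right now? (pages are 8KB, hence * 8 for KB)
SELECT
	SUM ([user_object_reserved_page_count]) * 8 AS [UserObjects_KB],
	SUM ([internal_object_reserved_page_count]) * 8 AS [InternalObjects_KB],
	SUM ([version_store_reserved_page_count]) * 8 AS [VersionStore_KB],
	SUM ([unallocated_extent_page_count]) * 8 AS [Free_KB]
FROM [tempdb].[sys].[dm_db_file_space_usage];
GO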

One other thing to consider is the risk of making a change to your code and/or the cost of the engineering effort (dev and test) to do so. It may be cheaper and less risky to move to a faster I/O subsystem. Your call. Another issue you may have is that the bad code is in a 3rd-party application that you have no control over. In that case you may have no choice except to throw hardware at the problem.

Transaction Log Files

For this data I considered each database separately rather than aggregating per instance.

(Chart: average transaction log file read latency across the surveyed databases)

(Chart: average transaction log file write latency across the surveyed databases)

For the transaction log, you really want the average write latency to be in the 0-5ms range, and it’s good to see more than 79% of transaction log files in the survey are achieving that. I would say that write latency for the transaction log is much more important than read latency, as write latency slows down transactions in your workload. That’s not to say that you should ignore high read latencies, as these slow down log readers (such as log backups, transactional replication, change data capture, asynchronous database mirroring/availability groups), but log read latencies don’t usually slow down your workload unless you have transactions that are rolling back (the only time that transactions will cause log reads) or you rely heavily on change data capture.
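
As a reference point, here’s a minimal sketch that lists average log write latency per database, again using sys.dm_io_virtual_file_stats, joined to sys.master_files to pick out just the log files:

SELECT
	DB_NAME ([vfs].[database_id]) AS [Database],
	[vfs].[io_stall_write_ms] / NULLIF ([vfs].[num_of_writes], 0) AS [AvgLogWriteLatency_ms]
FROM sys.dm_io_virtual_file_stats (NULL, NULL) AS [vfs]
JOIN sys.master_files AS [mf]
	ON [mf].[database_id] = [vfs].[database_id]
	AND [mf].[file_id] = [vfs].[file_id]
WHERE [mf].[type_desc] = N'LOG'
ORDER BY [AvgLogWriteLatency_ms] DESC;
GO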

So you’re focusing on write latency. Again, there are multiple approaches that are the same as #1-#4 above. What you’re looking for in approach #3 is different though. I’ve written several detailed posts about transaction log performance tuning (including reducing the amount of log generated and changing the log block flush pattern) for SQL Sentry’s SQLPerformance.com blog, so rather than duplicate those here, I’ll point you to them there.

And then there’s my 8-hour Pluralsight online training class that covers SQL Server: Understanding Logging, Recovery, and the Transaction Log.

Summary

It’s really important that you pay attention to the read and write I/O latencies for critical components of your SQL Server environment: tempdb and transaction logs. I’ve given you a lot of info above on what you can do inside and outside SQL Server to reduce these latencies and increase your workload throughput.

Unless you have no choice, don’t just throw some SSDs into the mix without first figuring out whether there are some things you can do inside SQL Server to reduce the I/O load. If you do add some SSDs, make sure you’re using them to host whatever files are your biggest I/O bottleneck, and make sure you’re using at least two of them in a RAID-1 configuration to protect against failure.

I hope this has been a useful read – happy tuning!

Why is log_reuse_wait_desc saying LOG_BACKUP after doing a log backup?

Here’s an interesting situation that sometimes crops up: you’re performing log backups, your transaction log isn’t growing, and yet the value of log_reuse_wait_desc in sys.databases for your database says LOG_BACKUP.

What’s going on?

I was actually just teaching this stuff today here in our last class of the year in Chicago.

Firstly, what is log_reuse_wait_desc? It’s a column in sys.databases that you can use to determine why the transaction log isn’t clearing (a.k.a. truncating) correctly. Usually you query it when a log file is growing and you want to know why, using code like:

SELECT [log_reuse_wait_desc]
	FROM [master].[sys].[databases]
	WHERE [name] = N'Company';
GO

You can read about all the possible values in the Books Online topic Factors That Can Delay Log Truncation.

One important thing to bear in mind is that the value returned is the reason that log clearing did not clear any VLFs (Virtual Log Files – see Understanding Logging and Recovery in SQL Server) the last time log clearing was attempted. For instance, you may see a value of ACTIVE_BACKUP_OR_RESTORE even though you know your full backup has completed. This means that the last time log clearing was attempted, the backup was still running.

Back to the original question. If you have a transaction log that is not growing, and you’re taking regular log backups, but the log_reuse_wait_desc remains LOG_BACKUP, this is because zero VLFs were cleared when the previous log backup was performed.

How can that happen?

Imagine a database where there’s very little insert/update/delete/DDL activity, so in between your regular log backups there are only a few log records generated, and they’re all in the same VLF. The next log backup runs, backing up those few log records, but it can’t clear the current VLF, so the log_reuse_wait_desc value stays LOG_BACKUP. As soon as there are enough changes in the database that the current VLF fills up and the next VLF is activated, the next log backup should be able to clear the previous VLF, and the log_reuse_wait_desc will revert to NOTHING. Until the next log backup occurs and isn’t able to clear the current VLF, in which case it will go back to LOG_BACKUP again.
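
If you want to watch this happening, the undocumented DBCC LOGINFO command returns one row per VLF, with Status = 2 for active VLFs and Status = 0 for cleared ones. Here’s a sketch using the same hypothetical Company database as the query above (the backup path is made up too):

DBCC LOGINFO (N'Company');
GO
BACKUP LOG [Company] TO DISK = N'C:\Backups\Company_log.trn';
GO
DBCC LOGINFO (N'Company');
GO

If all the log records since the previous log backup are in the single active VLF, you’ll see the same Status = 2 row before and after the log backup, and log_reuse_wait_desc will still say LOG_BACKUP.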

So LOG_BACKUP really means “either you need to take a log backup, or the log records that were backed up were all in the current VLF and so it could not be cleared.”

Hope this helps!


SQLskills community mentoring – round 5

We’ve had a mentoring program here at SQLskills for a couple of years (the brainchild of Jonathan), where each of us can pick someone who’s attended one of our Immersion Events in the past and offer to be their mentor for six months.

It’s time to kick off the fifth mentoring round. You can read about the previous mentees in my earlier posts on the program.

I’m the only one mentoring this time. I’ve found it really interesting helping my four prior mentees with a variety of personal and professional goals and getting to know them better. This time I’ll be mentoring Denis Tolstonogov who took IE1 in Tampa and IE2 in Chicago earlier this year. In his own words:

I was born in Russia, a small town called Kurgan, and came to USA 10 years ago. I have not regretted it yet! Back in Russia, I graduated from Kurgan State University with an MBA but never used it. I’ve been always interested in computers but never had a chance to actually study for it. After coming to the US, I started with attending an ESL school in Manhattan to learn how to speak English, still learning. After graduating the ESL school, I transferred to our local Community College where I continued to take English classes and majored in Computer Networking. I’ve started my IT career as a Network Admin, which I did for about 9 years, so I am one of those Accidental DBAs! I’ve been always fascinated by SQL Server, its complexity and the challenges that it presented. Any SQL project that I had to deal with was frustrating but always interesting! Frustrating only because I was lacking the knowledge. In 2009, my boss asked me if I’d be interested in spending my time only on SQL Server because we run a web site with SQL Server database as the back end but there is nobody in our company who knows SQL Server in depth and cannot answer why something works today but doesn’t work tomorrow. So my dream came true and I was given the opportunity to start learning how SQL Server works full time. In 2011, I took 2 week class in NYC which didn’t teach me much, only how to pass SQL Server exams, and I did. The most valuable things I took from this class was that one day the teacher mentioned about Paul Randal and how great his blog was. I’ve started to read Paul’s blog and found enormous amount of useful information for myself. Then I found out that Paul and Kim are married and they teach together Immersion Events! So in 2013, I had a chance to take IE1 in Tampa and IE2 in Chicago, met them and SQLskills gang in person! What an amazing fun people! The best learning experience ever!

Also, I love animals, nature, and traveling. Almost all of my free time I devote to our local Humane Society. Love to go hiking, camping, kayaking, you name it! As far as traveling, all inclusive resorts or laying 7 days on the beach — not for me, Nome AK — yes please! Dreaming about Norway right now!

Congratulations to Denis – I’m looking forward to working with him over the next six months.

PS I’m often asked how to get into our mentoring program – it’s simple: attend one of our Immersion Events and get to know one of us, then ask and see if there’s a match that works.

New RunAsRadio podcast with me, Kimberly, and Brent

At the end of every SQLintersection conference we have a closing Q&A session where we answer audience questions about anything to do with SQL Server. It’s very popular with attendees and we have a lot of fun on stage. Last week when we did it, our good friend Richard Campbell (of .Net Rocks! fame, which we’ve been on quite a few times), MC’d the session and we recorded everything as a RunAsRadio podcast. We’re going to do that every time we do a conference as it worked out really well and lets us share some of the conference with all of you.

The recording has now been published. It’s about an hour long and you can get it here.

Enjoy!

Inside the Storage Engine: How are allocation unit IDs calculated?

It’s been a long time since I’ve written a post about pure internals, but every so often I get asked how an allocation unit ID is calculated from the m_objId and m_indexId fields that are stored in the header of every page.

When DBCC PAGE dumps a page header’s contents, it does the necessary calculations and metadata look-ups to be able to tell you the allocation unit ID, partition ID, relational object ID, and relational index ID. Basically everything prefixed with ‘Metadata:’ in the DBCC PAGE output below is NOT stored on the page itself:

Page @0x00000004ED8A2000

m_pageId = (1:445)                  m_headerVersion = 1                 m_type = 1
m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0xa000
m_objId (AllocUnitId.idObj) = 97    m_indexId (AllocUnitId.idInd) = 256
Metadata: AllocUnitId = 72057594044284928
Metadata: PartitionId = 72057594039304192                                Metadata: IndexId = 0
Metadata: ObjectId = 599673184      m_prevPage = (0:0)                  m_nextPage = (0:0)
pminlen = 8                         m_slotCnt = 1                       m_freeCnt = 8069
m_freeData = 121                    m_reservedCnt = 0                   m_lsn = (225:443:22)
m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
m_tornBits = 0                      DB Frag ID = 1
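
For reference, a header dump like the one above can be produced with DBCC PAGE – here using the page IDs from this example and a hypothetical database name:

DBCC TRACEON (3604);	-- send DBCC PAGE output to the console instead of the error log
GO
DBCC PAGE (N'Company', 1, 445, 0);	-- database, file ID, page ID; print option 0 = header only
GO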

The formula is as follows:

  • Take the m_indexId and left-shift by 48, giving value A
  • Take the m_objId and left-shift by 16, giving value B
  • AllocUnitId = A | B (where | is a logical OR operation)

Using the page above:

  • A = 256 << 48 = 72057594037927936
  • B = 97 << 16 = 6356992
  • AllocUnitId = 72057594044284928

You can do this in SQL Server using the POWER function, as a left shift of X bits is the same as multiplying by 2-to-the-power-X:

SELECT 256 * CONVERT (BIGINT, POWER (2.0, 48)) | 97 * CONVERT (BIGINT, POWER (2.0, 16));
GO

And then you can perform the various look-ups using sys.system_internals_allocation_units and sys.partitions like so:

SELECT
	[a].[container_id] AS [Partition ID],
	[p].[object_id] AS [Object ID],
	[p].[index_id] AS [Index ID]
FROM sys.system_internals_allocation_units [a]
JOIN sys.partitions [p]
	ON [p].[partition_id] = [a].[container_id]
WHERE
	[a].[allocation_unit_id] = 72057594044284928;
GO
Partition ID         Object ID   Index ID
-------------------- ----------- -----------
72057594039304192    599673184   0

And you can see that the values match the DBCC PAGE output.

To convert from an allocation unit ID to what you should see in the DBCC PAGE output:

  • m_indexId = AllocUnitId >> 48
  • m_objId = (AllocUnitId - (m_indexId << 48)) >> 16

The T-SQL for this involves floating point math as we need to use the reciprocal of POWER:

DECLARE @alloc BIGINT = 72057594044284928;
DECLARE @index BIGINT;

SELECT @index =
	CONVERT (BIGINT,
		CONVERT (FLOAT, @alloc)
			* (1 / POWER (2.0, 48)) -- right shift, reciprocal of left shift
	);
SELECT
	CONVERT (BIGINT,
		CONVERT (FLOAT, @alloc - (@index * CONVERT (BIGINT, POWER (2.0, 48))))
			* (1 / POWER (2.0, 16)) -- right shift, reciprocal of left shift
	) AS [m_objId],
	@index AS [m_indexId];
GO


m_objId              m_indexId
-------------------- --------------------
97                   256

An example of when you might use this information/code is during programmatic analysis of a corrupt database that DBCC CHECKDB cannot process, to allow you to extract data as a last resort.

Enjoy!