SQLskills hires all-round SQL expert Tim Radney

It’s hiring time again, as our consulting volume has reached the point where we need more help on our close-knit, expert team!

Specifically, we’ve asked well-known SQL expert and MVP Tim Radney to join us, and we’re extremely pleased that he accepted our offer. He’ll bring our team up to seven people when he starts with us on Monday, January 19th, and he brings a wealth of experience and knowledge with him. You can reach him at tim@SQLskills.com.

I’ve known Tim for years (a prerequisite for working here) and he’s a huge contributor to the SQL Community: on Twitter (@TRadney), on his blog (here), as a frequent speaker at user groups, SQL Saturdays, and PASS Summits, as the Chapter Leader for the Columbus, GA SQL Server User Group, and as PASS Regional Mentor for the SE USA. Wow! Tim was also named a PASS Outstanding Volunteer in 2012 in recognition of all that he does.

Tim’s been working with SQL Server for 15 years in a variety of roles, including DBA, Lead DBA, and multi-department manager, which have given him expertise in areas such as HA/DR, virtualization, SSIS, SSRS, and performance tuning, plus just about everything else SQL Server-related. As a manager, Tim gained extensive experience planning and implementing large-scale environment changes and upgrades, which will be invaluable when working with some of our larger Fortune 20-100 clients.

Outside of SQL Server, Tim is married with three kids and also shares my passion for electronics and messing around with a soldering iron. He also farms chickens and tilapias in his spare time!

Tim is an excellent addition to our world-class consulting team and we’re very excited that he’s coming on board – welcome!

Thanks as always,

Paul and Kimberly

PS You can read Tim’s post here.

SQLskills holiday gift to you: all 2013 Insider videos

As we all wind down for the 2014 holiday season, we want to give the SQL Server community a holiday gift to say ‘thank you’ for all your support during 2014, and what better gift than more free content?!

As many of you know, I publish a bi-weekly newsletter to more than 11,500 subscribers that contains an editorial on a SQL Server topic, a demo video, and a book review of my most recently completed book. We’re making all the 2013 demo videos available so everyone can watch them – 24 videos in all, WMV format only. I did the same thing in previous years for the 2012 videos and 2011 videos.

Here are the details:

  1. January 2013: Recreating missing log files on attach (from Pluralsight) (video | demo code)
  2. January 2013: Using the sys.dm_db_stats_properties DMV (video | demo code)
  3. February 2013: Using Microsoft Data Link (video)
  4. February 2013: Linked servers and statistics (video | demo code)
  5. March 2013: Using the system_health Extended Event session (video | demo code)
  6. March 2013: Moving from SQL Trace to Extended Events (video | demo code)
  7. April 2013: Color coding and other SSMS features (video)
  8. April 2013: DISTINCT aggregate improvements (video | demo code)
  9. May 2013: Using the tsql_stack Extended Event action (from Pluralsight) (video | demo code)
  10. May 2013: Deferred drop behavior (from Pluralsight) (video | demo code)
  11. June 2013: Finding duplicate statistics (video | demo code)
  12. June 2013: Using data compression (video | demo code)
  13. July 2013: Undetectable performance problems (video | demo code)
  14. July 2013: SSMS Object Explorer features (video)
  15. August 2013: Parallel crash recovery (from Pluralsight) (video | demo code)
  16. August 2013: Tracking tempdb space usage (video | demo code)
  17. September 2013: Enabling instant file initialization (video | demo code)
  18. September 2013: Using query hashes (video | demo code)
  19. September 2013: Recovering from data purity corruptions (from Pluralsight) (video | demo code)
  20. October 2013: Implicit conversions (from Pluralsight) (video | demo code)
  21. October 2013: Extended Events templates (video | demo code)
  22. November 2013: Using the missing index DMVs (video | demo code)
  23. November 2013: Using older backups to retrieve data after corruption (from Pluralsight) (video | demo code)
  24. December 2013: Enabling database mail (video)

If you want to see the 2014 videos before next December, get all the newsletter back-issues, and follow along as the newsletters come out, just sign up at http://www.SQLskills.com/Insider. No strings attached, no marketing or advertising, just free content.

Happy Holidays and enjoy the videos!

Problems from having lots of server memory

A month ago I kicked off a survey asking how much memory is installed on your largest server that’s running SQL Server. Thank you to everyone who responded.

Here are the results:

[Survey results chart: distribution of responses by maximum server memory]

The “other” values are:

  • 3 more for the ‘128 GB or more, but less than 256 GB’ count
  • 1 more for the ‘Less than 16 GB’ count
  • One poor soul who only has 512 MB in their server!

This is very interesting:

  • I expected the majority of servers to fall into the middle of the range (around 128 GB), but it’s actually only 37% that fall into the 64 GB to 256 GB range.
  • I’m surprised at the percentage of servers (41%) with 256 GB or more.
  • I didn’t know what percentage would have more than 1 TB, so almost 10% is really cool to see.

So what do these results mean? Well, more than half of all respondents have servers with lots (more than 128 GB) of memory. The more memory you have, the more important it is to make sure that memory is being used efficiently: that you’re not wasting space in the buffer pool (see here), and that you’re not churning the buffer pool with poor query plans that cause lots of reads (see here).
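As a starting point for checking both of these, here’s a sketch of a query that shows how much of the buffer pool each database is using, and how much of that memory is empty space inside the 8 KB pages (a rough measure of waste). It requires the VIEW SERVER STATE permission, and the numbers are a point-in-time snapshot, so treat them as indicative rather than definitive:

```sql
-- Buffer pool usage per database, plus empty space inside the in-memory
-- pages. Each row in sys.dm_os_buffer_descriptors is one 8 KB page.
SELECT
    CASE database_id
        WHEN 32767 THEN N'Resource Database'
        ELSE DB_NAME (database_id)
    END AS [Database],
    CAST (COUNT (*) AS BIGINT) * 8 / 1024 AS [BufferPoolMB],
    SUM (CAST (free_space_in_bytes AS BIGINT)) / 1024 / 1024 AS [FreeSpaceMB]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY [BufferPoolMB] DESC;
```

A database showing a high proportion of free space in its pages is a candidate for investigating low fill factors, page splits, or wide rows that only fit a few to a page.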

What other things could be problems with large amounts of memory?

  • Shutting down the instance. This will checkpoint all the databases, which could take quite a long time (minutes to hours) if suddenly all databases have lots of dirty pages that all need to be flushed out to disk. This can eat into your maintenance window if you’re shutting down to install an SP or a CU.
  • Starting up the instance. If the server’s POST checks memory, the more memory you have, the longer that will take. This can eat into your allowable downtime if a crash occurs.
  • Allocating the buffer pool. We’ve worked with clients with terabyte+ buffer pools where they hit a bug in SQL Server 2008 R2 (also in 2008 and 2012) around NUMA memory allocations that would cause SQL Server to take many minutes to start up. That bug has been fixed in all affected versions and you can read about it in KB 2819662.
  • Warming up the buffer pool. Assuming you don’t hit the memory allocation problem above, how do you warm up such a large buffer pool so that you’re not waiting a long time for your ‘working set’ of data file pages to be memory resident? One solution is to analyze your buffer pool when it’s warm, to figure out which tables and indexes are in memory, and then write some scripts that will read much of that data into memory quickly as part of starting up the instance. For one of the same customers that hit the allocation bug above, doing this produced a big boost in getting to the steady-state workload performance compared to waiting for the buffer pool to warm up naturally.
  • Complacency. With a large amount of memory available, there might be a tendency to slacken off on proactively looking for unused and missing index tuning opportunities, plan cache bloat, or wasted buffer pool space (which I mentioned above), thinking that having all that memory will be more forgiving. Don’t fall into this trap. If one of these things becomes such a problem that it’s noticeable on your server with lots of memory, it’s a *big* problem that may be harder to get under control quickly.
  • Disaster recovery. If you’ve got lots of memory, it probably means your databases are getting larger. You need to start considering the need for multiple filegroups to allow small, targeted restores for fast disaster recovery. This may also mean you need to think about breaking up large tables, using partitioning for instance, or archiving old, unused data so that tables don’t become unwieldy.
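For the warm-up approach described above, the first step is working out what’s actually in the buffer pool while the workload is at steady state. Here’s a sketch of one way to do that for the current database, mapping in-memory pages back to their tables and indexes (the join through sys.allocation_units is an assumption that covers in-row data; LOB allocation units map slightly differently):

```sql
-- Which tables and indexes in the current database occupy the most buffer
-- pool memory? The output can drive warm-up scripts that read those
-- objects back into memory after an instance restart.
SELECT
    OBJECT_NAME (p.object_id) AS [Object],
    p.index_id,
    CAST (COUNT (*) AS BIGINT) * 8 / 1024 AS [BufferPoolMB]
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
    ON p.partition_id = au.container_id
WHERE bd.database_id = DB_ID ()
GROUP BY p.object_id, p.index_id
ORDER BY [BufferPoolMB] DESC;
```

A simple warm-up script built from this output could just scan the top objects (for instance, a SELECT COUNT (*) against each one, hinting the relevant index) as part of instance startup, so the working set is resident before the workload arrives.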

Adding more memory is one of the easiest ways to alleviate some performance issues (as a band-aid, or seemingly risk-free temporary fix), but don’t think it’s a simple thing to just max out the server memory and then forget about it. As you can see, more memory leads to more potential problems, and these are just a few things that spring to mind as I’m sitting in the back of class here in Sydney.

Be careful out there!