CPU Upgrade to AMD Ryzen Threadripper 2950X

About a year ago, I built a high-end desktop (HEDT) workstation based on the then top-of-the-line 14nm AMD Ryzen Threadripper 1950X processor. This is the machine that I use for most of my daily work, and I have been quite happy with it over the past year. Here are the main components that I used to build this system:

By design, the system uses 100% PCIe 3.0 NVMe NAND flash or Intel Optane storage. All AMD Ryzen Threadripper processors support 64 PCIe 3.0 lanes (with 4 reserved for the chipset), so I wanted to take advantage of that to get excellent total storage performance. This system has 16 physical cores (32 threads), 64GB of RAM, about 3.7TB of storage, and 10 Gbps Ethernet connectivity. It has four empty memory slots, so I can easily go to 128GB of RAM.

On August 31, 2018, AMD released the second-generation 12nm AMD Ryzen Threadripper 2950X processor (AnandTech has a great review here). This processor uses the improved Zen+ microarchitecture, which offers the Precision Boost 2 and XFR2 features to boost more processor cores more aggressively, based on available voltage and operating temperature. There is about a 3% IPC improvement, along with some minor base and boost clock speed increases.

After waiting a bit, I managed to find an “open-box” special on an AMD Ryzen Threadripper 2950X for $765.00 at my local Micro Center. It looked like someone had purchased the processor and opened the box, but never actually installed it, since there was no thermal grease residue or even any fingerprints on the heat spreader. Normally, Micro Center sells that processor for $849.99, which is $50.00 less than Newegg or Amazon.

While I was at it, I swapped out the original NZXT Kraken X62 CPU cooler for a Noctua NH-U14S TR4-SP3 CPU cooler, which is actually made for the huge Threadripper heat spreader. This Noctua cooler is quieter than the old cooler, and it keeps the CPU roughly 20 degrees Celsius cooler, both at idle and under a full load. The temperature difference is important for the XFR2 feature, which boosts more cores to higher clock speeds when there is extra thermal headroom, so a high-end CPU cooler pays off.

The CPU swap went even more smoothly than I expected. The hardest part was removing the old NZXT CPU cooler and its associated wiring. I was expecting to have to go into the BIOS setup during the first boot and confirm a CPU change, and then I expected that Windows would also want a reboot. I also thought I might even have to re-activate Windows 10 because of the new CPU, but none of this was required. The system booted right into Windows 10 Pro for Workstations with no complaints at all.

Here are the old and new CPU-Z and Geekbench 4 scores for this system.


Figure 1: Comparative Benchmark Scores

Subjectively, the system seems noticeably quicker for everyday common tasks. It’s hard to say how much of that is because I want the system to be faster after the upgrade…

Economically, it’s hard to make the case for this particular upgrade. If I sell the old Threadripper 1950X, that would make it much easier to justify. Another argument in favor is that AMD plans to maintain full backwards compatibility with existing AMD X399 motherboards when the 7nm Zen 2 Threadripper processors are released in mid-2019.



Figure 2: AMD Ryzen Threadripper 2950X


Figure 3: HWiNFO64 System Summary


Here is an AMD diagram of the architecture of the processor.



Figure 4: AMD Ryzen Threadripper 2950X Architecture


I really think that an AMD Threadripper system is the best choice for a general-purpose HEDT workstation, especially from a performance/dollar perspective. Intel still has a small single-threaded performance advantage with some of their processors, but the gap is pretty small now, and you will have to spend quite a bit more money to get a comparable Intel-based system.




AMD EPYC 7371 Processor in Q1 2019

At the Supercomputing 2018 conference, AMD announced a special high-frequency SKU in its first-generation 14nm AMD EPYC “Naples” line of server processors: the AMD EPYC 7371. This upcoming processor will have 16 cores and 32 threads, with a base clock speed of 3.1 GHz, boosting up to 3.6 GHz on all 16 cores and up to 3.8 GHz on eight cores. It also has a 64MB L3 cache.

This is a significant clock speed increase over the existing AMD EPYC 7351, which has a base clock speed of 2.4 GHz, with a max boost speed of 2.9 GHz. This is especially relevant for SQL Server usage, where licensing is based on core counts, not clock speeds or actual performance. A base clock speed increase of 29.2% is going to be quite noticeable on a heavily utilized SQL Server instance.

I hope that this SKU is offered in mainstream vendor systems like the Dell PowerEdge R7415 and R7425.

AnandTech has some more details here.

IEPUM2017: Immersion Event on Planning and Implementing an Upgrade/Migration to SQL Server 2017

We’ve just announced the next round of live, online training classes for the first quarter of 2019. Paul has more details about all six classes that we are offering here. I will be teaching IEPUM2017: Immersion Event on Planning and Implementing an Upgrade/Migration to SQL Server 2017 on January 29-31, 2019.

I think this is a very interesting and useful class that is especially relevant during 2019 because of the end of extended support for SQL Server 2008/2008 R2, the end of mainstream support for SQL Server 2014, the release of Windows Server 2019, the upcoming release of SQL Server 2019, new processor releases from both Intel and a newly competitive AMD, and many new memory and storage-related developments that affect SQL Server. I wrote about some of these factors here.

Here is the module list:

  • Module 1: Upgrade Planning
  • Module 2: Hardware and Storage Selection
  • Module 3: SQL Server 2017 Installation and Configuration
  • Module 4: Upgrade Testing
  • Module 5: Migration Planning
  • Module 6: Production Migration Methods

This class is relevant for upgrades/migrations to SQL Server 2016, SQL Server 2017, and SQL Server 2019. I think a lot of organizations are going to be upgrading/migrating to a modern version of SQL Server during 2019-2020, and I want to teach as many people as possible how to avoid doing a “blind migration”, where they don’t do the necessary planning, analysis, and testing, and end up having a poor upgrade/migration experience.

You can jump right to the registration page here. I hope to see you in this class!




Building a Modern AMD Desktop Development Workstation

Over the past 12-18 months, it has become relatively easy to build an extremely powerful AMD desktop development workstation, with plenty of compute, memory, and storage performance and capacity to support some pretty serious workloads. You can also build such a system at a relatively affordable cost, compared to what you had to spend in years past. This is especially true compared to previous and current generation Intel HEDT systems.

AMD Ryzen Threadripper

If you want the most total capacity and performance possible from an AMD desktop system, you will want to build a machine based on one of the AMD Ryzen Threadripper high-end desktop (HEDT) processors. The 12nm, second-generation AMD Ryzen Threadripper processors have up to 32 physical cores (and 64 threads, with SMT enabled), and they support up to 128GB of DDR4 RAM (eight 16GB DIMMs), along with 60 PCIe 3.0 lanes for I/O or graphics. When 32GB desktop DIMMs become available in 2019, this processor will support those as well.

The top-of-the-line AMD Ryzen Threadripper 2990WX has 32 physical cores, which sounds impressive, but most people are going to get better performance for typical workloads from the more affordable AMD Ryzen Threadripper 2950X, which has 16 physical cores. The 2950X has higher base and boost clock speeds than the 2990WX, and a different core-to-memory arrangement that yields better performance in many benchmarks. These second-generation Threadripper processors were introduced in August of 2018, and they are drop-in compatible (with a BIOS update) with existing AMD X399-based motherboards. AMD has also stated that the next-generation Threadripper processors, which will probably be 7nm, will also be compatible with existing X399 motherboards, which makes this platform more future-proof.


Component Selection

If you are building a new desktop system, there is a short list of components that you will need in order to have a basic, functioning system. As a starting point, that list includes these components:

  • Case
  • Motherboard
  • Processor
  • Processor cooler
  • Memory
  • Video card
  • Storage


Case

Unless you want to just have your system sitting on a test bench or literally on top of your real physical desktop, you will need some sort of case to hold your components. Most AMD X399-based motherboards use the ATX form factor, so you will need a case that is large enough to hold an ATX motherboard. You also need to think about whether you are more concerned about thermal performance or acoustic performance for your case.

The 2nd generation Ryzen Threadripper processors will deliver much higher boost clock speeds when they are running at a lower temperature, so having good thermal performance from your case is actually pretty important for processor performance. Better thermal performance for a case is usually pretty closely related to air flow through the case, and better air flow usually means more ambient noise from the case. Some people want their system to be as quiet as possible, but getting better acoustic performance usually means having less air flow, which hurts thermal performance.

Gamers Nexus is one of the best sites for thorough, accurate case reviews. They have a very relevant post on thermal vs. acoustic performance here. A couple of top-rated, affordable cases for thermal performance are the Cooler Master H500P Mesh and the NZXT H700. A couple of top-rated, affordable cases for acoustic performance are the be quiet! Silent Base 601 and the Fractal Design Define R6.


Motherboard

The motherboard is the heart of your system, determining how many PCIe expansion slots and M.2 and SATA connectors you get, and what kind of on-board network support you have. AMD X399-based TR4 motherboards are relatively expensive, so be prepared for that. I prefer ASRock X399 motherboards because they have high-quality components and very good storage and network features (such as three M.2 Ultra slots and an included 10Gbps Aquantia NIC on some models). The exact model I have is the ASRock Fatal1ty X399 Professional Gaming, which, despite the name, is a great choice for a workstation. AnandTech has a review of this model here. Newegg has this model here, while Micro Center has it here.


Processor

I think you will want either the AMD Ryzen Threadripper 2950X or the previous-generation AMD Ryzen Threadripper 1950X (both of which will work in the same AMD X399 motherboards). The newer 2950X probably has 10-20% better performance, depending on the benchmark, but it will be a little more expensive than the discounted, older 1950X. The best prices I am aware of are at Micro Center (if you happen to live near one of their 25 physical store locations). Here is their listing for the AMD Ryzen Threadripper 2950X and the AMD Ryzen Threadripper 1950X. BTW, if you purchase a processor and an eligible motherboard together at Micro Center, you will get an additional $30 bundle discount.


Processor Cooler

AMD Ryzen Threadripper processors do NOT come with a bundled CPU cooler. Even if they did, you would probably want to get a better aftermarket CPU cooler. This is even more important for Threadripper, because of its very large physical size and its sensitivity to temperature (due to XFR2) when it comes to boost clock speeds. Many existing CPU coolers do not match up with the large Threadripper heat spreader, so you do not get 100% coverage of it. This includes many existing AIO closed-loop liquid-cooling systems.

One affordable, highly-reviewed air cooler is the Noctua NH-U12S TR4-SP3. Another benefit of this specific model is that it is not so large that it blocks RAM or PCIe slots on most motherboards.


Memory

AMD X399 motherboards have eight DDR4 memory slots, each of which can hold a 16GB DIMM. DDR4 memory prices are finally starting to come down after being quite high over the past couple of years, so you can currently get a kit with two 16GB sticks of DDR4-3000 for about $310.00. I like Corsair memory, and I have had good results with these Corsair Dominator Platinum modules in my AMD Ryzen Threadripper 1950X system.

Another important consideration for memory performance is how many memory channels are being used by the system. If you only have two DIMMs, you are in dual-channel mode, while four or more DIMMs will put you in quad-channel mode, which gives you much more memory bandwidth. Higher-speed memory, such as DDR4-2800 or faster, seems to be relatively important for AMD Ryzen Threadripper processors.


Video Card

AMD Ryzen Threadripper processors do not have integrated graphics, so you are going to need some sort of discrete PCIe video card. The exact card you want depends on your performance requirements and budget, and on factors such as how many monitors you want to drive, and at what resolution. You also need to consider what kind of connectors you need for your existing, or perhaps new, monitor(s). DisplayPort or Mini DisplayPort is the best kind of connection, followed by HDMI and the older DVI connectors.

Assuming you aren’t going to use your system for heavy-duty gaming or serious CAD work, you can probably get by with something like an Nvidia GeForce GTX 1060 or better. Now that the newer, more expensive Nvidia RTX 20xx cards are available, the supply of GTX 10xx cards is eventually going to dry up. Luckily, the GPU-based crypto-mining craze has died down, so video card prices and availability are nearly back to normal levels. For an AMD-based card, you want an AMD Radeon RX 580 or better.


Storage

This is a critical area for a high-end workstation machine. There are many storage technologies, interfaces, and protocols to choose from, and just getting “an SSD” without being aware of these choices would be a mistake for this type of machine. I would argue that you should try to go 100% with PCIe 3.0 x4 NVMe storage, whether it uses an add-in card (AIC) PCIe form factor or an M.2 PCIe form factor. Older SATA III AHCI SSDs are not a good choice for a system like this.

The best choice for your system drive is an Intel Optane storage card, such as the Intel Optane SSD 905P or the slightly older Intel Optane 900P. These drives are admittedly expensive (ranging from about $2100 for a 1.5TB 905P to $270 for a 280GB 900P), but they all use Intel 3D XPoint technology (which is better and faster than NAND flash). All of these cards are PCIe 3.0 x4, and they use the NVMe protocol. There is no performance difference between the smaller capacity models and the larger capacity models, so your choice comes down to your budget and your space needs. If you want the absolute best performance for booting and patching, this is the way to go for your system drive.

If that is not possible for budget reasons, then you should be looking at a high-end M.2 NVMe flash storage card, probably from Samsung. The two best current models are the Samsung 970 PRO and the more affordable Samsung 970 EVO. The largest capacity for the 970 PRO is 1TB, while the 970 EVO has a 2TB model. You probably won’t notice any performance difference between the 970 PRO and the 970 EVO. As always, larger capacity NAND flash SSDs from the same model line have noticeably better performance than smaller capacity drives.

If you really want to go old-school, with SATA III AHCI SSDs, then the Samsung 860 PRO does have up to a 4TB capacity model, along with a more affordable Samsung 860 EVO that also has a 4TB model. Any SATA III AHCI NAND-flash SSD is going to be limited to about 550MB/sec of sequential throughput, and have much higher latency than a PCIe NVMe NAND flash drive.

My preference would be a 280GB or 480GB Intel 900P for the boot drive, and then one or more M.2 Samsung 970 PRO or EVO drives for other purposes.


Other Considerations

External interfaces (for network connectivity and external storage) are also very important for a high-end workstation. Most X399 motherboards will have two 1Gbps Intel NICs, but some also have a 10Gbps Aquantia NIC. Even if your motherboard doesn’t have a 10Gbps NIC, you can add one later for less than $100.00. You will need a 10Gbps switch in order to have 10Gbps connectivity, but prices on those continue to decrease. You could start out with a Netgear GS110MX switch or a Netgear XS708E switch. You would also need something else with 10Gbps connectivity, such as a file server or NAS, in order to benefit from this.

Having USB 3.1 Gen2 (10Gbps) connectivity, whether it is a Type-A or a Type-C port, is very useful for connecting to an external SSD (which could even be an M.2 NVMe drive). You could also add a Thunderbolt 3 AIC, for 40Gbps connectivity to a TB3 external drive or drive array.

Having this many PCIe 3.0 lanes available (there are 64 total, with four reserved for the chipset on all AMD Ryzen Threadripper processors) gives you a lot of I/O flexibility. For example, a single video card might use either 8 or 16 PCIe lanes, while an Intel 900P storage card would use 4 PCIe lanes, and an M.2 PCIe NVMe storage card would use 4 lanes.

Here is an example of how you could use some of these available PCIe 3.0 lanes in a Threadripper system.

  • 16 lanes     Nvidia GeForce GTX-1060 video card
  •   4 lanes     480GB Intel 900P
  •   4 lanes     1TB Samsung 970 EVO M.2
  •   4 lanes     1TB Samsung 970 EVO M.2
  •   4 lanes     1TB Samsung 970 EVO M.2
  •   4 lanes     ASRock TB3 AIC

Depending on your component choices, you are probably looking at spending anywhere from $2500.00 on up for a system like this, so this is a serious investment for most people. You will have a very flexible, powerful, and relatively future proof system that is capable of doing some serious work.





SQL Server Diagnostic Information Queries for November 2018

This month, I have just made some more minor improvements to most of the query sets.

I have a T-SQL script that you can use to check whether your instance of SQL Server has been patched to mitigate against the Spectre/Meltdown CPU vulnerability. This works for SQL Server 2008 through SQL Server 2017, for on-premises and cloud-based VM (IaaS) usage. You can get the query for this here.
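A check like this boils down to comparing your current build number against Microsoft's published list of patched builds. As a minimal sketch only (this is not the linked script itself), you can retrieve the information needed for that comparison like this:

    -- Retrieve the current build and edition information, so it can be compared
    -- against Microsoft's published list of Spectre/Meltdown-patched builds
    SELECT SERVERPROPERTY('ProductVersion') AS [Product Version],
           SERVERPROPERTY('ProductLevel') AS [Product Level],
           SERVERPROPERTY('Edition') AS [Edition];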

I often make additional minor updates to the queries periodically during the month, so if you are in doubt, downloading the latest version is always a good idea.

Rather than having a separate blog post for each version, I have just put the links for all ten major versions in this single post. There are two separate links for each version. The first one is the actual diagnostic query script, and the one below it is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the latest versions of these queries for Azure SQL Database, SQL Server 2019, SQL Server 2017, SQL Server 2016 SP2, SQL Server 2016, and SQL Server 2014:

Azure SQL Database Diagnostic Information Queries

Azure SQL Database Blank Results Spreadsheet

SQL Server 2019 Diagnostic Information Queries

SQL Server 2019 Blank Results Spreadsheet

SQL Server 2017 Diagnostic Information Queries

SQL Server 2017 Blank Results Spreadsheet

SQL Server 2016 SP2 Diagnostic Information Queries

SQL Server 2016 SP2 Blank Results Spreadsheet

SQL Server 2016 Diagnostic Information Queries

SQL Server 2016 Blank Results Spreadsheet

SQL Server 2014 Diagnostic Information Queries

SQL Server 2014 Blank Results Spreadsheet

Here are links to the most recent versions of these scripts for SQL Server 2012 and older:

Since SQL Server 2012 and older are out of mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward. I started this policy a while ago, and so far, I have not heard any complaints.

SQL Server 2012 Diagnostic Information Queries

SQL Server 2012 Blank Results Spreadsheet

SQL Server 2008 R2 Diagnostic Information Queries

SQL Server 2008 R2 Blank Results Spreadsheet

SQL Server 2008 Diagnostic Information Queries

SQL Server 2008 Blank Results Spreadsheet

SQL Server 2005 Diagnostic Information Queries

SQL Server 2005 Blank Results Spreadsheet

The basic instructions for using these queries are to run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (and my comments on how to interpret the results), and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while being connected to the master database is a very common mistake that I see people making when they run these queries.
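A quick way to avoid that mistake is to confirm your current database context before running the database-specific queries; here is a trivial example (the database name is hypothetical):

    -- Confirm which database you are currently connected to
    SELECT DB_NAME() AS [Current Database];

    -- If this returns 'master', switch to the database you actually want to examine, e.g.:
    -- USE YourUserDatabase;    -- hypothetical database name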

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect. I am not planning on moving these to GitHub any time soon.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases at compatibility level 80, which breaks many DMV queries, or where someone is running an incorrect version of the script for their version of SQL Server.

It is very important that you use the version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.
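As an illustration only (this is not the actual check in my scripts), a version guard for the SQL Server 2017 script could look something like this:

    -- Hypothetical version guard: SQL Server 2017 build numbers start with 14.
    IF LEFT(CONVERT(varchar(128), SERVERPROPERTY('ProductVersion')), 3) <> '14.'
        RAISERROR('This script is intended for SQL Server 2017. Please use the script that matches your version of SQL Server.', 16, 1);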

If you want to understand how to better run and interpret these queries, you should consider listening to my four related Pluralsight courses, which are SQL Server 2017: Diagnosing Configuration Issues with DMVs, SQL Server 2014 DMV Diagnostic Queries – Part 1, SQL Server 2014 DMV Diagnostic Queries – Part 2, and SQL Server 2014 DMV Diagnostic Queries – Part 3. All four of these courses are pretty short and to the point, at 106, 67, 77, and 68 minutes respectively. Listening to these four courses is really the best way to thank me for maintaining and improving these scripts…

Please let me know what you think of these queries, and whether you have any suggestions for improvements. Thanks!

Automatic Plan Correction is an Enterprise Edition-Only Feature in SQL Server 2017

One interesting and useful new feature in SQL Server 2017 is automatic plan correction. This feature, which relies on having Query Store enabled for your database, looks for query plan choice regressions, where the CPU time for a given query has significantly increased after a plan change. This feature essentially automates the usage of sp_query_store_force_plan for CPU-related plan regressions. Microsoft has more details about automatic tuning here.
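You can see what automatic tuning has detected and recommended for the current database by querying the sys.dm_db_tuning_recommendations DMV; here is a minimal example:

    -- View plan choice regressions detected by automatic tuning in the current database
    SELECT name, reason, score, state, details
    FROM sys.dm_db_tuning_recommendations;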

One important detail that isn’t easy to find (in Microsoft documentation) about automatic plan correction is whether it is available in SQL Server 2017 Standard Edition, or whether it is an Enterprise Edition-only feature. One way to easily confirm whether it is Enterprise-only is to try to enable the feature on a SQL Server 2017 Standard Edition instance and see what happens.

You use the T-SQL shown in Figure 1 to enable automatic plan correction for the current database.



Figure 1: T-SQL to Enable Automatic Plan Correction in SQL Server 2017
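The statement shown in Figure 1 is essentially the documented ALTER DATABASE syntax for enabling the feature, which looks like this for the current database:

    -- Enable automatic plan correction (FORCE_LAST_GOOD_PLAN) for the current database
    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);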


As it turns out, this feature is not available unless you are using SQL Server 2017 Enterprise Edition, as you can see in Figure 2. This is another reason that you should prefer Enterprise Edition if your budget allows it.



Figure 2: Error Message from SQL Server 2017 Standard Edition

If you want to learn more about this feature, you should watch Erin Stellato’s Pluralsight course, SQL Server: Automatic Tuning in SQL Server 2017 and Azure SQL Database. You can also read her article on SQLPerformance.com, Automatic Plan Correction in SQL Server.

New Intel Desktop Processor Families

Intel formally announced its 9th generation Core mainstream desktop processors on October 8, 2018 at its Fall Launch Event in New York. So far, they have announced three members of this family. The prices below are the MSRP prices. Actual street prices are currently a little higher, especially for the Core i9-9900K. Supply is also a little tight so far.

The Core i9-9900K has a base clock speed of 3.60 GHz, a Turbo clock speed of 5.0 GHz, and 16MB of L3 cache. The Core i7-9700K has a base clock speed of 3.60 GHz, a Turbo clock speed of 4.9 GHz, and 12MB of L3 cache. Finally, the Core i5-9600K has a base clock speed of 3.70 GHz, a Turbo clock speed of 4.6 GHz, and 9MB of L3 cache. These processors are aimed at gaming and general desktop usage. They are competitors to AMD’s Ryzen 2xxx mainstream processors.

You may have noticed that Intel has dropped hyper-threading (HT) from the Core i7 line (which is something that line has always had, and which was one of the main differentiators over the Core i5 line in the past). The lack of HT means a loss of about 25-30% of your overall CPU capacity when you have the same number of physical cores.

These new processors have soldered thermal interface material (rather than thermal paste) which will help with heat dissipation, and let them run more cores at slightly higher (100-200MHz) clock speeds more often. They also have hardware-level mitigation for some of the Meltdown CPU exploits, which will result in better performance than software mitigations.

These new processors will all work in existing motherboards with Intel 300 series chipsets (with an updated BIOS), but Intel is also introducing a new Z390 chipset, which is a very slight improvement over the previous Z370 chipset, primarily adding native USB 3.1 Gen 2 support and built-in 802.11ac Wi-Fi support. Here is the list of Z390 motherboards from Newegg, while AnandTech has a roundup of Z390 motherboards here.

Intel also announced a new generation of high-end desktop (HEDT) processors, which are the Core X-Series processors. These processors use the LGA2066 socket on X299 motherboards. There are seven SKUs in this family.

These processors are designed to compete with the AMD Ryzen Threadripper HEDT processors. Unlike previous-generation Intel HEDT processors, every one of the SKUs in this family supports 48 PCIe 3.0 lanes.




SQL Server Diagnostic Information Queries for October 2018

This month, I have just made some minor improvements to most of the query sets.

I have a T-SQL script that you can use to check whether your instance of SQL Server has been patched to mitigate against the Spectre/Meltdown CPU vulnerability. This works for SQL Server 2008 through SQL Server 2017, for on-premises and cloud-based VM (IaaS) usage. You can get the query for this here.

I often make additional minor updates to the queries periodically during the month, so if you are in doubt, downloading the latest version is always a good idea.

Rather than having a separate blog post for each version, I have just put the links for all ten major versions in this single post. There are two separate links for each version. The first one is the actual diagnostic query script, and the one below it is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the latest versions of these queries for Azure SQL Database, SQL Server 2019, SQL Server 2017, SQL Server 2016 SP2, SQL Server 2016, and SQL Server 2014:

Azure SQL Database Diagnostic Information Queries

Azure SQL Database Blank Results Spreadsheet

SQL Server 2019 Diagnostic Information Queries

SQL Server 2019 Blank Results Spreadsheet

SQL Server 2017 Diagnostic Information Queries

SQL Server 2017 Blank Results Spreadsheet

SQL Server 2016 SP2 Diagnostic Information Queries

SQL Server 2016 SP2 Blank Results Spreadsheet

SQL Server 2016 Diagnostic Information Queries

SQL Server 2016 Blank Results Spreadsheet

SQL Server 2014 Diagnostic Information Queries

SQL Server 2014 Blank Results Spreadsheet

Here are links to the most recent versions of these scripts for SQL Server 2012 and older:

Since SQL Server 2012 and older are out of mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward. I started this policy a while ago, and so far, I have not heard any complaints.

SQL Server 2012 Diagnostic Information Queries

SQL Server 2012 Blank Results Spreadsheet

SQL Server 2008 R2 Diagnostic Information Queries

SQL Server 2008 R2 Blank Results Spreadsheet

SQL Server 2008 Diagnostic Information Queries

SQL Server 2008 Blank Results Spreadsheet

SQL Server 2005 Diagnostic Information Queries

SQL Server 2005 Blank Results Spreadsheet

The basic instructions for using these queries are to run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (and my comments on how to interpret the results), and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while being connected to the master database is a very common mistake that I see people making when they run these queries.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect. I am not planning on moving these to GitHub any time soon.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases at compatibility level 80, which breaks many DMV queries, or where someone is running an incorrect version of the script for their version of SQL Server.

It is very important that you use the version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.

If you want to understand how to better run and interpret these queries, you should consider listening to my four related Pluralsight courses, which are SQL Server 2017: Diagnosing Configuration Issues with DMVs, SQL Server 2014 DMV Diagnostic Queries – Part 1, SQL Server 2014 DMV Diagnostic Queries – Part 2, and SQL Server 2014 DMV Diagnostic Queries – Part 3. All four of these courses are pretty short and to the point, at 106, 67, 77, and 68 minutes respectively. Listening to these four courses is really the best way to thank me for maintaining and improving these scripts…

Please let me know what you think of these queries, and whether you have any suggestions for improvements. Thanks!

PASS Summit 2018

I will be presenting Migrating to SQL Server 2017 at the PASS Summit 2018 in Seattle, WA. I’m actually presenting a half-day session on Friday, November 9, 2018 in Room 6E. Here is the abstract:

How do you design and implement a safe and successful migration from an older version of SQL Server to SQL Server 2017 with no data loss and virtually no downtime? What if you have a limited hardware budget for the upgrade effort and you are worried about the core-based licensing in SQL Server 2017? How can you choose your hardware wisely in light of the new licensing model? How can you convince your organization that the time is right to upgrade to SQL Server 2017? This session will cover several different methods for migrating your data to SQL Server 2017 while meeting these objectives and minimizing your hardware and licensing costs.

I will also be covering some SQL Server 2019 considerations in this session.

This is a fun session that I really enjoy presenting. I think this subject is especially relevant with the upcoming end of extended support for SQL Server 2008/SQL Server 2008 R2 and the end of mainstream support for SQL Server 2014, both of which happen on July 9, 2019.

I honestly believe that CY 2019 is going to be an ideal time for many organizations to migrate from legacy versions of SQL Server (SQL Server 2014 and older) to a modern version of SQL Server (SQL Server 2016 and newer).

This is because you will have the opportunity to upgrade your entire data platform stack with significant new releases, including new processor families from Intel and AMD, wider availability of Intel Optane DC Persistent Memory, a new server operating system release (Windows Server 2019), and a new SQL Server release (SQL Server 2019).

BTW, if you are going to a Tuesday pre-con session at PASS, you should strongly consider Kimberly’s Query Performance Problems from Estimates, Statistics, Heuristics, and Cardinality. I have more information about why this session will be a great choice here.

You can register for the PASS Summit 2018 here.




SQLSaturday Oregon

I will be delivering a full day pre-con session called Migrating to SQL Server 2017 on November 2, 2018 in Portland, OR. Here is the abstract:

How do you design and implement a safe and successful migration from an older version of SQL Server to SQL Server 2017 with no data loss and virtually no downtime? What if you have a limited hardware budget for the upgrade effort and you are worried about the core-based licensing in SQL Server 2017? How can you choose your hardware wisely in light of the new licensing model? How can you convince your organization that the time is right to upgrade to SQL Server 2017? This session will cover several different methods for migrating your data to SQL Server 2017 while meeting these objectives and minimizing your hardware and licensing costs.

The early-bird price for this is $149.00 until October 21, 2018. You can register for this session here.

I will also be presenting High Availability/Disaster Recovery 101 at the actual SQLSaturday Oregon event on November 3, 2018. Here is the abstract:

How do you design a SQL Server 2017 infrastructure in order to meet specific Recovery Time Objective (RTO) and Recovery Point Objective (RPO) service levels? There are many aspects to consider, from technology choices and licensing, to policies and procedures. This session outlines and compares the various HA/DR technologies available in SQL Server 2017, showing how you can combine them to design and build a solution to help meet your HA/DR goals. This session also teaches you how to formulate policies that enable effective use of technology in your organization.

You can register for SQLSaturday Oregon here.

SQLSaturday #808 - Oregon 2018

SQLSaturday Oregon is one of the larger SQLSaturday events in the United States, and they have a great roster of speakers this year. Portland is a fun city with a lot of good breweries and tap rooms.

I hope to see you there!