Building a Desktop Workstation for SQL Server Development and Testing (August 2016)

Back in March of 2014, I wrote a fairly long blog post called Building a Workstation for SQL Server 2014 Development and Testing, with an updated version in September of 2015, both of which still generate quite a few hits and e-mails. Since it is now about twelve months later, I thought it was a good time to update this information to cover the latest available hardware choices.

With the current selection of high-performance and very affordable desktop computer components, it is not very difficult to assemble an extremely high performance workstation for SQL Server development and testing at a very reasonable cost. Depending on how much performance you want and what your available budget is, you can take several different routes to get this accomplished.

At the high end of the spectrum, you can get a Socket 2011 v3 motherboard with an Intel Xeon E5-2600 v4 product family “Broadwell-EP” processor or an Intel High-End Desktop Processor (HEDT) product family “Broadwell-E” processor, up to 128GB of ECC DDR4 RAM, and multiple PCIe flash storage cards, spending a pretty significant amount of money depending on your exact hardware and storage choices. There are also workstation-class motherboards that let you have two Intel Xeon E5-2600 v4 processors and ECC RAM, so that you can have even more memory and total processor cores.

In the middle, you can build a very powerful system using the current 14nm Intel Core i7-6700K “Skylake” processor, which uses a current generation LGA 1151 motherboard and DDR4 memory. Skylake processors require a newer Intel chipset, and many of the available Skylake motherboards use the current high-end Intel Z170 Express chipset. Aside from DDR4 support, the most interesting improvement in Z170 Express is an increased number of PCIe lanes. The Z170 chipset supplies a total of 20 PCI-Express lanes at the PCH, which, in conjunction with the CPU’s 16 PCIe 3.0 lanes, gives a platform total of 36. Last generation’s Z97 Express chipset coupled with Devil’s Canyon or Haswell CPUs only allowed for 24 lanes in total, and all of the lanes from the PCH were only PCIe 2.0 compliant. In contrast, all of the Z170’s PCIe lanes are Generation 3.0 compliant, while retaining backwards compatibility with the PCIe 2.0 and 1.0 specifications. This means you have a lot more I/O bandwidth available for storage. You can also have up to 64GB of RAM with a Z170-based system.
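To put those lane counts in rough bandwidth terms, here is a quick back-of-the-envelope sketch in Python. The per-lane throughput figures (about 500 MB/s for PCIe 2.0 and about 985 MB/s for PCIe 3.0, after encoding overhead) are my own approximations, not numbers from the chipset datasheets:

```python
# Rough comparison of total platform PCIe bandwidth for Z97 vs. Z170.
# Per-lane throughput values are approximations after encoding overhead.
GEN2_MB_S = 500   # PCIe 2.0: 5 GT/s with 8b/10b encoding, ~500 MB/s per lane
GEN3_MB_S = 985   # PCIe 3.0: 8 GT/s with 128b/130b encoding, ~985 MB/s per lane

def platform_bandwidth(cpu_lanes, cpu_mb_s, pch_lanes, pch_mb_s):
    """Total theoretical one-direction bandwidth across CPU + PCH lanes."""
    return cpu_lanes * cpu_mb_s + pch_lanes * pch_mb_s

# Z97: 16 Gen3 lanes from the CPU, 8 Gen2 lanes from the PCH (24 total)
z97 = platform_bandwidth(16, GEN3_MB_S, 8, GEN2_MB_S)
# Z170: 16 Gen3 lanes from the CPU, 20 Gen3 lanes from the PCH (36 total)
z170 = platform_bandwidth(16, GEN3_MB_S, 20, GEN3_MB_S)

print(f"Z97 platform:  ~{z97:,} MB/s")
print(f"Z170 platform: ~{z170:,} MB/s")
```

Even as a rough estimate, this shows why the Z170 platform is so much more attractive for a storage-heavy SQL Server workstation.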

At the lower end of the spectrum, you can put together a system with a single Intel Core i7-4790K “Devil’s Canyon” processor using an Intel Z97 chipset, 32GB of non-ECC DDR3 RAM, and a single high-performance, 6Gbps consumer-class SSD, and still have a system with more processing power than many existing Production database servers. A system that uses this older generation architecture will still be quite powerful, but will be more economical than ever, since the newer Skylake family has been available for over a year now.

The upcoming 14nm 7th generation Kaby Lake desktop processors and Z270 chipsets probably won’t be available until at least January 2017, so the currently available hardware is the best you are going to be able to get for a little while longer.

If you are going to build a desktop system from scratch, you need eight basic components:

  1. Computer Case
  2. Power Supply
  3. Motherboard
  4. Processor
  5. Memory (RAM)
  6. Storage (magnetic or flash)
  7. Discrete Video Card (optional, not really necessary in most cases)
  8. Optical Drive (optional, becoming much less important)

This assumes that you have a keyboard, mouse, and one or more monitors. I’ll discuss each one of these components, with some tips for what you should consider as you are choosing them.

Computer Case

You will need some sort of case to hold your components (unless you want to leave them running on a test bench). Personally, I like mid-range, mid-tower cases from companies like Fractal Design, Antec, Cooler Master, and Corsair. Mid-Tower cases give you plenty of room for common ATX motherboards, and they usually have at least four to six internal 3.5” or 2.5” drive bays. Newer designs have special 2.5” mounting points for SSDs and front or top mounted USB 3.0 or USB 3.1 ports. Better cases are much easier to work with, and they often have much better cable management features (so you can route most of your cables in a separate space under the motherboard). This not only looks much nicer, but it gives you better airflow inside the case. You probably don’t really need a fancy, gaming-oriented case with LED lighting and a huge number of case fans (unless you like that sort of thing).

A decent case in the $50-100 range will usually have good quality components (such as quieter, larger diameter case fans), along with good thermal and noise management features. The Fractal Design Core 3300 is a good example of an affordable, good quality case for about $80.00. I also like the slightly more expensive Fractal Design Define R5. You can spend a little less on a case, or quite a bit more. Just make sure that the case will allow you to install the size of motherboard that you will be using. Another thing you might want to do in some situations is to replace the original case fans with something quieter and better such as the new Corsair ML120 or ML140 Magnetic Levitation fans.

Power Supply

You should invest in a decent quality power supply as opposed to the cheapest one you can find. You don’t want to go overboard and get a 1200 watt behemoth gaming-oriented power supply (unless you are building an extreme gaming rig with multiple, high-end video cards that really need that much power). For the kind of system that I am recommending, you can use a high quality 400-600 watt 80 PLUS (or better) modular power supply and have plenty of reserve power. Modular power supplies have detachable cables for things like SATA power, MOLEX power, PCIe power, etc., so you only need to plug in and use the cables you actually need.

Power supplies are much less efficient when they are only supplying a very small portion of their rated output. Getting a 1200 watt power supply because you think it must be “better” than a good 500-600 watt power supply is actually a waste of money, both for the initial cost of the power supply and for the electrical power costs over the life of your machine. The components that I am recommending will end up drawing about 30-40 watts at idle. I really like Seasonic power supplies, especially their fan-less, modular models such as the SS-400FL and the newer SS-520FL2. They are both completely silent, highly efficient, 80 PLUS Platinum rated power supplies. I also really like their latest Prime Titanium line of power supplies. A less expensive alternative that I like is the Corsair line of power supplies, such as the Corsair CX550M modular power supply.

Motherboard

The motherboard is what all of your other components plug into, so it is a critical component. You need to consider which processor you are going to be using, since there are several different processor socket types available, which will dictate your motherboard choices. The most common type in late 2016 is still LGA 1150, which works with the 4th generation, 22nm Intel Core processors (Haswell and Devil’s Canyon). You also need to consider the form factor of your motherboard: you can choose from ATX, micro-ATX, and mini-ITX, which refer to the size of the motherboard. Finally, you need to think about the chipset used on your motherboard.

The Intel Z97 Express chipset is their best chipset for an LGA 1150 motherboard. As you are looking at motherboards, you should be looking at the low-to-mid range Z97-based motherboards instead of the high-end gaming motherboards. The high-end gaming Z97 motherboards can be quite expensive, and they will probably have features (such as support for three discrete video cards) that you don’t really need for a SQL Server workstation or test server. Instead, make sure you choose a model that has four DDR3 RAM slots and at least six 6Gbps SATA III ports. You also might want a model that has one or more M.2 slots. A good example Z97 motherboard is the ASRock Z97 Extreme6/3.1.

A newer, slightly more expensive choice is a 6th generation, 14nm Intel Core processor (Skylake), combined with a Z170 chipset. Right now, there are two Skylake processor choices that are widely available: the Core i7-6700K and the Core i5-6600K. Skylake processors use the current Socket LGA 1151 and DDR4 RAM, so you will need a new motherboard and new memory if you are upgrading from an older platform. Unlike older Intel processors, these new Skylake processors do not come with a stock cooling fan bundled with the processor, so you will have to buy some type of decent CPU cooler. A good example Z170 motherboard is the ASRock Z170 Extreme7+. The reason I really like this motherboard is all of the I/O capacity and flexibility that it offers, with three PCIe 3.1 x16 slots and three Ultra M.2 PCIe 3.0 x4 slots.

If you are going to run Windows Server 2012 R2 for your host operating system, you should be aware that most Intel embedded NICs that you will find on desktop motherboards will refuse to let you install the NIC drivers with a Microsoft server operating system. In that case, you can buy an inexpensive ($15-20), non-Intel PCIe Gigabit Ethernet card that works just fine. If you are running Windows 10 for your host operating system, you won’t have this issue.

Processor

You can choose a modern Intel desktop processor that may well have much more raw processing power than many older two or four-socket production database servers. This is not an exaggeration, although it depends on the age of your production database server. You are far more likely to run into memory or I/O bottlenecks as you push a modern Intel desktop system than processor bottlenecks. For most people, an Intel Core i7-4790K processor will be your best choice (especially if you live near a Micro Center). It is a quad-core processor with hyper-threading (so you have eight logical cores) that runs at a base clock speed of 4.0GHz, with the ability to Turbo Boost to 4.4GHz. It runs very cool, and is easy to overclock even with the stock Intel processor cooler, although it is not really necessary to overclock this processor to get good performance. You can have a maximum of 32GB of DDR3 RAM with this processor, and it supports both VT-x and VT-d for better virtualization performance.

Most 4th generation Intel Core processors (Haswell) have pretty good integrated graphics built into the CPU package. The better models have HD 4600 graphics, which gives you more than enough performance for normal desktop usage and even some moderate gaming. There was a pretty big improvement in integrated graphics performance between the Ivy Bridge and Haswell processors, so it is much more feasible to simply use the integrated graphics instead of buying a separate, discrete video card. This will save you money and reduce your electrical power usage.

One big variable in the cost of using this processor is whether you live near a Micro Center computer store or not. Micro Center has 25 locations in the Continental United States, and they sell a few specific models of Intel processors at prices that no other company seems willing to match. They have been doing this for years, and it is their regular practice (so it is not a special sale or promotion). The only catch is that those processors are only available for in-store pickup (so no mail-order).

For example, Micro Center is currently selling the Intel Core i7-4790K processor for $289.99, while NewEgg is selling the exact same Intel Core i7-4790K processor for $339.99. Micro Center quite often does promotions where they will reduce the price of a motherboard by $30-$50 if you buy the motherboard with a qualifying processor. Their prices on motherboards, cases, memory, hard drives and SSDs are also quite competitive.

A newer Intel Core i7-6700K processor will cost $309.99 at Micro Center, and it will cost $349.99 at NewEgg. The other good thing about Micro Center is that the people who work in their computer components department are generally very knowledgeable and helpful. I have seen them patiently help many customers select appropriate components for the type of system they are trying to build.

Memory

If you select an LGA 1150, Z97 Express motherboard with four RAM slots, you can have up to 32GB of non-ECC DDR3 RAM in your system. You can get two 8GB sticks of 240-pin PC3 12800 DDR3 RAM for about $138.00, so it would be about $276.00 to get 32GB of RAM. This should be plenty for most development and testing workloads (including running multiple VMs), but if you really need more, you could make the jump to the LGA 2011 v3 platform that uses the more expensive six, eight and ten-core Intel Broadwell-E processors where you can have up to 128GB of DDR4 RAM.

DDR4 RAM is actually less expensive than DDR3 RAM now. For example, you can get two 8GB sticks of 288-pin PC4 17000 DDR4 RAM for about $74.00, so it would be about $148.00 to get 32GB of RAM. You can also now get 16GB DDR4 DIMMs, so you can have up to 64GB of RAM in a Z170 Express system.
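Using the example prices quoted above, a quick sketch of the cost comparison (these are street prices as of August 2016, so treat the numbers as illustrative only):

```python
# Cost comparison for 32GB of RAM, using the example kit prices from the post.
ddr3_kit = {"capacity_gb": 16, "price": 138.00}  # two 8GB PC3 12800 sticks
ddr4_kit = {"capacity_gb": 16, "price": 74.00}   # two 8GB PC4 17000 sticks

def cost_for(kit, target_gb):
    """Total cost to reach target_gb using whole kits."""
    kits_needed = target_gb // kit["capacity_gb"]
    return kits_needed * kit["price"]

for name, kit in [("DDR3", ddr3_kit), ("DDR4", ddr4_kit)]:
    total = cost_for(kit, 32)
    per_gb = kit["price"] / kit["capacity_gb"]
    print(f"32GB {name}: ${total:.2f}  (${per_gb:.2f}/GB)")
```

At these prices, DDR4 works out to well under half the cost per gigabyte of DDR3, which is one more reason to favor the newer Skylake/Z170 platform.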

One thing you will want to do as you are configuring your system is to go into your BIOS setup and turn on Extreme Memory Profile (XMP), so that you will get better memory performance. This can occasionally cause stability problems, depending on the type of memory that you have, but if that happens, you can always turn it back off.

Storage

You will need some type of storage for your system. Traditional magnetic hard drive prices continue to decline, so you can get a high-performance 4TB, 7200rpm SATA III drive with 128MB of cache, such as a 4TB Western Digital WD Black WD4004FZWX, for $221.99. For just a little more money, you can also get a much smaller, but much higher performance 6Gbps SATA III consumer-grade SSD, such as a 1TB Samsung 850 EVO SSD for $307.19. Solid state drive prices have come down a lot (as performance has increased) over the past couple of years, but they still cost roughly five to six times as much as conventional magnetic storage, per gigabyte.
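Using the two example drives above, you can work out the cost-per-gigabyte gap yourself:

```python
# Price-per-GB comparison for the two example drives quoted above.
hdd_price, hdd_gb = 221.99, 4000   # 4TB Western Digital WD Black WD4004FZWX
ssd_price, ssd_gb = 307.19, 1000   # 1TB Samsung 850 EVO

hdd_per_gb = hdd_price / hdd_gb
ssd_per_gb = ssd_price / ssd_gb

print(f"HDD: ${hdd_per_gb:.3f}/GB, SSD: ${ssd_per_gb:.3f}/GB, "
      f"ratio: {ssd_per_gb / hdd_per_gb:.1f}x")
```

The ratio lands at roughly 5.5x for these particular drives, which is in line with the five-to-six-times figure above.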

I really encourage you to use a modern, fast 6Gbps SATA III SSD for your boot drive since it will have an extremely dramatic, positive effect on how fast your system performs and “feels” in everyday use. It will boot faster, shut down faster, programs will load nearly instantly, and it will take much less time to install new software and Windows Updates. It is similar to the difference between a dial-up modem and a fast broadband connection. Once you start using a fast SSD, you will never want to go back to a conventional magnetic hard drive.

You want to make sure your fast 6Gbps SSD is plugged into a 6Gbps SATA III port (not one of the slower 3Gbps SATA II ports) on your motherboard. Otherwise, your fast SATA III SSD will be limited to about 275MB/sec for sequential reads and writes (which is still about twice as fast as a very fast traditional 7200rpm SATA hard drive). You also want to avoid the smallest capacity 128GB SSD models, since their performance is usually much lower than the larger capacity models from the same manufacturer and product line. This is because the smaller capacity models have fewer NAND chips and fewer data channels. Ideally, you would want a 250GB (or larger) 6Gbps SATA III SSD plugged into each SATA III port that you have available on your motherboard. This would give you lots of options for how to lay out your SQL Server data files, log files, tempdb files, and SQL Server backup files.

Of course, you may not want to spend that much money, so it is still common to have one or two SSDs, along with one or more conventional magnetic drives, in a desktop system. One of the luxuries of a desktop system compared to any laptop is that you can have a large number of internal drive bays and up to ten or twelve SATA III ports on the motherboard. You can also buy inexpensive PCIe SATA III cards to add even more SATA III ports to a desktop system.

If you want even more storage performance, you may want to consider a PCIe NVMe flash storage card such as the Intel 750 Series, which comes in 400GB, 800GB, and 1.2TB capacities. These cost slightly less than $1.00/GB, and their performance is about 4-5 times better than SATA III SSDs for reads and about 2-3 times better for writes. Another alternative is the Samsung 950 Pro Series of M.2 PCIe NVMe flash devices, which are available in 256GB and 512GB capacities. If you have one or more M.2 slots that support PCIe 3.0 x4, then you will also get tremendous performance from these.

Discrete Video Card

There are some situations where the Intel integrated graphics might not be enough for your needs. An example would be if you were doing work in applications such as AutoCAD that really place a lot of stress on your graphics performance. Another example is if you wanted to run multiple, large monitors on your system. Most motherboards that support the Intel integrated graphics only have two or three video connectors (such as a DisplayPort connector, a DVI connector, and an HDMI connector), which would limit how many monitors you could connect to the system. Depending on your processor choice, you may not have integrated graphics at all.

If you do decide to go with one or more discrete video cards, you can get quite decent performance for about $100-150.00 each (but you can spend much, much more). You may also need a power supply with multiple, supplemental PCIe power connectors, and you might even need a higher capacity power supply.

Optical Drive

Even though they are becoming much less important over time, it can still be useful to have a DVD-recorder optical drive in a desktop system. It just makes it easier to install the operating system and other software (although you can certainly install from a USB drive). It is also becoming much more common to simply mount an .iso file for doing something like installing SQL Server 2016. You can get bare, OEM DVD drives for about $15-20. Personally, I like to use an external USB optical drive to install the operating system, so I don’t have to have an internal optical drive taking up space and power in the system.

So, after all of this, how much money am I trying to convince you to spend?  Well, here is one example:

  1. Case            $80.00
  2. Power Supply    $100.00
  3. Motherboard     $180.00     (ASRock Z170 Extreme7+ from Micro Center)
  4. Processor       $309.00     (Intel Core i7-6700K from Micro Center)
  5. RAM             $148.00     (32GB of DDR4 RAM)
  6. Storage         $308.00     (One 1TB Samsung 850 EVO SSD)

Total System        $1,125.00

This system would have much better performance than a laptop that could cost several times as much. It would also have better performance than many production SQL Server database servers. It would be pretty easy to slice $300.00-$400.00 off of this system cost by choosing different components, and still have a very capable system.
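As a quick sanity check, the parts list above adds up like this:

```python
# Sanity check of the example parts list above (prices as quoted in the post).
parts = {
    "Case":         80.00,
    "Power Supply": 100.00,
    "Motherboard":  180.00,   # ASRock Z170 Extreme7+ from Micro Center
    "Processor":    309.00,   # Intel Core i7-6700K from Micro Center
    "RAM":          148.00,   # 32GB of DDR4 RAM
    "Storage":      308.00,   # One 1TB Samsung 850 EVO SSD
}

total = sum(parts.values())
print(f"Total system: ${total:,.2f}")
```

Swapping in a smaller SSD and a less expensive motherboard is where most of the suggested $300-$400 in savings would come from.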

Of course, a desktop system like this does not have redundant, server-class components or ECC RAM, so you would not want to use it in a production situation. It would probably still be much better (in terms of performance) than some ancient, out-of-warranty, retired server for development and testing purposes.

SQL Server 2014 Service Pack 2 CU1 is Available

On August 25, 2016, Microsoft released SQL Server 2014 Service Pack 2 CU1, which is build 12.0.5511.0. This cumulative update has 45 hotfixes in the public fix list. If you are running SQL Server 2014, this is the branch and build that you should try to be on as soon as you can do your testing and planning (since it is the “latest and greatest” SP and CU).

Currently, the SP1 and SP2 branches are not chronologically aligned as far as release dates go, but that is supposed to be fixed when SP1 CU9 and SP2 CU2 are released at the same time in October.

Speaking at 24 Hours of PASS on September 7, 2016

I will be presenting Dr. DMV: How to Use DMVs to Diagnose Performance Problems during the 24 Hours of PASS Preview Edition at 8AM Mountain Time (UTC –6) on September 7, 2016.

Here is the abstract:

SQL Server 2005 introduced Dynamic Management Views (DMVs) that allow you to see exactly what is happening inside your SQL Server instances and databases with much more detail than ever before. SQL Server 2016 adds even more capability in this area. You can discover your top wait types, most CPU intensive stored procedures, find missing indexes, and identify unused indexes, to name just a few examples. This session preview (which is applicable to SQL 2005-2016), presents and explains over seventy DMV queries that you can quickly and easily use to detect and diagnose performance issues in your environment. If you have ever been responsible for a mission critical database, you have probably been faced with a high stress, emergency situation where a database issue is causing unacceptable application performance, resulting in angry users and hovering managers and executives. If this hasn’t happened to you yet, thank your lucky stars, but start getting prepared for your time in the hot seat. This session will show you how to use DMV queries to quickly detect and diagnose the problem, starting at the server and instance level, and then progressing down to the database and object level. This session will show you how to properly analyze and interpret the results of every single query in the set, along with lots of information on how to properly configure your instance and databases.

You can register here.


Ideally, after listening to this one hour preview session on September 7, you will decide to sign up for my PASS Summit 2016 Pre-Conference session.

On Monday, October 24, 2016, I will be doing an all-day session on how to run and interpret my SQL Server diagnostic information queries. I have done many shorter versions of this session (such as 60 minutes, 75 minutes, or even a half-day) before, but I have always felt a little rushed as I went through the complete set of diagnostic queries, explaining how to interpret the results of each one, and also talking about related information that is relevant to each query.

Now, I will have a full day to go into much more detail, without having to hurry to cover everything. I will be using the SQL Server 2016 version of the diagnostic queries, which have even more useful information, including information about many new SQL Server 2016 features. If you are on an older version of SQL Server, most of the queries will still be relevant (depending on how old of a version of SQL Server you are using).

Based on past experience and feedback, Dr. DMV has always been a very popular session that people really seem to enjoy. This all-day, expanded version is going to be really fun and useful, and I hope to see you there!

You can register for the PASS Summit 2016 here.

SQL Server 2014 SP1 CU8 Available

Microsoft has released SQL Server 2014 Service Pack 1 Cumulative Update 8, which is Build 12.0.4468. This CU has 38 hotfixes in the public fix list. Since the SQL Server 2014 RTM branch is no longer a “supported service pack”, there is no corresponding CU for that branch.

The hotfix below seems interesting, although Microsoft is frustratingly vague about what constitutes a “high-end computer… with multiple cores”:

Decrease in performance and “non-yielding scheduler” errors caused by unnecessary spinlocks in SQL Server

With the introduction of contained databases in SQL Server, the database engine always performs a containment check before executing stored procedures. On high-end computers with multiple cores, this check may cause a decrease in performance because of internal spinlock contention, occasionally causing “non-yielding scheduler” errors.

There is also no corresponding CU for the SQL Server 2014 SP2 branch, since SQL Server 2014 SP2 was only released on July 11, 2016. This probably means that the CUs for SQL Server 2014 SP1 and SQL Server 2014 SP2 will not be synchronized, which tends to confuse people. Of course, the ultimate solution is to get on the SP2 branch as soon as you are ready.

SQL Server Diagnostic Information Queries for August 2016

This month, I have a new AlwaysOn AG query for SQL Server 2016 and several new improvements in the SQL Server 2014 and 2016 sets, along with additional comments and documentation in the SQL Server 2012, 2014 and 2016 sets. I have gotten quite a bit of interest in making a special version of these queries for Azure SQL Database, so I will be doing that during August.

The best way to learn how to interpret the results of all of these queries is to attend my all-day PASS Summit 2016 Pre-Conference Session on Monday, October 24, 2016.

Dr. DMV: How to Use DMVs to Diagnose Performance Problems

Rather than having a separate blog post for each version, I have just put the links for all six major versions in this single post. There are two separate links for each version: the first is the actual diagnostic query script, and the second is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the latest versions of these queries for SQL Server 2016, 2014 and 2012:

SQL Server 2016 Diagnostic Information Queries (August 2016)

SQL Server 2016 Blank Results

SQL Server 2014 Diagnostic Information Queries (August 2016)

SQL Server 2014 Blank Results

SQL Server 2012 Diagnostic Information Queries (August 2016)

SQL Server 2012 Blank Results

Here are links to the most recent versions of these scripts for SQL Server 2008 R2 and older:

Since SQL Server 2008 R2 and older are out of mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward. I started this policy a while ago, and so far I have not heard any complaints. I did update these queries slightly in January 2016, though.

SQL Server 2008 R2 Diagnostic Information Queries (CY 2016)

SQL Server 2008 R2 Blank Results

SQL Server 2008 Diagnostic Information Queries (CY 2016)

SQL Server 2008 Blank Results

SQL Server 2005 Diagnostic Information Queries (CY 2016)

SQL Server 2005 Blank Results

The basic instructions for using these queries are that you should run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (along with my comments on how to interpret them), and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while being connected to the master database is a very common mistake that I see people making when they run these queries.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases in 80 compatibility mode, which breaks many DMV queries, or that someone is running an incorrect version of the script for their version of SQL Server.

It is very important that you are running the correct version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.

If you want to understand how to better run and interpret these queries, you should consider listening to my three latest Pluralsight courses, which are SQL Server 2014 DMV Diagnostic Queries – Part 1, SQL Server 2014 DMV Diagnostic Queries – Part 2, and SQL Server 2014 DMV Diagnostic Queries – Part 3. All three of these courses are pretty short and to the point, at 67, 77, and 68 minutes respectively. Listening to these three courses is really the best way to thank me for maintaining and improving these scripts…

Please let me know what you think of these queries, and whether you have any suggestions for improvements. Thanks!

New TPC-E Results for SQL Server 2016

There have been two recent TPC-E OLTP benchmark results published for SQL Server 2016. These include one from Fujitsu and one from Lenovo.

The most recent result, from July 12, 2016, is for a four-socket FUJITSU Server PRIMERGY RX4770 M3 server that uses the latest generation, 14nm 2.2GHz Intel Xeon E7-8890 v4 processor (Broadwell-EX), with a TPC-E throughput score of 8,796.42. As is always the case with TPC-E benchmarks, the hardware vendor used the “flagship”, highest core count processor available from the latest processor family, in this case a 24-core processor. This helps achieve the highest possible TPC-E throughput score (which is a measure of the total processor capacity of the system), at the price of very high SQL Server 2016 licensing costs, since you would have to purchase 96 SQL Server 2016 Enterprise Edition core licenses. This would cost about $684K at full retail price. Fujitsu priced the SQL Server 2016 licenses at $647K in the Executive Summary report.

Another recent result from May 31, 2016 is for a four-socket Lenovo System x3850 X6 that is using the same Intel Xeon E7-8890 v4 processor. This system gets a TPC-E throughput score of 9,068.00, which is about 3% higher than the Fujitsu system. Both systems use a 36TB initial database size, while the Fujitsu system uses 2TB of RAM and the Lenovo system uses 4TB of RAM (which is the license limit for Windows Server 2012 R2). Both systems use all flash storage, with 2.5” SAS SSDs.

Unlike the old TPC-C OLTP benchmark, TPC-E does not require an unrealistically expensive storage subsystem to get good scores. As long as the storage subsystem is “good enough” so that it does not become a bottleneck, then the ultimate TPC-E bottleneck becomes processor performance.

Earlier this year, there were two competing results for two-socket systems from Lenovo and Fujitsu. On March 30, 2016, Fujitsu published a result for a two-socket FUJITSU Server PRIMERGY RX2540 M2 system using the latest generation, 14nm 2.2GHz Intel Xeon E5-2699 v4 processor (Broadwell-EP), with a TPC-E throughput score of 4,734.87. The Intel Xeon E5-2699 v4 has 22 physical cores, so the two-socket system has a total of 44 physical cores, which would need SQL Server 2016 licenses costing about $313K at full retail price. Fujitsu priced the SQL Server 2016 licenses at $296K in the Executive Summary report.
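These rough license cost figures are easy to reproduce. The sketch below assumes a full retail price of about $14,256 per SQL Server 2016 Enterprise Edition 2-core license pack (my assumption for the approximate list price at the time, which is how the ~$684K and ~$313K figures work out):

```python
# Full-retail SQL Server 2016 Enterprise Edition license cost estimate.
# The per-pack price is an assumption (approximate retail price in 2016).
PRICE_PER_2_CORE_PACK = 14_256

def ee_license_cost(total_cores):
    """Core licenses are sold in 2-core packs; assumes an even core count."""
    return (total_cores // 2) * PRICE_PER_2_CORE_PACK

print(f"96 cores (4 x 24-core E7-8890 v4): ${ee_license_cost(96):,}")
print(f"44 cores (2 x 22-core E5-2699 v4): ${ee_license_cost(44):,}")
```

The vendors' Executive Summary figures come in a bit lower than full retail, which is consistent with the discounted pricing they disclose.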

On March 24, 2016, Lenovo published a result for a Lenovo System x3650 M5 system using the same Intel Xeon E5-2699 v4 processors, with a TPC-E throughput score of 4,938.14, which is about 4% higher than the Fujitsu system. In this case, the Fujitsu system uses 1TB of RAM (with a 19TB initial database size), while the Lenovo system uses 512GB of RAM (with a 20TB initial database size). Both systems use all flash storage, with 2.5” SAS SSDs.

I know that this is a lot of numbers to be throwing around, so a summary of these four systems is shown in Table 1.

| System | Processor | Raw Score | Total Cores | Score/Core |
|---|---|---|---|---|
| Lenovo System x3850 X6 | E7-8890 v4 | 9,068.00 | 96 | 94.46 |
| Fujitsu PRIMERGY RX4770 M3 | E7-8890 v4 | 8,796.42 | 96 | 91.63 |
| Lenovo System x3650 M5 | E5-2699 v4 | 4,938.14 | 44 | 112.23 |
| Fujitsu PRIMERGY RX2540 M2 | E5-2699 v4 | 4,734.87 | 44 | 107.61 |

Table 1: Recent TPC-E Score Highlights
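The Score/Core column is simply the raw TPC-E throughput score divided by the total number of physical cores, which is a rough proxy for single-threaded processor performance. A quick Python sketch that recomputes that column from the raw figures in Table 1:

```python
# Score/Core = raw TPC-E throughput score / total physical cores,
# using the four results summarized in Table 1.
results = {
    "Lenovo System x3850 X6":     (9068.00, 96),
    "Fujitsu PRIMERGY RX4770 M3": (8796.42, 96),
    "Lenovo System x3650 M5":     (4938.14, 44),
    "Fujitsu PRIMERGY RX2540 M2": (4734.87, 44),
}

score_per_core = {name: round(score / cores, 2)
                  for name, (score, cores) in results.items()}

for name, value in score_per_core.items():
    print(f"{name}: {value} score/core")
```

Notice how the two-socket E5-2699 v4 systems come out well ahead of the four-socket E7-8890 v4 systems on this per-core measure, even though their raw throughput scores are much lower.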

 

This shows that four-socket Broadwell-EX systems scale relatively well compared to older Xeon E7 processor families, meaning that the drop in single-threaded performance compared to equivalent two-socket Xeon E5 processor families is not as large as it used to be. There is still a gap though, which means that you are losing some scalability as you make the jump from a two-socket system to a four-socket system. If you can split your workload across two database servers, you would be better off to have two, two-socket servers rather than one, four-socket server. You would have more total processor capacity, better single-threaded performance, more PCIe expansion slots and lower SQL Server license costs.

An even better alternative for most people would be to use a lower core count, “frequency optimized” processor, instead of the flagship processor. For example, if you used the eight-core, 3.2 GHz Intel Xeon E5-2667 v4 processor in a two-socket server, you would get the estimated results shown in Table 2.

| System | Processor | Raw Score | Total Cores | Score/Core |
|---|---|---|---|---|
| Estimated Two-Socket System | E5-2667 v4 | 2,611.91 | 16 | 163.24 |

Table 2: Estimated TPC-E Results

If you had four, two-socket systems with the eight-core, 3.2 GHz Intel Xeon E5-2667 v4 processor, instead of one, four-socket system with the 24-core 2.2 GHz Intel Xeon E7-8890 v4 processor, you would have about 15.2% more total processor capacity, about 72.9% better single-threaded performance, and a 33.3% lower SQL Server 2016 licensing cost (which would be about $227K in license savings). You would have the same total memory capacity, and more than three times the number of PCIe slots.
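Sketching that comparison in Python, using the scores from Tables 1 and 2 and again assuming roughly $7,128 per core at full retail (small differences from the figures quoted above are simply rounding):

```python
# Rough comparison: four two-socket E5-2667 v4 servers vs. one
# four-socket E7-8890 v4 server, using the TPC-E figures above.
# The ~$7,128/core figure is an assumed full retail price for a
# SQL Server 2016 Enterprise Edition core license.
PRICE_PER_CORE = 14256 / 2

e5_score, e5_cores = 2611.91, 16  # estimated two-socket result (Table 2)
e7_score, e7_cores = 9068.00, 96  # four-socket x3850 X6 result (Table 1)

capacity_gain = 4 * e5_score / e7_score - 1             # total throughput
single_thread_gain = (e5_score / e5_cores) / (e7_score / e7_cores) - 1
license_savings = (e7_cores - 4 * e5_cores) * PRICE_PER_CORE

print(f"Total capacity:  +{capacity_gain:.1%}")
print(f"Single-threaded: +{single_thread_gain:.1%}")
print(f"License savings: ${license_savings:,.0f}")
```

The four smaller servers give you roughly 15% more total throughput, roughly 73% better single-threaded performance, and save the cost of 32 core licenses, while also quadrupling your PCIe expansion capacity.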

Recent SQL Server 2012 and 2014 Updates

Microsoft has released a number of SQL Server Cumulative Updates and Service Packs over the past several weeks. For SQL Server 2014, these include:

June 20, 2016 – SQL Server 2014 RTM CU14 (12.0.2569)

June 20, 2016 – SQL Server 2014 SP1 CU7 (12.0.4459)

July 11, 2016 – SQL Server 2014 SP2 RTM (12.0.5000)

SQL Server 2014 RTM CU14 will be the last cumulative update for the SQL Server 2014 RTM branch, which is now an “unsupported service pack”. If you are still on the RTM branch, you should be planning to move to either SP1 or, preferably, SP2. SQL Server 2014 SP2 RTM has all of the fixes that are in SQL Server 2014 SP1 CU7, so there is no need to wait for SQL Server 2014 SP2 CU1 in order to “catch up” to the previous branches. It also has a number of new features and performance improvements (which you can read about here), so I think people are going to want to move to the SP2 branch relatively soon.

You can find the official Microsoft Build list for SQL Server 2014 here:

SQL Server 2014 Build Versions

For SQL Server 2012, we have these updates:

July 18, 2016 – SQL Server 2012 SP2 CU13 (11.0.5655)

July 18, 2016 – SQL Server 2012 SP3 CU4 (11.0.6540)

As always, I think you are better off to be on the latest Service Pack for whatever version of SQL Server you are using. For SQL Server 2012, the RTM and SP1 branches are both considered “unsupported service packs”. You need to be on either SP2 or SP3, preferably SP3.
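As a rough illustration of keeping track of these branches, here is a minimal Python sketch that maps a reported build number to the updates listed above. It only knows about the builds mentioned in this post, so it is nowhere near an exhaustive build list:

```python
# Map a reported SQL Server build string to the branch/update it
# corresponds to, using only the builds listed in this post.
# Illustrative sketch only, not an exhaustive or official build list.
KNOWN_BUILDS = {
    "12.0.2569": "SQL Server 2014 RTM CU14 (last CU for the RTM branch)",
    "12.0.4459": "SQL Server 2014 SP1 CU7",
    "12.0.5000": "SQL Server 2014 SP2 RTM",
    "11.0.5655": "SQL Server 2012 SP2 CU13",
    "11.0.6540": "SQL Server 2012 SP3 CU4",
}

def describe_build(build: str) -> str:
    """Return a description for a known build, or a fallback message."""
    return KNOWN_BUILDS.get(build, f"Unknown build: {build}")

print(describe_build("12.0.5000"))  # prints "SQL Server 2014 SP2 RTM"
```

In practice, you would compare the result of SELECT SERVERPROPERTY('ProductVersion') against the full official build lists linked below.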

You can find the official Microsoft Build lists for SQL Server 2012 SP3 and SP2 here:

SQL Server 2012 SP3 build versions

SQL Server 2012 SP2 build versions

Finally, if you or your organization are still reluctant to deploy SQL Server Cumulative Updates, you should read the current official guidance from Microsoft about this. One of the key points is “we now recommend ongoing, proactive installation of CU’s as they become available”. This does not mean that you just blindly deploy a cumulative update to Production the day it is released. Rather, you should have a good testing and deployment plan that you go through before you deploy to Production. You can read the full Microsoft guidance here:

Announcing updates to the SQL Server Incremental Servicing Model (ISM)

Speaking at PASS Summit 2016

I will be presenting two sessions at the PASS Summit 2016 in Seattle, WA, which is being held October 25-28, 2016.

On Monday, October 24, 2016, I will be doing an all-day, Pre-Conference session on how to interpret my SQL Server diagnostic information queries. I have done many shorter versions of this session (such as 60 minutes, 75 minutes, or even a half-day) before, but I have always felt a little rushed as I went through the complete set of diagnostic queries, explaining how to interpret the results of each one, and also talking about related information that is relevant to each query.

Now, I will have a full day to go into more detail, without having to hurry to cover everything. I will be using the SQL Server 2016 version of the diagnostic queries, which have even more useful information, including information about many new SQL Server 2016 features. If you are on an older version of SQL Server, most of the queries will still be relevant (depending on how old of a version of SQL Server you are using).

Based on past experience and feedback, Dr. DMV has always been a very popular session that people really seem to enjoy. This all-day, expanded version is going to be really fun and useful, and I hope to see you there!

Dr. DMV: How to Use DMVs to Diagnose Performance Problems

SQL Server 2005 introduced Dynamic Management Views (DMVs) that allow you to see exactly what is happening inside your SQL Server instances and databases with much more detail than ever before. SQL Server 2016 adds even more capability in this area. You can discover your top wait types, most CPU intensive stored procedures, find missing indexes, and identify unused indexes, to name just a few examples. This session (which is applicable to SQL 2005-2016), presents and explains over seventy DMV queries that you can quickly and easily use to detect and diagnose performance issues in your environment. If you have ever been responsible for a mission critical database, you have probably been faced with a high stress, emergency situation where a database issue is causing unacceptable application performance, resulting in angry users and hovering managers and executives. If this hasn’t happened to you yet, thank your lucky stars, but start getting prepared for your time in the hot seat. This session will show you how to use DMV queries to quickly detect and diagnose the problem, starting at the server and instance level, and then progressing down to the database and object level. This session will show you how to properly analyze and interpret the results of every single query in the set, along with lots of information on how to properly configure your instance and databases.

 

This is a regular 75-minute session that will be all new content, going into much deeper detail about the SQL Server-related factors of current server hardware, and how to go about selecting the best database server hardware for your workload and budget. I want to show you how to pick hardware that gives you the best performance possible while minimizing your SQL Server license costs, saving your organization a huge amount of money!

Hardware 301: Diving Deeper into Database Hardware

Making the right hardware selection decisions is extremely important for database scalability. Having properly sized and configured hardware can both increase application performance and reduce capital expenses dramatically. Unfortunately, there are so many different choices and options available when it comes to selecting hardware and storage subsystems, it is very easy to make bad choices based on outdated conventional wisdom. This session will give you a framework for how to pick the right hardware and storage subsystem for your workload type. You will learn how to evaluate and compare key hardware components, such as processors, chipsets, and memory. You will also learn how to evaluate and compare different types of storage subsystems for various database workload types. Gain the knowledge you need to get the best performance and scalability possible from your hardware budget!

The PASS Summit is always a fun and very useful and educational event. It is a great way to get to know more people in the SQL Server community and to connect with people that you may only know online.

You can register for the PASS Summit 2016 here.

SQL Server Diagnostic Information Queries for July 2016

This month, I have several new improvements in the SQL Server 2014 and 2016 sets, along with additional comments and documentation in the SQL Server 2012, 2014 and 2016 sets. I have gotten quite a bit of interest in a special version of these queries for SQL Database in Microsoft Azure, so I will be doing that next month.

The best way to learn how to interpret the results of all of these queries is to attend my all-day PASS Summit 2016 Pre-Conference Session on Monday, October 24, 2016.

Dr. DMV: How to Use DMVs to Diagnose Performance Problems

Rather than having a separate blog post for each version, I have just put the links for all six major versions in this single post. There are two separate links for each version: the first is the actual diagnostic query script, and the second is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the latest versions of these queries for SQL Server 2016, 2014 and 2012:

SQL Server 2016 Diagnostic Information Queries (July 2016)

SQL Server 2016 Blank Results

SQL Server 2014 Diagnostic Information Queries (July 2016)

SQL Server 2014 Blank Results

SQL Server 2012 Diagnostic Information Queries (July 2016)

SQL Server 2012 Blank Results

Here are links to the most recent versions of these scripts for SQL Server 2008 R2 and older:

Since SQL Server 2008 R2 and older are out of Mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward. I started this policy a while ago, and so far, I have not heard any complaints. I did update these queries slightly in January 2016 though.

SQL Server 2008 R2 Diagnostic Information Queries (CY 2016)

SQL Server 2008 R2 Blank Results

SQL Server 2008 Diagnostic Information Queries (CY 2016)

SQL Server 2008 Blank Results

SQL Server 2005 Diagnostic Information Queries (CY 2016)

SQL Server 2005 Blank Results

The basic instructions for using these queries are that you should run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (and my comments on how to interpret the results) and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top-left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while being connected to the master database is a very common mistake that I see people making when they run these queries.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases at compatibility level 80 (SQL Server 2000), which breaks many DMV queries, or that someone is running an incorrect version of the script for their version of SQL Server.

It is very important that you are running the correct version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.
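To illustrate the idea behind that initial version check, here is a hypothetical sketch in Python (not the actual T-SQL from the scripts): compare the major version reported by SELECT SERVERPROPERTY('ProductVersion') against the version the script targets.

```python
import re

# Hypothetical sketch of the kind of check each script performs:
# parse the product version string and confirm it matches the major
# version of SQL Server that this script was written for.
SCRIPT_MAJOR_VERSION = 13  # e.g. the SQL Server 2016 script

def script_matches(product_version: str) -> bool:
    """product_version is e.g. '13.0.1601.5' from SERVERPROPERTY('ProductVersion')."""
    match = re.match(r"(\d+)\.", product_version)
    return bool(match) and int(match.group(1)) == SCRIPT_MAJOR_VERSION

print(script_matches("13.0.1601.5"))  # True  (a SQL Server 2016 build)
print(script_matches("12.0.5000.0"))  # False (a SQL Server 2014 build)
```

The real scripts do this directly in T-SQL, but the principle is the same: bail out early with a warning rather than returning misleading results from the wrong version of the queries.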

If you want to understand how to better run and interpret these queries, you should consider listening to my three latest Pluralsight courses, which are SQL Server 2014 DMV Diagnostic Queries – Part 1, SQL Server 2014 DMV Diagnostic Queries – Part 2, and SQL Server 2014 DMV Diagnostic Queries – Part 3. All three of these courses are pretty short and to the point, at 67, 77, and 68 minutes respectively. Listening to these three courses is really the best way to thank me for maintaining and improving these scripts…

Please let me know what you think of these queries, and whether you have any suggestions for improvements. Thanks!

SQL Server 2014 Service Pack 1 CU7

On June 20, 2016, Microsoft released SQL Server 2014 Service Pack 1 CU7, which is Build 12.0.4459. This cumulative update has 35 fixes in the public hot fix list. Quite a few of these fixes seem to be pretty significant.

They also released SQL Server 2014 RTM CU14, which is Build 12.0.2569. This cumulative update has 21 fixes in the public hot fix list. This is the last CU for the RTM branch.

SQL Server 2014 Service Pack 2 is due to be released in July of 2016, so depending on how old of a build of SQL Server 2014 you are on, you might just want to wait for SP2. Otherwise, I think you should try to stay on a current CU, in a proactive fashion. Microsoft recently changed their official guidance about installing CUs, as you can read about here.