Hardware Selection for a Home Lab – Part 1

I have long believed that if you want to be more successful as a developer, database professional, or in any technically-oriented I.T. career, it is a very good idea to have some sort of home lab environment to use for experimentation and learning.

Depending on your goals, needs, budget, available space, etc., there are many different ways to actually go about doing this. Some of these include using actual rack-mount or tower servers (which might be old, used equipment or newer equipment from a server vendor’s outlet), using desktop workstation tower servers built from parts (or purchased used), and even using old laptop machines.

Again, based on your needs, preferences, and available overall infrastructure, it may make more sense to use virtualization, with one or more fairly robust virtualization host machines that can run multiple, concurrent virtual machines (VMs) or to just have a number of “bare metal” machines.

One factor that is often ignored as people think about this is the spouse acceptance factor (SAF), where what you might like to do must be tempered by what your spouse will actually accept in terms of noise, space, heat and power usage. Having a number of rack mount servers running 24 x 7 can have a very negative impact on your electrical bill!

You also need to think about the network and storage infrastructure required to properly support your home lab. At a bare minimum, you will probably want wired Gigabit Ethernet, but you may want to think about having some 10 Gigabit Ethernet capability, especially as 10Gb equipment prices continue to decline. You also might want a decent NAS device to support your lab.

Using a NUC

One increasingly viable method for equipping a home lab is to use one or more Small Form Factor (SFF) machines, such as the Intel NUC series. These are very small footprint, bare-bone kits that include everything you need except RAM, storage, and an operating system. You buy the NUC kit, then buy and install your RAM and storage, install the operating system of your choice (with some important limitations), and then you are ready to go.

These machines are typically based on low-power, dual-core laptop processors (with hyper-threading), use laptop SO-DIMM RAM, and have had somewhat limited storage capability, at least for earlier generations. These types of machines have been around for about four years now, with several generations released by Intel and by smaller companies such as Gigabyte. Each new generation has had various improvements that make an SFF machine a more realistic choice for many lab scenarios.

Intel has recently announced a new generation of NUCs that are based on the 7th generation Kaby Lake processor. The main potential advantage of this new generation is increased storage space and performance. The Core i7 and Core i5 models have 40Gbps Thunderbolt 3 support built-in with an Alpine Ridge controller that connects to a USB-C connector. You can get a primer about the various USB and Thunderbolt connectors and standards here.

Having Alpine Ridge built-in gives you a lot more I/O flexibility and performance. For example, you could plug in an external enclosure that uses USB 3.1 Gen 2 (up to 10 Gbps) or Thunderbolt 3 (up to 40 Gbps) connectivity to get a lot of additional I/O throughput from your NUC.

The two best choices from this generation (for a home lab) are the Intel NUC7i7BNH and the Intel NUC7i5BNH, which are the taller Core i7 and Core i5 versions respectively. The reason why you want the taller models is because they let you install a 2.5” SATA III SSD in addition to an M.2 2280 PCIe 3.0 NVMe storage card, which gives you additional storage space and performance.

The Intel NUC7i7BNH uses the Intel Core i7-7567U processor, which has a base clock speed of 3.5GHz, a Turbo clock speed of 4.0GHz, a TDP of 28W, and a 4MB L3 cache. The Intel NUC7i5BNH uses the Intel Core i5-7260U processor, which has a base clock speed of 2.2GHz, a Turbo clock speed of 3.4GHz, a TDP of 15W, and a 4MB L3 cache. Both of these processors have integrated Iris Plus graphics with 64MB of eDRAM, which shows up as L4 cache. Whether or not Windows and SQL Server will take advantage of that L4 cache for general computing purposes is unknown to me right now.

A fully loaded Intel NUC7i7BNH would have four logical processor cores, 32GB of 2133MHz DDR4 RAM, wired Gigabit Ethernet, wireless 802.11ac networking, up to a 4TB SATA III SSD, up to a 2TB M.2 PCIe NVMe SSD, plus whatever storage you plugged into the Thunderbolt 3 USB-C port. A system configured like this would be pretty expensive, mainly because of the storage costs for the top of the line components. The ultimate performance bottleneck for most scenarios would be the number of available processor cores, even though these will be pretty fast cores.
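Once SQL Server is installed on a machine like this, you can confirm how many logical processors and how much memory the instance actually sees with a quick DMV query. This is just a minimal sketch, assuming SQL Server 2012 or newer (physical_memory_kb was added in SQL Server 2012):

-- How SQL Server sees the hardware on a small lab box (SQL Server 2012 and newer)
SELECT cpu_count AS [Logical CPU Count],
       hyperthread_ratio AS [Hyperthread Ratio],
       physical_memory_kb / 1024 AS [Physical Memory (MB)],
       max_workers_count AS [Max Workers Count],
       sqlserver_start_time AS [SQL Server Start Time]
FROM sys.dm_os_sys_info WITH (NOLOCK) OPTION (RECOMPILE);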

Another potential issue is that you might not be able to use a Server operating system such as Windows Server 2016 or Windows Server 2012 R2, not because the OS won’t install, but because of driver issues with the NIC. Quite often, Intel won’t let you install server-class NIC drivers on client-grade NIC hardware. Sometimes people figure out ways to hack around this, but often it is much easier to install Windows 10 Professional, and then use Windows Hyper-V to host VMs.

 

Figure 1: Rear Panel of Intel NUC7i7BNH

 

On the software side of your home lab, there are many free resources available from Microsoft. If you are a student or faculty member at a high school or university, you can take advantage of Microsoft Imagine (formerly called Dreamspark) to get a lot of valuable software and other useful resources (such as a free, three-month subscription to Pluralsight). Another free program is Microsoft Visual Studio Dev Essentials, and yet another free program is Microsoft IT Pro Cloud Essentials.

In future posts, I’ll talk about some other hardware options for a home lab.

 

 

 

Operating System Support for SQL Server Versions

There are currently six major versions of SQL Server that I commonly see being used in Production, along with five major versions of Windows Server. Only certain combinations of SQL Server and Windows Server are officially supported by Microsoft, but tracking down this information is a little tedious.

Table 1 shows these possible combinations and whether they are officially supported by Microsoft. One possibly surprising entry is that SQL Server 2012 is not officially supported on Windows Server 2016. Perhaps this is less surprising if you keep in mind that SQL Server 2012 will fall out of mainstream support on July 11, 2017, which is not that far away.
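Before cross-checking an existing instance against Table 1, you can confirm exactly which SQL Server version and which Windows version you are actually running with a quick query like this (a sketch; sys.dm_os_windows_info requires SQL Server 2008 R2 SP1 or newer):

-- SQL Server version, servicing level, and edition
SELECT SERVERPROPERTY('ProductVersion') AS [Product Version],
       SERVERPROPERTY('ProductLevel') AS [Product Level],
       SERVERPROPERTY('Edition') AS [Edition];

-- Underlying Windows version and service pack level
SELECT windows_release, windows_service_pack_level, windows_sku
FROM sys.dm_os_windows_info WITH (NOLOCK) OPTION (RECOMPILE);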

 


Table 1: OS Support for Recent Versions of SQL Server

 

The available links that document this are listed below:

Hardware and Software Requirements for Installing SQL Server 2008 R2

Hardware and Software Requirements for Installing SQL Server 2012

Hardware and Software Requirements for Installing SQL Server 2014

Hardware and Software Requirements for Installing SQL Server (for 2016 and later)

 

If you are getting ready to deploy a new instance of SQL Server 2014 or SQL Server 2016, then you should prefer Windows Server 2016, even though they are also supported on older operating systems. If you are getting ready to deploy a new instance of SQL Server 2008 through SQL Server 2012, then you should prefer Windows Server 2012 R2, even though they are supported on older operating systems.

Finally, if you are getting ready to deploy a new instance of SQL Server 2005, then I feel a little sorry for you! SQL Server 2005 is out of extended support, and it is missing so many useful features that were added in newer versions of SQL Server.

Actually, I recently helped a client deploy some new instances of SQL Server 2005 for some pretty valid business reasons. We ended up deploying to a VM (on new, very fast host hardware) that was running Windows Server 2008 R2, which worked perfectly fine.

 

SQL Server Diagnostic Information Queries for January 2017

This month, I have updated the legacy queries for SQL Server 2005, SQL Server 2008 and SQL Server 2008 R2. There are also minor updates for the SQL Server 2012 and newer queries.

Rather than having a separate blog post for each version, I have just put the links for all six major versions in this single post. There are two separate links for each version. The first link is the actual diagnostic query script, and the second is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the latest versions of these queries for SQL Server vNext, 2016, 2014 and 2012:

SQL Server vNext Diagnostic Information Queries (January 2017)

SQL Server vNext Blank Results Spreadsheet

SQL Server 2016 Diagnostic Information Queries (January 2017)

SQL Server 2016 Blank Results Spreadsheet

SQL Server 2014 Diagnostic Information Queries (January 2017)

SQL Server 2014 Blank Results Spreadsheet

SQL Server 2012 Diagnostic Information Queries (January 2017)

SQL Server 2012 Blank Results Spreadsheet

Here are links to the most recent versions of these scripts for SQL Server 2008 R2 and older:

Since SQL Server 2008 R2 and older are out of Mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward.  I started this policy a while ago, and so far, I have not heard any complaints. I did update these queries slightly in January 2017 though.

SQL Server 2008 R2 Diagnostic Information Queries (CY 2017)

SQL Server 2008 R2 Blank Results Spreadsheet

SQL Server 2008 Diagnostic Information Queries (CY 2017)

SQL Server 2008 Blank Results Spreadsheet

SQL Server 2005 Diagnostic Information Queries (CY 2017)

SQL Server 2005 Blank Results Spreadsheet

The basic instructions for using these queries are to run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (and my comments on how to interpret the results), and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while being connected to the master database is a very common mistake that I see people making when they run these queries.
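One simple way to avoid that mistake is to add a quick sanity check before the database-specific portion of the script. This is just a sketch, not part of the published query set:

-- Make sure you are not accidentally connected to a system database
IF DB_NAME() IN (N'master', N'model', N'msdb', N'tempdb')
BEGIN
    RAISERROR(N'You are connected to a system database. Switch to the user database you want to examine.', 16, 1);
END
ELSE
BEGIN
    SELECT DB_NAME() AS [Current Database];
END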

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases in 80 compatibility mode, which breaks many DMV queries, or that someone is running an incorrect version of the script for their version of SQL Server.
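To check for the first issue, you can list any user databases that are still in 80 compatibility mode with something like this (a sketch; sys.databases is available in SQL Server 2005 and newer):

-- Find databases still running in SQL Server 2000 (80) compatibility mode
SELECT name AS [Database Name], compatibility_level
FROM sys.databases WITH (NOLOCK)
WHERE compatibility_level = 80
ORDER BY name OPTION (RECOMPILE);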

It is very important that you are running the correct version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.
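The version check at the top of each script is conceptually very simple. A minimal sketch (not the exact query from the published scripts) for the SQL Server 2016 script, where the major version number is 13, looks like this:

-- Abort if this does not look like a SQL Server 2016 instance (major version 13)
IF NOT EXISTS (SELECT 1 WHERE CONVERT(varchar(128), SERVERPROPERTY('ProductVersion')) LIKE '13%')
BEGIN
    RAISERROR(N'Script does not match the SQL Server version of this instance!', 18, 16);
END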

If you want to understand how to run and interpret these queries more effectively, you should consider listening to my three related Pluralsight courses, which are SQL Server 2014 DMV Diagnostic Queries – Part 1, SQL Server 2014 DMV Diagnostic Queries – Part 2, and SQL Server 2014 DMV Diagnostic Queries – Part 3. All three of these courses are pretty short and to the point, at 67, 77, and 68 minutes respectively. Listening to these three courses is really the best way to thank me for maintaining and improving these scripts…

Please let me know what you think of these queries, and whether you have any suggestions for improvements. Thanks!

Intel Speed Shift Support in Windows Server 2016

If you are gathering evidence to help make the case for a complete data platform upgrade in 2017, you want to find as much information as possible to bolster your argument. This post is meant to assist you in your efforts.

By early-mid 2017, Windows Server 2016 will have been GA long enough to convince most skeptics that it is safe and stable. Windows Server 2016 has many tangible advantages over previous versions of Windows Server, which I will be discussing in future blog posts.

SQL Server 2016 already has its first Service Pack available, with many very valuable enhancements (especially for Standard Edition). Some people who are on older versions of SQL Server Enterprise Edition may be able to migrate to SQL Server 2016 Standard Edition SP1.

Finally, Intel is due to release the next generation two-socket server processor (which will require new model servers from the server vendors). One specific new improvement you will get with a new server/processor and Windows Server 2016 is called Intel Speed Shift.

Intel Speed Shift (which is different from the older Intel SpeedStep technology) was added in the Intel Skylake microarchitecture. It requires operating system support in order to work, and the first OS to enable Speed Shift was Windows 10, after an update in November 2015. This feature hands control of processor P-states over to the hardware, with the operating system providing performance hints rather than explicit P-state requests. This lets the processor increase the clock speed of individual cores much more quickly in response to a demand for more performance.

The upcoming Intel Xeon E5-2600 v5 processors (Skylake-EP) for two-socket servers are due to ship in mid-2017 (although rumor has it that they are already shipping to some cloud data center providers). I don’t know for sure whether Intel Skylake-EP will have Speed Shift like the mobile and desktop Skylake processors do, but it probably will. Supposedly, the flagship Xeon E5-2699 v5 will have 32 physical cores, even though you will likely want a lower core count model with a higher base clock speed in most cases, to minimize your SQL Server licensing costs.

Why this matters for SQL Server is that many short-duration OLTP queries might run a little faster with Speed Shift enabled. Legacy power management techniques on older processors can take up to 120ms to fully throttle up to full Turbo Boost clock speed, while Speed Shift does it in 35ms, with most of the frequency increase happening in 3-4ms.

Figure 1 shows an example of the difference in how quickly the processor can increase the clock speed with Speed Shift Technology.

 

Figure 1: Intel Speed Shift Technology Performance Example

 

If your operating system supports Intel Speed Shift (Windows 10 or Windows Server 2016) and if you have an Intel processor with Speed Shift Technology (Skylake or newer), then you will be able to get the performance benefits of Intel Speed Shift with virtually no effort beyond perhaps changing a BIOS/UEFI setting on your server. Figures 2 and 3 have some more information about how Intel Speed Shift works.

 


Figure 2: Intel Speed Shift Technology Introduction

 


Figure 3: Intel Speed Shift Technology Details

 

Over the past year, there has been a decent amount of information regarding Intel Speed Shift support in Windows 10, but nothing regarding Windows Server 2016. Part of the problem here is that we already have Skylake processors available in the mobile and desktop space, and in the single-socket server space, but not on the higher socket count server platforms.

Just as a quick experiment, I installed Windows Server 2016 Standard Edition on a brand new HP Spectre X360 laptop that has an Intel Core i7-7500U Kaby Lake-U processor, to see if Intel Speed Shift would be enabled or not. As you can see in Figure 4, the green SST lettering in the Features box on the left side of the System Summary screen in HWiNFO64 shows that Intel Speed Shift is enabled in Windows Server 2016, at least for this processor (which is the next release after Skylake).

 


Figure 4: Intel Speed Shift Enabled on Windows Server 2016 Standard Edition

 

Now, this is not definitive proof that Intel Speed Shift will be enabled with two-socket Skylake-EP processors, but I would say the chances are pretty good (especially given some conversations I have had with some people who actually know the answer)…

The outstanding question is how much this will actually help SQL Server performance. At this point, we simply don’t know without actually doing some testing. Based on some reading I have been doing about Windows 10, I believe there is a way to temporarily disable Intel Speed Shift in Windows 10. Once I figure that out, it should not be that difficult to do some initial testing with short duration queries (150 ms or less) to see whether their average durations are lower with Intel Speed Shift enabled than with it disabled.
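One hedged sketch of that kind of test (using a hypothetical short stored procedure called dbo.TestQuery, which is not something from an existing script) would be to run the procedure a few thousand times and then compare the average elapsed time from sys.dm_exec_procedure_stats, once with Speed Shift enabled and once with it disabled:

-- Run a short test query many times (dbo.TestQuery is a hypothetical procedure that runs in under 150ms)
SET NOCOUNT ON;
DECLARE @i int = 0;
WHILE @i < 5000
BEGIN
    EXEC dbo.TestQuery;
    SET @i += 1;
END;

-- Compare the average elapsed time (in microseconds) between the two power management configurations
SELECT ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS [Avg Elapsed Time (us)]
FROM sys.dm_exec_procedure_stats AS ps WITH (NOLOCK)
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = N'TestQuery'
OPTION (RECOMPILE);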

There are many other variables to consider regarding your workload and configuration that will likely affect the results in real production usage, but I expect there will be some positive benefit from this feature. As I like to say in presentations, nobody has ever told me that their database server is “too fast”, so I will take any performance improvement that I can find!

 

 

Useful Hardware Utilities – HWiNFO64

Since I have a personal and professional interest in PC hardware (I have actually been accused by my friend Steve Jones of “loving hardware”), I like to find and talk about useful hardware utilities that are relevant for both personal machines and for PC-based servers.

One very interesting utility is HWiNFO64 from Martin Malík. This program does a pretty complete job of examining your system, covering the details of the CPU, motherboard, chipset, BIOS version, GPU, memory, drives, and operating system. It also lets you drill into different components of your system to see even more details about your hardware and configuration status.

You might be wondering why this is relevant to a typical database professional. Well, first, I think DBAs should have some knowledge of and interest in the hardware that their database servers are running on (whether it is a bare metal physical host or a virtualization host), since it has a big effect on their performance and SQL Server licensing costs. Second, being aware of some of the seemingly trivial details of your hardware can have a huge positive effect on performance.

For example, if you are using a hypervisor, such as VMware or Hyper-V, you want to make sure your hardware virtualization support is enabled in your BIOS/UEFI. This includes Intel VT-x (Intel Virtualization Technology) and VT-d (Intel Virtualization Technology for Directed I/O).

In HWiNFO64, the VMX (Virtual Machine Extensions) lettering in the Features box of the System Summary, as shown in Figure 1, tells you whether VT-x is enabled or not. If it is grey, it is not supported. If it is green, it is supported and enabled. In my case, I have an Intel Core i7-6700K processor that supports VT-x, but I currently have it disabled in my BIOS/UEFI. In CPU-Z, VT-x is simply absent from the Instructions box, since it is disabled in the BIOS/UEFI, as shown in Figure 2.
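On the SQL Server side, you can at least confirm whether your instance is running on a hypervisor at all, which is useful context when you are checking these BIOS/UEFI settings. This is a sketch, assuming SQL Server 2008 R2 SP1 or newer:

-- Does SQL Server think it is running in a virtual machine?
SELECT virtual_machine_type_desc AS [Virtual Machine Type]
FROM sys.dm_os_sys_info WITH (NOLOCK) OPTION (RECOMPILE);

-- Processor and socket information as logged in the SQL Server Error Log at startup
EXEC sys.xp_readerrorlog 0, 1, N'detected', N'socket';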

Another example is Intel Speed Shift Technology (shown as SST in the Features box of the System Summary as shown in Figure 1). It shows up in red on my system, since I have a BIOS feature called multi-core enhancement (MCE) enabled, which runs all of my processor cores at full Turbo Boost speed all of the time.

Intel Speed Shift Technology is an interesting feature that is present in Intel Skylake and later processors, that also requires operating system support. Current builds of Windows 10 and Windows Server 2016 support this feature, which I think will be beneficial for some SQL Server workloads.

 


Figure 1: HWiNFO64 System Summary

 


Figure 2: CPU-Z 1.78 CPU Tab

 

Figure 3 shows part of the Sensor Status window, which lets you see details about your memory usage, component voltages and speeds, component temperatures, and more.

 


Figure 3: HWiNFO64 Sensor Status

 

Figure 4 shows details about the MSI Geforce GTX 1060 ARMOR OC video card that is in my workstation, which is a very nice mid-range video card.

 


Figure 4: HWiNFO64 Video Adapter Details

 

 

SQL Server Diagnostic Information Queries for December 2016

This month, I have added a separate set of queries for SQL Server vNext. Right now, they are almost identical to the SQL Server 2016 queries, but I will make changes to surface new functionality in SQL Server vNext over time. I have also added support for new functionality in SQL Server 2016 Service Pack 1.

Rather than having a separate blog post for each version, I have just put the links for all six major versions in this single post. There are two separate links for each version. The first link is the actual diagnostic query script, and the second is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the latest versions of these queries for SQL Server vNext, 2016, 2014 and 2012:

SQL Server vNext Diagnostic Information Queries (December 2016)

SQL Server vNext Blank Results Spreadsheet

SQL Server 2016 Diagnostic Information Queries (December 2016)

SQL Server 2016 Blank Results Spreadsheet

SQL Server 2014 Diagnostic Information Queries (December 2016)

SQL Server 2014 Blank Results Spreadsheet

SQL Server 2012 Diagnostic Information Queries (December 2016)

SQL Server 2012 Blank Results Spreadsheet

Here are links to the most recent versions of these scripts for SQL Server 2008 R2 and older:

Since SQL Server 2008 R2 and older are out of Mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward.  I started this policy a while ago, and so far, I have not heard any complaints. I did update these queries slightly in January 2016 though.

SQL Server 2008 R2 Diagnostic Information Queries (CY 2016)

SQL Server 2008 R2 Blank Results

SQL Server 2008 Diagnostic Information Queries (CY 2016)

SQL Server 2008 Blank Results

SQL Server 2005 Diagnostic Information Queries (CY 2016)

SQL Server 2005 Blank Results

The basic instructions for using these queries are to run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (and my comments on how to interpret the results), and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while being connected to the master database is a very common mistake that I see people making when they run these queries.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases in 80 compatibility mode, which breaks many DMV queries, or that someone is running an incorrect version of the script for their version of SQL Server.

It is very important that you are running the correct version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.

If you want to understand how to run and interpret these queries more effectively, you should consider listening to my three related Pluralsight courses, which are SQL Server 2014 DMV Diagnostic Queries – Part 1, SQL Server 2014 DMV Diagnostic Queries – Part 2, and SQL Server 2014 DMV Diagnostic Queries – Part 3. All three of these courses are pretty short and to the point, at 67, 77, and 68 minutes respectively. Listening to these three courses is really the best way to thank me for maintaining and improving these scripts…

Please let me know what you think of these queries, and whether you have any suggestions for improvements. Thanks!

Intel Xeon E7 Processor Generational Performance Comparison

Intel has a fairly recent document titled Accelerated Operations for Telecom and Financial Services, which is also listed under Accelerate OLTP Database Performance with Intel TSX. It describes the “performance” increases seen with the AsiaInfo ADB when moving from 2.8GHz Intel Xeon E7-4890 v2 (Ivy Bridge-EX), to 2.5GHz Intel Xeon E7-8890 v3 (Haswell-EX), and finally to 2.2GHz Intel Xeon E7-8890 v4 (Broadwell-EX) processors, as shown in Figure 1.

 

Figure 1: Speedup from Successive Processor Generations

 

This workload is described as “AsiaInfo ADB Database OCS k-tpmC”, while the AsiaInfo ADB is described as “a scalable OLTP database that targets high performance and mission critical businesses such as online charge service (OCS) in the telecom industry”, that runs on Linux.

The reason I have performance in quotes above is because what they are really measuring is closer to what I would call capacity or scalability. Their topline result is “Thousands of Transactions per Minute” as measured with these different hardware and storage configurations.

The key point to keep in mind with these types of benchmarks is whether they are actually comparing relatively comparable systems or not. In this case, the systems are quite similar, except for the core counts of the successive processor models (and the DDR3 vs. DDR4 memory support). Here are the system components, as listed in the footnotes of the document:

Baseline: Four-sockets, 15-core Intel Xeon E7-4890 v2, 256GB DDR3/1333 DIMM, Intel DC S3700 SATA for OS, (2) 2TB Intel DC P3700 PCIe NVMe for storage, 10GbE Intel X540-AT2 NIC

Next Generation: Four-sockets, 18-core Intel Xeon E7-8890 v3, 256GB DDR4/1600 LVDIMM, Intel DC S3700 SATA for OS, (2) 2TB Intel DC P3700 PCIe NVMe for storage, 10GbE Intel X540-AT2 NIC

New: Four-sockets, 24-core Intel Xeon E7-8890 v4, 256GB DDR4/1600 LVDIMM, Intel DC S3700 SATA for OS, (2) 2TB Intel DC P3700 PCIe NVMe for storage, 10GbE Intel X540-AT2 NIC

The baseline system has a total of 60 physical cores, running at 2.8GHz, using the older Ivy Bridge-EX microarchitecture. The next generation system has a total of 72 physical cores, running at 2.5GHz, using the slightly newer Haswell-EX microarchitecture. Finally, the new system has a total of 96 physical cores, running at 2.2GHz, using the current Broadwell-EX microarchitecture. These differences in core counts, base clock speeds, and microarchitecture make it a little harder to fully understand their benchmark results in a realistic manner.

Table 1 shows some relevant metrics for these three system configurations. The older generation processors have fewer cores, but run at a higher base clock speed. The newer generation processors would be faster than the older generation processors at the same clock speed, but the base clock speed is lower as the core counts have increased with each successive generation flagship processor. The improvements in IPC and single-threaded performance are obscured by lower base clock speeds as the core counts increase, which makes the final score increase less impressive.

 

Processor          Base Clock   Total System Cores   Raw Score   Score/Core
Xeon E7-4890 v2    2.8GHz       60                   725         12.08
Xeon E7-8890 v3    2.5GHz       72                   1021        14.18
Xeon E7-8890 v4    2.2GHz       96                   1294        13.48

Table 1: Analysis of ADB Benchmark Results

 

Table 2 shows some metrics from an analysis of some actual and estimated TPC-E benchmark results for those same three system configurations, plus an additional processor choice that I added. The results are pretty similar, which supports the idea that both of these benchmarks are CPU-limited. From a SQL Server 2016 perspective, you are going to be better off from a performance/license cost perspective if you purposely choose a lower core count “frequency-optimized” processor (at the cost of less total system capacity per host).

This is somewhat harder to do with the Intel Xeon E7 v4 family, because of your limited SKU choices. A good processor choice for many workloads would be the 10-core Intel Xeon E7-8891 v4 processor, which has a base clock speed of 2.8GHz and a 60MB L3 cache that is shared by only 10 cores.

If you could spread your workload across two database servers, you would be much better off with two four-socket servers with the 10-core Xeon E7-8891 v4 rather than one four-socket server with the 24-core Xeon E7-8890 v4. You would have more total system processor capacity, roughly 27% better single-threaded CPU performance, twice the total system memory capacity, and twice the total number of PCIe 3.0 expansion slots. You would also only need 80 SQL Server 2016 Enterprise Edition core licenses rather than 96 core licenses, which would save you about $114K in license costs (16 fewer cores at roughly $7,128 per core for Enterprise Edition). That license savings would probably pay for both database servers, depending on their exact configuration.

 

Processor          Base Clock   Total System Cores   Est. TPC-E Score   Score/Core
Xeon E7-4890 v2    2.8GHz       60                   5576.27            92.94
Xeon E7-8890 v3    2.5GHz       72                   6964.75            96.73
Xeon E7-8890 v4    2.2GHz       96                   9068.00            94.46
Xeon E7-8891 v4    2.8GHz       40                   4808.79            120.22

Table 2: Analysis of Estimated TPC-E Benchmark Results

 

The Intel document also discusses the “performance” increases seen from moving from Intel DC S3700 SATA drives to Intel DC P3700 PCIe NVMe drives. This is going to be primarily influenced by the advantages of being connected directly to the PCIe bus and the lower latency and overhead of the NVMe protocol compared to the older AHCI protocol.

Finally, they talk about the “performance” increases they measured from enabling the Intel Transactional Synchronization Extensions (TSX) instruction set and the Intel AVX 2.0 instruction set on current generation Intel E7-8800 v4 series processors.

SQL Server 2016 already has hardware support for older SSE/AVX instructions as discussed here and here. I really hope that Microsoft decides to add even more support for newer instruction sets (such as TSX) in SQL Server vNext.

 

 

CPU-Z 1.78 is Available

On November 21, CPU-Z 1.78 was released. This is a great tool for getting all the technical details about your processors and checking on their current clock speed.

The main improvement in this version is support for Intel Kaby Lake processors, which are already available in the mobile space. It looks like the desktop version of Kaby Lake will be released at CES in January. Tom’s Hardware did some benchmarking of an early sample of a Core i7-7700K that someone supplied to them, as detailed here.

 


Figure 1: CPU-Z 1.78 CPU Tab

 

Recent versions of CPU-Z have added a quick CPU benchmarking function that measures both single-threaded and multi-threaded CPU performance. Each test only takes about 7-8 seconds, and the results are useful for a number of reasons.

 


Figure 2: CPU-Z 1.78 Bench Tab For Intel Core i7-6700K System

 

First, you can get a quick gauge of your single-threaded CPU performance (which equates to the “speed” of the processor), and your multi-threaded CPU performance (which equates to the CPU capacity of the entire system). This is useful for comparing different processors and systems, whether they are physical or virtual. You can measure the performance of a VM versus running bare metal on the host, or you can measure different VM configurations. You can also compare your numbers to the built-in reference processors, or submit your results and compare them to other systems results that are stored online.

Second, you can use the Bench CPU button to briefly stress your processors, and then quickly switch to the main CPU tab while the test is running, to see what happens to your CPU core clock speeds, in order to understand whether you have power management configured correctly to get the performance benefits of Intel Turbo Boost.

SQL Server 2016 Service Pack 1 and SQL Server 2016 RTM CU3

Microsoft has released SQL Server 2016 RTM CU3 (Build 13.0.2186.6), with 31 fixes in the public fix list. They have also released SQL Server 2016 Service Pack 1 (Build 13.0.4001.0).

As they previously did for SQL Server 2014 SP2, Microsoft has managed to include all of the hotfixes from the latest SQL Server 2016 RTM CU3 in SQL Server 2016 SP1 RTM, including the very recent security fixes in Security Update MS16-136, so there is no need to wait for a subsequent SP1 CU to get caught up with the latest fixes from the RTM branch.
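A quick way to check which build and servicing level an instance is actually on is a query like this (a sketch; SERVERPROPERTY('ProductUpdateLevel') is only populated on newer builds, and simply returns NULL elsewhere):

-- Check the current build and servicing level of this instance
SELECT SERVERPROPERTY('ProductVersion') AS [Product Version],        -- e.g. 13.0.4001.0 for SQL Server 2016 SP1
       SERVERPROPERTY('ProductLevel') AS [Product Level],            -- e.g. RTM, SP1
       SERVERPROPERTY('ProductUpdateLevel') AS [Update Level],       -- e.g. CU3, where populated
       SERVERPROPERTY('Edition') AS [Edition];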

The Microsoft SQL Server Release Services Blog has a lot more detail about SQL Server 2016 SP1 here. The Data Platform blog has more detail about SP1 feature improvements here.

Given all of the feature changes and other useful improvements in SQL Server 2016 Service Pack 1 (on top of all of the hotfixes), it is almost a no-brainer to move to SQL Server 2016 Service Pack 1 as soon as you can do your testing, and plan and implement your deployment.

This is also a good time for organizations that are running down-level versions of SQL Server Standard Edition to seriously consider upgrading to SQL Server 2016 Standard Edition (preferably on new hardware, running Windows Server 2016).

Major Changes for SQL Server 2016 Standard Edition

At today’s Connect() event, Microsoft announced some pretty major changes for SQL Server 2016 Standard Edition, which are implemented in SQL Server 2016 Service Pack 1 (which was also released today). Many very useful SQL Server features that were previously only available in Enterprise Edition will now be available in Standard Edition (and also in Web Edition and even Express Edition).

These include Row-Level Security, Dynamic Data Masking, Change Data Capture (*), Database Snapshots, Columnstore indexes, Table Partitioning, Data Compression, Multiple Filestream containers, In-Memory OLTP, Always Encrypted, Distributed Partitioned Views, PolyBase, and Fine-Grained Auditing.

There are some scalability limits for some of these features when running on Standard or Web/Express Edition. For example, In-Memory OLTP is limited to 1/4 of the edition’s buffer pool memory limit, and Columnstore is also limited to 1/4 of the edition’s buffer pool memory limit (which works out to 32GB on Standard Edition, with its 128GB buffer pool limit). These feature memory limits are in addition to the buffer pool limit for each edition.

In Standard Edition, Columnstore is limited to a DOP of 2, and in Web/Express Edition it is limited to a DOP of 1. PolyBase worker compute nodes can be deployed on Standard, Web, and Express Editions, but scaling out with multiple worker compute nodes still requires Enterprise Edition for the head node.
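As an example of what this means in practice, DDL like the following (a sketch with hypothetical table and index names) now runs on Standard, Web, and Express Edition once SP1 is applied, where previously it would have failed with an edition error:

-- Previously Enterprise-only; works on Standard/Web/Express with SQL Server 2016 SP1
CREATE TABLE dbo.SalesHistory (
    SalesID bigint IDENTITY(1,1) NOT NULL,
    ProductID int NOT NULL,
    SaleDate date NOT NULL,
    Quantity int NOT NULL,
    UnitPrice money NOT NULL
);

-- Nonclustered columnstore index for analytic queries
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesHistory
ON dbo.SalesHistory (ProductID, SaleDate, Quantity, UnitPrice);

-- Data compression is also available now
ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = PAGE);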

Keep in mind that you will need SQL Server 2016 Service Pack 1, to get these new features (and many other nice improvements, as detailed here).

The primary goal here is to provide application developers with an easy way to program an application in the same way using all of the application features of SQL Server 2016, regardless of which edition of SQL Server 2016 that the application may eventually be deployed on.

This is going to make it much easier for ISVs to use these features without requiring their customers to use Enterprise Edition. They will be able to simplify their deployment scripts by not having to check the Edition of SQL Server before they run DDL statements to create database objects. This is a very welcome development that should encourage many more organizations to move to SQL Server 2016.
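In other words, deployment scripts no longer need guard clauses like this hypothetical one before creating objects that used to be Enterprise-only:

-- Old-style edition check that is no longer necessary for these features
IF SERVERPROPERTY('EngineEdition') = 3   -- 3 = Enterprise (or Developer/Evaluation)
BEGIN
    PRINT N'Enterprise Edition detected - creating columnstore index...';
    -- EXEC (N'CREATE NONCLUSTERED COLUMNSTORE INDEX ...');
END
ELSE
BEGIN
    PRINT N'Skipping Enterprise-only features on this edition.';
END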

Once you have an application using SQL Server 2016 Standard Edition, you can just do an Edition Upgrade to Enterprise Edition to get even more scalability and performance, taking advantage of the higher license limits in Enterprise Edition. You will also get the intrinsic performance benefits that are present in Enterprise Edition. Microsoft should consider publicizing some of these performance differences between Standard Edition and Enterprise Edition.

Note: Change Data Capture won’t be available in Express Edition, since it does not have SQL Server Agent.