Glenn’s Tech Insights For February 21, 2019

(Glenn’s Tech Insights… used to be part of our bi-weekly newsletter but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers interesting new hardware and software developments that are generally relevant for SQL Server).

PCIe 5.0 Nears Release

On January 17, 2019, the PCI-SIG ratified version 0.9 of the PCIe 5.0 standard, with version 1.0 of the standard expected to be ratified later in Q1 of 2019. PCIe 5.0 doubles the bandwidth of PCIe 4.0, going from 64GB/s with sixteen lanes to 128GB/s with sixteen lanes. PCIe 4.0 also doubled the bandwidth of PCIe 3.0, which was “only” 32GB/s with sixteen lanes.

Currently, no released AMD or Intel processors have PCIe 4.0 support, but the upcoming 7nm AMD Ryzen 3000 desktop processors and the AMD EPYC “Rome” server processors will both have PCIe 4.0 support. The upcoming Intel Cascade Lake-SP server processors will NOT have PCIe 4.0 support.

After you have processor support for PCIe 4.0 and greater, you will need storage devices that support PCIe 4.0 and greater.

The first public PCIe 4.0 NVMe SSD was demoed at CES. It was a Phison PS5016-E16. This SSD did 4069MB/sec for sequential reads and 4268MB/sec for sequential writes on CrystalDiskMark. Phison claims that the released version will have speeds up to 4.8/4.4 GB/s of read/write sequential throughput. This card is due to go on sale in Q3 of 2019.


AMD EPYC Market Share Analysis

ServeTheHome has a thoughtful article looking at the market share gains in the server space by the current generation AMD EPYC “Naples” processor since it was first released in mid-2017. AMD has gone from 0.8% in Q4 of 2017 to 3.2% in Q4 of 2018. That is still a small number, but I believe it will start to increase at a much faster rate during 2019 and 2020.

This is because both HPE and Dell EMC have multiple AMD EPYC systems on the market, and because the 7nm AMD EPYC “Rome” processors are due to be released in mid-2019. I think the AMD EPYC “Rome” processor is going to be a huge success, with PCIe 4.0 support, very high memory density, and possibly better single-threaded performance than Intel Cascade Lake-SP. This could make the AMD EPYC “Rome” processor a better choice for SQL Server OLTP usage than Intel Cascade Lake-SP. We will see as we get closer to release, and start to see more benchmark results.


AMD Ryzen 3000 Series Release Date Rumors

RedGamingTech reports that the upcoming 7nm AMD Ryzen 3000 “Matisse” mainstream desktop processors (and a new, optional X570 chipset) will be announced at Computex 2019 and released on July 7, 2019. The expectation is that they will initially top out at twelve physical cores, with a 16-core SKU being released later in the year to counter the expected release of the 10nm 10-core Intel Comet Lake desktop processors.

It is possible that AMD will then have both a single-threaded CPU performance advantage and a core count advantage, while also selling at a lower price than the competing Intel mainstream desktop parts. This situation will probably be true for at least nine to twelve months. This is not good news for Intel, and it will be interesting to see how they respond to this challenge.

Creating SQL Server Agent Job Schedules for Ola Hallengren’s Maintenance Solution

Data Platform MVP Ola Hallengren has created and maintained his free SQL Server Maintenance Solution script for over eleven years now. This script creates some objects in your master system database (by default), and it also creates and enables twelve SQL Server Agent jobs which are used to do things like database backups, index maintenance, and database integrity checks. It is a great solution that many SQLskills clients use.

An enabled SQL Server Agent job that does not have a schedule associated with it will never run (unless you kick it off manually). I frequently help clients set up and configure the Ola Hallengren SQL Server Maintenance Solution, and I finally got tired of manually creating a job schedule for each of the twelve SQL Server Agent jobs that it uses. I decided to create a T-SQL script that you can run to set up a schedule for each of the twelve jobs, which you can get here.

You can (and probably should) modify the schedules in my script to suit your business requirements and infrastructure. For example, depending on your Recovery Point Objective (RPO) SLA, you would probably want to change how often you run transaction log backups. Another example is deciding when to run resource intensive jobs like the “DatabaseIntegrityCheck – USER_DATABASES” job or the “IndexOptimize – USER_DATABASES” job. If you have multiple SQL Server instances and you have shared storage, you would want to be more careful to ensure that every instance is not doing resource intensive jobs at the same time.
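For illustration, here is a minimal sketch of how a schedule can be attached to one of these jobs using the standard msdb stored procedures. The job name should match one of Ola's default job names, and the frequency values (log backups every 15 minutes, around the clock) are just an example that you would adjust to your own RPO and maintenance windows:

USE msdb;
GO
-- Attach an "every 15 minutes" schedule to the transaction log backup job
EXEC dbo.sp_add_jobschedule
    @job_name = N'DatabaseBackup - USER_DATABASES - LOG',
    @name = N'Log backups every 15 minutes',
    @enabled = 1,
    @freq_type = 4,                -- daily
    @freq_interval = 1,            -- every day
    @freq_subday_type = 4,         -- unit is minutes
    @freq_subday_interval = 15,    -- every 15 minutes
    @active_start_time = 0;        -- starting at midnight
GO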

Please let me know what you think of this script and if you have any suggestions for improvements. Thanks!

Avoiding SQL Server Upgrade Performance Issues

As SQL Server 2008 and SQL Server 2008 R2 rapidly approach the end of Extended support from Microsoft on July 9, 2019, and with SQL Server 2014 also falling out of Mainstream support on July 9, 2019 (joining SQL Server 2012, which fell out of Mainstream support on July 11, 2017), I am seeing an increasing number of organizations migrating to a modern version of SQL Server. I define a modern version of SQL Server as SQL Server 2016 or later.

I see this as a positive development overall, since SQL Server 2016 and newer actually have many useful new features that make them much better products than their predecessors. Migrating to a modern version of SQL Server also usually means using new, faster hardware and storage, running on a current version of Windows Server, which is also very beneficial, as long as you choose your new hardware and storage wisely.

Despite all of this, I have seen a decent number of cases where organizations have migrated from a legacy version of SQL Server to a modern version of SQL Server on new hardware and a new operating system, and then be unpleasantly surprised by performance regressions once they are in Production. How can these performance regressions be occurring, and what steps can you take to help prevent them?

The main culprit in most of these performance regressions is a combination of lack of knowledge, planning, and adequate performance testing. Unlike legacy versions of SQL Server, modern versions of SQL Server have several important performance-related configuration options that you need to be aware of, understand, and actually test with your workload. Most people who run into performance regressions have done what I call a “blind migration” where they simply restore their databases from the older version to the new version of SQL Server, with no meaningful testing of the performance effects of these different configuration options.

So, what are these key configuration options that you need to be concerned with from a performance perspective? The most important ones include your database compatibility level, the cardinality estimator version that you are using, your database-scoped configuration options, and what trace flags you are using. Since SQL Server 2014, the database compatibility level determines the default cardinality estimator that the query optimizer will use. Since SQL Server 2016, the database compatibility level also controls other performance-related behavior by default. I have written more about this subject here:

The Importance of Database Compatibility Level in SQL Server

Compatibility Levels and Cardinality Estimation Primer

You have the ability to override many of these database compatibility level-related changes with database-scoped configuration options and query hints. There are a rather large number of different combinations of settings that you have to think about and test. So, what are you supposed to do?
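Before you start testing, it helps to inventory where each database currently stands. Here is a minimal T-SQL sketch (assuming SQL Server 2016 or newer, and sufficient permissions) that shows the compatibility level for each database and the database-scoped configuration settings for the current database:

-- Compatibility level for every database on the instance
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;

-- Database-scoped configuration settings for the current database
-- (includes LEGACY_CARDINALITY_ESTIMATION and QUERY_OPTIMIZER_HOTFIXES)
SELECT name, value, value_for_secondary
FROM sys.database_scoped_configurations;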

Microsoft Database Experimentation Assistant

In my ideal scenario, you would use the free Microsoft Database Experimentation Assistant (DEA) to capture a relevant production workload. This involves taking a full production database backup, then capturing a production trace that covers representative high priority workloads. While this is going on, I would run some of my SQL Server Diagnostic Queries to get some baseline metrics from your legacy instance.

Once you have done that, you can then restore that backup to your new environment, and replay the production trace multiple times in your new environment. Each time you do this (which also includes a fresh restore from that original full production database backup), you will use a different combination of these key configuration settings. You have to make the database configuration/property changes after each restore, but before you replay the DEA trace.

The idea here is to see which combination of these configuration settings yields the best performance with your workload. Here are some relevant, likely combinations:

    • Use the default native database compatibility level of the new version
    • Use the default native database compatibility level of the new version and use the query optimizer hotfixes database-scoped configuration option
    • Use the default native database compatibility level of the new version and use the legacy cardinality estimator database-scoped configuration option
    • Use the default native database compatibility level of the new version and use the legacy cardinality estimator database-scoped configuration option and use the query optimizer hotfixes database-scoped configuration option
    • Use the existing database compatibility level of the old version
    • Use the existing database compatibility level of the old version and use the query optimizer hotfixes database-scoped configuration option
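As a rough sketch of how you might apply one of these combinations after each fresh restore (the database name is hypothetical, and compatibility level 140 assumes a SQL Server 2017 target), the fourth combination above would look something like this:

-- Restore the database from the original full backup first, then apply the settings to test
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 140;   -- native level for SQL Server 2017
GO
USE YourDatabase;
GO
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;
GO
-- Now replay the DEA trace and capture your metrics, then restore again
-- and change these settings before testing the next combination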

This level of DEA testing may not be practical if you have a large number of databases, but you should really try to do it on your most mission critical databases. Barring that, I would try to do as much testing of your most important stored procedures and queries as possible, using these different configuration settings.

Finally, if no adequate testing is possible, you can follow Microsoft’s recommended upgrade sequence (in your new production environment, after you go live), which is:

    • Upgrade to the latest SQL Server version and keep the source (legacy) database compatibility level
    • Enable Query Store, and let it collect a baseline of your workload
    • Change the database compatibility level to the native level for the new version of SQL Server
    • Use Query Store (and Automatic Plan Correction on SQL Server 2017 Enterprise Edition) to fix performance regressions by forcing the last known good plan
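In T-SQL terms, that sequence looks roughly like the sketch below (the database name, compatibility levels, and Query Store ids are hypothetical examples, and Query Store options are left at their defaults for brevity):

-- 1. After upgrading/migrating, keep the source (legacy) database compatibility level (110 here)
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 110;

-- 2. Enable Query Store and let it collect a baseline of your workload
ALTER DATABASE YourDatabase SET QUERY_STORE = ON;
ALTER DATABASE YourDatabase SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- 3. Change to the native compatibility level of the new version (140 for SQL Server 2017)
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 140;

-- 4. If a query regresses, force the last known good plan that Query Store captured
--    (run in the context of YourDatabase; SSMS also exposes this in the Query Store reports)
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 17;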

You also have all of the other new “knobs” of database-scoped configuration options, query-level hints, and trace flags available to you. You may have to do some additional work on some queries with USE HINT query hints. Ideally, you would have done enough testing so that you already have a pretty good idea of the “best” combination of these settings for your workload, but many organizations don’t actually do that.

Keep in mind that for each of the new query processing features added over the last two versions (Adaptive Query Processing in SQL Server 2017 and Intelligent Query Processing in SQL Server 2019), Microsoft exposes the ability to disable the specific behavior at the database-scoped configuration or query-level USE HINT scope. Microsoft generally recommends that if you do find regressions related to a specific feature, you try disabling it at these lower granularities first, so you can still benefit from the rest of the improvements you get from the latest database compatibility level.
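As a hedged sketch (hint and configuration option names vary by version and build, and the table name here is hypothetical), disabling just batch mode adaptive joins might look like this:

-- Disable batch mode adaptive joins for a single query (USE HINT available in SQL Server 2017 CU10 and later)
SELECT ot.CustomerID, SUM(ot.Quantity) AS TotalQuantity
FROM dbo.OrderTotals AS ot   -- hypothetical table
GROUP BY ot.CustomerID
OPTION (USE HINT ('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));

-- Or disable it for the whole database (this form of the option is from SQL Server 2019)
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ADAPTIVE_JOINS = OFF;

-- Check which USE HINT names your exact build actually supports
SELECT name FROM sys.dm_exec_valid_use_hints;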

Query Tuning Assistant

Microsoft is shipping a new tool called Query Tuning Assistant (QTA) in SSMS 18.0. QTA guides you through the recommended database compatibility level upgrade process in a wizard fashion, collecting the baseline workload in Query Store, bumping up the database compatibility level, and then comparing performance with the post-upgrade workload collection. At the end of this process, if performance regressions are detected, rather than moving back to the previously known good plan, QTA will suggest hint-based improvements that can be deployed for individual queries (using plan guides), without necessarily having to move back to the legacy CE. It also gives you some ideas (indirectly) for how you can modify problematic queries that have CE-related regression issues, when you have that option.

SQL Server Diagnostic Information Queries for February 2019

This month, I have just made some minor improvements to most of the query sets, mainly in the comments and documentation.

I have a T-SQL script that you can use to check whether your instance of SQL Server has been patched to mitigate against the Spectre/Meltdown CPU vulnerability. This works for SQL Server 2008 through SQL Server 2017, for on-premises and cloud-based VM (IaaS) usage. You can get the query for this here.

I often make additional minor updates to the queries periodically during the month, so if you are in doubt, downloading the latest version is always a good idea.

Rather than having a separate blog post for each version, I have just put the links for all ten major versions in this single post. There are two separate links for each version. The first one on the top left is the actual diagnostic query script, and the one below on the right is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the latest versions of these queries for Azure SQL Database, SQL Server 2019, SQL Server 2017, SQL Server 2016 SP2, SQL Server 2016, and SQL Server 2014:

Azure SQL Database Diagnostic Information Queries

Azure SQL Database Blank Results Spreadsheet

SQL Server 2019 Diagnostic Information Queries

SQL Server 2019 Blank Results Spreadsheet

SQL Server 2017 Diagnostic Information Queries

SQL Server 2017 Blank Results Spreadsheet

SQL Server 2016 SP2 Diagnostic Information Queries

SQL Server 2016 SP2 Blank Results Spreadsheet

SQL Server 2016 Diagnostic Information Queries

SQL Server 2016 Blank Results Spreadsheet

SQL Server 2014 Diagnostic Information Queries

SQL Server 2014 Blank Results Spreadsheet

Here are links to the most recent versions of these scripts for SQL Server 2012 and older:

Since SQL Server 2012 and older are out of Mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward.  I started this policy a while ago, and so far, I have not heard any complaints.

SQL Server 2012 Diagnostic Information Queries

SQL Server 2012 Blank Results Spreadsheet

SQL Server 2008 R2 Diagnostic Information Queries

SQL Server 2008 R2 Blank Results Spreadsheet

SQL Server 2008 Diagnostic Information Queries

SQL Server 2008 Blank Results Spreadsheet

SQL Server 2005 Diagnostic Information Queries

SQL Server 2005 Blank Results Spreadsheet

The basic instructions for using these queries are to run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (and my comments on how to interpret the results), and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while being connected to the master database is a very common mistake that I see people making when they run these queries.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect. I am not planning on moving these to Github any time soon.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases in 80 compatibility mode, which breaks many DMV queries, or that someone is running an incorrect version of the script for their version of SQL Server.

It is very important that you are running the correct version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.
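The guard is conceptually something like this simplified sketch (not the exact query from the scripts, which also look at Service Pack and build levels); this example assumes the SQL Server 2017 query set:

-- Simplified sketch of a version guard: stop if this is not a SQL Server 2017 (14.x) instance
IF CONVERT(varchar(128), SERVERPROPERTY('ProductVersion')) NOT LIKE '14.%'
BEGIN
    RAISERROR('This script is intended for SQL Server 2017 (version 14.x) only.', 16, 1);
    RETURN;
END;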

If you want to understand how to better run and interpret these queries, you should consider listening to my five related Pluralsight courses, which are SQL Server 2017: Diagnosing Performance Issues with DMVs, SQL Server 2017: Diagnosing Configuration Issues with DMVs, SQL Server 2014 DMV Diagnostic Queries – Part 1, SQL Server 2014 DMV Diagnostic Queries – Part 2, and SQL Server 2014 DMV Diagnostic Queries – Part 3. All five of these courses are pretty short and to the point, at 164, 106, 67, 77, and 68 minutes respectively. Listening to these five courses is really the best way to thank me for maintaining and improving these scripts…

Please let me know what you think of these queries, and whether you have any suggestions for improvements. Thanks!

New Pluralsight Course Published

On January 10, 2019, my latest Pluralsight course, SQL Server 2017: Diagnosing Performance Issues with DMVs, was published! This makes fifteen courses that I have done for Pluralsight. Here is the official course description:

Learn how to easily query SQL Server 2017 for performance information to help identify and fix issues that can affect performance and stability. This course is applicable to anyone responsible for SQL Server 2017 and earlier versions.

Essentially, I walk you through the activity and performance-related queries from my SQL Server 2017 Diagnostic Queries, demonstrating each query and explaining how to interpret its results. Knowing how to understand what each diagnostic query reveals is extremely useful as you are trying to determine what is going on with your SQL Server instance or database.

This course is a companion to my earlier SQL Server 2017: Diagnosing Configuration Issues with DMVs course that was published on July 19, 2018.

Despite the title, this course is still applicable for older versions of SQL Server, since many of the queries that I demonstrate and discuss will work on older versions of SQL Server. Ideally, you should be using the correct version of my SQL Server Diagnostic Queries that matches your version of SQL Server so that all of the queries will work, but the concepts are still relevant for older versions of SQL Server.

 

You can see all of my Pluralsight courses here.

The Importance of Database Compatibility Level in SQL Server

Prior to SQL Server 2014, the database compatibility level of your user databases was not typically an important property that you had to be concerned with, at least from a performance perspective. Unlike the database file version (which gets automatically upgraded when you restore or attach a down-level database to an instance running a newer version of SQL Server, and which can never go back to the lower level), the database compatibility level can be changed to any supported level with a simple ALTER DATABASE SET COMPATIBILITY_LEVEL = xxx command.

You are not stuck at any particular supported database compatibility level, and you can change the compatibility level back to any supported level that you wish. In many cases, most user databases never had their compatibility levels changed after a migration to a new version of SQL Server. This usually didn’t cause any issues unless you actually needed a new feature that was enabled by the latest database compatibility level.
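For example (the database name is hypothetical, and the levels shown assume a SQL Server 2017 instance), checking and changing the compatibility level looks like this:

-- Check the current compatibility level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourDatabase';

-- Move to the SQL Server 2017 native level...
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 140;

-- ...and back down again if needed (unlike the database file version, this is reversible)
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 110;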

With SQL Server 2012 and older, the database compatibility level was mainly used to control whether new features introduced with a particular version of SQL Server were enabled or not and whether non-supported old features were disabled or not. The database compatibility level was also used as a method to maintain better backwards application compatibility with old versions of SQL Server. If you didn’t have time to do full regression testing with the newest compatibility level, you could just use the previous compatibility level until you could test and modify your applications if needed.

Table 1 shows the major versions of SQL Server and their default and supported database compatibility levels.

SQL Server Version     Database Engine Version    Default Compatibility Level    Supported Compatibility Levels
SQL Server 2019        15                         150                            150, 140, 130, 120, 110, 100
SQL Server 2017        14                         140                            140, 130, 120, 110, 100
SQL Server 2016        13                         130                            130, 120, 110, 100
SQL Server 2014        12                         120                            120, 110, 100
SQL Server 2012        11                         110                            110, 100, 90
SQL Server 2008 R2     10.5                       100                            100, 90, 80
SQL Server 2008        10                         100                            100, 90, 80
SQL Server 2005        9                          90                             90, 80
SQL Server 2000        8                          80                             80

Table 1: SQL Server Versions and Supported Compatibility Levels


New Database Creation

When you create a new user database in SQL Server, the database compatibility level will be set to the default compatibility level for that version of SQL Server. So for example, a new user database that is created in SQL Server 2017 will have a database compatibility level of 140. The exception to this is if you have changed the compatibility level of the model system database to a different supported database compatibility level, then a new user database will inherit its database compatibility level from the model database.
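A quick way to see this behavior for yourself (the test database name is hypothetical):

-- New user databases inherit their compatibility level from the model database
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'model';

CREATE DATABASE CompatLevelTest;    -- hypothetical test database

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'CompatLevelTest';

DROP DATABASE CompatLevelTest;      -- clean up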


Database Restore or Attach

If you restore a full database backup that was taken on an older version of SQL Server to an instance that is running a newer version of SQL Server, then the database compatibility level will stay the same as it was on the older version of SQL Server, unless the old database compatibility level is lower than the minimum supported database compatibility level for the newer version of SQL Server. In that case, the database compatibility level will be changed to the lowest supported version for the newer version of SQL Server.

For example, if you were to restore a SQL Server 2005 database backup to a SQL Server 2017 instance, the database compatibility level for that restored database would be changed to 100. You will get the same behavior if you detach a database from an older version of SQL Server, and then attach it to a newer version of SQL Server.

This general behavior is not new, but what is new and important is what else happens when you change a user database to database compatibility level 120 or newer. These additional changes, that can have a huge impact on performance, don’t seem to be widely known and understood in the wider SQL Server community. I still see many database professionals and their organizations just doing what I call “blind upgrades” where they go from SQL Server 2012 or older to SQL Server 2014 or newer (especially SQL Server 2016 and SQL Server 2017), where they don’t do any serious performance regression testing to understand how their workload will behave on the new native compatibility level and whether the additional configuration options that are available will have a positive effect or not.

Database Compatibility Level 120

This was when the “new” cardinality estimator (CE) was introduced. In many cases, most of your queries ran faster when using the new cardinality estimator, but it was fairly common to run into some queries that had major performance regressions with the new cardinality estimator. Using database compatibility level 120 means that you will be using the “new” CE unless you use an instance-wide trace flag or a query-level query hint to override it.

Joe Sack wrote the classic whitepaper “Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator” that explains the background and behavior of this change back in April of 2014.  If you saw performance regressions on some queries with the new CE, SQL Server 2014 did not have that many options for alleviating the performance issues caused by the new CE. Joe’s whitepaper covers those options in great detail, but essentially, you were limited to instance-level trace flags or query-level query hints to control which cardinality estimator was used by the query optimizer, unless you wanted to revert back to database compatibility level 110 or lower.
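For example, on SQL Server 2014 under database compatibility level 120, you could fall back to the legacy CE for a single problem query with a QUERYTRACEON hint (a hedged sketch with hypothetical table names; QUERYTRACEON normally requires sysadmin rights unless it is wrapped in a plan guide or a stored procedure):

-- TF 9481 forces the legacy (pre-2014) cardinality estimator for just this query
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o                 -- hypothetical tables
INNER JOIN dbo.Customers AS c
    ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerID
OPTION (QUERYTRACEON 9481);

-- The opposite direction also exists: TF 2312 forces the new CE for a query
-- that is running under an older database compatibility level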

The reason I put the “new” CE in quotes is that there is no longer a single “new” CE. Each new version of SQL Server since SQL Server 2014 has CE and query optimizer changes tied to the database compatibility level. The more accurate terminology that is relevant on SQL Server 2016 and newer is CE120 for compatibility level 120, CE130 for compatibility level 130, CE140 for compatibility level 140, and CE150 for compatibility level 150.


Database Compatibility Level 130

When you are on SQL Server 2016 or newer, using database compatibility level 130 will use CE130 by default, and will enable a number of other performance related changes. The effects of global trace flags 1117, 1118, and 2371 are enabled with database compatibility level 130. You will also get the effect of global trace flag 4199 for all query optimizer hotfixes that were released before SQL Server 2016 RTM.

SQL Server 2016 also introduced database-scoped configuration options, which give you the ability to control some behaviors that were formerly configured at the instance level, using an ALTER DATABASE SCOPED CONFIGURATION command. The two most relevant database-scoped configuration options for this discussion are LEGACY_CARDINALITY_ESTIMATION and QUERY_OPTIMIZER_HOTFIXES.

LEGACY_CARDINALITY_ESTIMATION enables the legacy CE (CE70) regardless of the database compatibility level setting. It is equivalent to trace flag 9481, but it only affects the database in question, not the entire instance. It allows you to set the database compatibility level to 130 in order to get the other functional and performance benefits, but use the legacy CE database-wide (unless overridden by a query-level query hint).

The QUERY_OPTIMIZER_HOTFIXES option is equivalent to trace flag 4199, but at the database level. SQL Server 2016 enables all query optimizer hotfixes that were released before SQL Server 2016 RTM when you use the 130 database compatibility level (without requiring trace flag 4199). If you do enable TF 4199 or enable QUERY_OPTIMIZER_HOTFIXES, you will also get all of the query optimizer hotfixes that were released after SQL Server 2016 RTM.

SQL Server 2016 SP1 also introduced the USE HINT query hints that are easier to use and understand than the older QUERYTRACEON query hints that you had to use in SQL Server 2014 and older. This gives you even more fine-grained control over optimizer behavior that is related to database compatibility level and the version of the cardinality estimator that is being used. You can query sys.dm_exec_valid_use_hints to get a list of valid USE HINT names for the exact build of SQL Server that you are running.
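Putting those pieces together, here is a hedged sketch of how these options combine on SQL Server 2016 SP1 or later (the table and column names are hypothetical):

-- Stay on compatibility level 130, but use the legacy CE for the whole database
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Also get post-RTM query optimizer hotfixes for this database (similar in effect to TF 4199, but database-scoped)
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;

-- Override the database-wide legacy CE setting for a single query with a USE HINT,
-- so just this query uses the CE that matches the current compatibility level
SELECT p.ProductID, SUM(s.Quantity) AS TotalSold
FROM dbo.Sales AS s                     -- hypothetical tables
INNER JOIN dbo.Products AS p
    ON s.ProductID = p.ProductID
GROUP BY p.ProductID
OPTION (USE HINT ('FORCE_DEFAULT_CARDINALITY_ESTIMATION'));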


Database Compatibility Level 140

When you are on SQL Server 2017 or newer, using database compatibility level 140 will use CE140 by default. You also get all of the other performance related changes from 130, plus new ones as detailed here. SQL Server 2017 introduced the new adaptive query processing features, and they are enabled by default when you are using database compatibility level 140. These include batch mode memory grant feedback, batch mode adaptive joins, and interleaved execution.


Database Compatibility Level 150

When you are on SQL Server 2019 or newer, using database compatibility level 150 will use CE150 by default. You also get all of the other performance-related changes from 130 and 140, plus new ones as detailed here. SQL Server 2019 adds even more performance improvements and behavior changes that are enabled by default when a database is using compatibility level 150. A prime example is scalar UDF inlining, which automatically inlines many scalar UDFs in your user databases. This may be one of the most important performance improvements for some workloads.

Another example is the intelligent query processing feature, which is a superset of the adaptive query processing feature in SQL Server 2017. New features include table variable deferred compilation, approximate query processing, and batch mode on rowstore.

There are also sixteen new database-scoped configuration options (as of CTP 2.2) that give you database-level control of more items that are also affected by trace flags or the database compatibility level. They give you even more fine-grained control of these higher-level changes that are enabled by default with database compatibility level 150.


Conclusion

Migrating to a modern version of SQL Server (meaning SQL Server 2016 or newer) is significantly more complicated than it was with legacy versions of SQL Server. Because of the changes associated with the various database compatibility levels and various cardinality estimator versions, it is actually very important to put some thought, planning, and actual testing into what database compatibility level you want to use on the new version of SQL Server that you are migrating your existing databases to.

Microsoft’s recommended upgrade process is to upgrade to the latest SQL Server version, but keep the source database compatibility level. Then, enable Query Store on each database and collect baseline data on the workload. Next, you set the database compatibility level to the latest version, and then use Query Store to fix your performance regressions by forcing the last known good plan.

You really want to avoid a haphazard “blind” migration where you are blissfully unaware of how this works and how your workload will react to these changes. Changing the database compatibility level to an appropriate version and using the appropriate database scoped configuration options, along with appropriate query hints where absolutely necessary, is extremely important with modern versions of SQL Server.

Another thing to consider (especially for ISVs) is that Microsoft is starting to really push the idea that you should think about testing and certifying your databases and applications to a particular database compatibility level rather than a particular version of SQL Server. Microsoft provides query plan shape protection when the new SQL Server version (target) runs on hardware that is comparable to the hardware where the previous SQL Server version (source) was running and the same supported database compatibility level is used both at the target SQL Server and source SQL Server.

The idea here is that once you have tested and certified your applications on a particular database compatibility level, such as 130, you will get the same behavior and performance if you move that database to a newer version of SQL Server (such as SQL Server 2017 or SQL Server 2019) as long as you are using the same database compatibility level and you are running on equivalent hardware.





More CPU Competition Coming for Intel

On January 9, 2019, AMD CEO and President Dr. Lisa Su presented a CES 2019 Keynote where she demonstrated (at 1:25:00 in the video) a 7nm, eight core/sixteen thread, 3rd Generation AMD Ryzen “Matisse” desktop processor running the Cinebench R15 Multithreaded (MT) benchmark vs. a 14nm, eight core/sixteen thread Intel Core i9-9900K “Coffee Lake” desktop processor.

These two systems were as identical as possible (outside of the motherboard and processor), meaning identical 2666MHz memory, video card, and storage. The Intel system was running at stock clock speeds vs. an engineering sample Ryzen running at lower than final clock speeds. The Cinebench MT score for Intel was 2040, while the Cinebench MT score for the AMD Ryzen 2 was 2057. The Intel system was using 179.8 watts, while the AMD system was using 133.4 watts during the Cinebench MT benchmark (which pegs all of the cores in the system). This is extremely significant!

Dr. Su held up one of these Ryzen 2 processors, showing a 14nm I/O die on the left and the 7nm eight core/sixteen thread Zen 2 processor die on the top right. It was pretty obvious that there was room on the package for another identical Zen 2 processor die on the bottom right. During interviews over the next couple of days, Dr. Su basically confirmed that the Zen 2 family had room for an extra processor die and that we should expect a higher core count SKU. AMD purposely used an eight core CPU for the demo so that they would have the same core count as Intel’s current top of the line processor.

It appears that what AMD demonstrated was actually a mid-range Ryzen 5 class SKU, running with artificially slow memory, at a non-final, lower clock speed, that still had a slightly higher Cinebench MT score (with the same core/thread count) than the current best mainstream desktop processor that Intel has available. Since the core/thread counts were the same between the two systems, this means that the single-threaded performance should be about the same. If this is true, then this would be the first time in a very long time that AMD has had better single-threaded performance than Intel. The final version of these Zen 2 processors should perform even better than this early sample.

You may be wondering what this has to do with server processors and with SQL Server. It turns out that the upcoming 7nm AMD EPYC “Rome” processors use the same Zen 2 architecture and 7nm manufacturing process as these Zen 2 mainstream desktop processors. If the 7nm AMD EPYC Rome processors end up having better single-threaded performance than the upcoming 14nm Intel Cascade Lake-SP processors (which I think is pretty likely), then AMD is going to be extremely competitive in the server market and for SQL Server usage. This is especially true if you consider AMD’s advantage in memory density, PCIe lanes (which will probably be Gen 4.0), and hardware cost. Dr. Su also did a demonstration of a one-socket AMD EPYC Rome system vs. a two-socket Intel Xeon 8180 system, showing the AMD system winning.

Both the desktop Ryzen 2 and the server EPYC Rome processors are due to be released in mid-2019.

AdoredTV has their analysis here, while UFD Tech has their analysis here. Anandtech has a good writeup here.



SQL Server Diagnostic Information Queries for January 2019

This month, I have just made some more minor improvements to most of the query sets, mainly in the comments and documentation.

The download links and usage instructions for these queries are identical to the ones in the SQL Server Diagnostic Information Queries for February 2019 post above.

New Year Technology Maintenance

With the start of a new year, and with many people having some time off work, this seems like a good time to do some maintenance and updates on the computers and related technology items and devices in your home (and the homes of your immediate relatives and friends if possible). While many devices do a pretty decent job of automatically pulling down and installing updates by themselves (or they can be configured to do so), you probably have a number of devices that need some attention and help.

These might include your laptop, desktop, and tablet computers, your wireless router, your network attached storage (NAS), managed switches, printers, home automation gear, UPSs, and home AV gear; the list can go on quite a bit.

For Windows-based computers that are owned/managed by individuals (rather than being company-owned), you should manually check for updates by clicking the “Check for Updates” button, and also make sure that the “Give me updates for other Microsoft products when I update Windows” option is enabled, under Advanced options, so that you will get updates for things like Microsoft Office. This is important to do even if the machine is supposed to automatically check for updates, since many people (especially non-computer savvy people) don’t ever manually check for updates and they also may not leave their machines running for extended periods, so it is very common to find them woefully out of date.


Figure 1: Windows Update Dialog




Figure 2: Windows Update Advanced Options


It is also a good idea to run Windows Disk Cleanup and click on the “Clean up system files” button, so that you will have the option of cleaning up Windows Update files and even removing previous versions of Windows (in Windows 10), meaning previous semi-annual update versions. Removing those old versions can often free up 20-40GB of disk space, although you won’t be able to roll back to the old versions anymore. Be warned that running Disk Cleanup can take quite a bit of time, even with a fast SSD.



Figure 3: Disk Cleanup Default Dialog



Figure 4: Disk Cleanup System Files Dialog


After getting Windows updated and running Disk Cleanup, you should make an effort to see if there are any BIOS/firmware or driver updates for each device. Most large hardware vendors have utilities that you can use to automatically look for any updates, and then download and install the updates. You may have to go to the vendor’s support site to download and install the utility (or update an old version of the utility), but once you have done that, getting the latest updates is usually pretty easy.

If the machine was built from parts or is not from a major vendor, you will probably need to do some research and manual updating, once you figure out what you are dealing with. A couple of good tools to identify the components in a strange computer are CPU-Z and HWiNFO64, which you can have on a USB thumb drive. These will let you identify the motherboard model and BIOS version, along with many other components, such as video cards and storage devices. Most motherboard vendors also have utilities that can be used to check for and install the latest updates for that motherboard.



Figure 5: CPU-Z Mainboard Tab



Figure 6: HWiNFO64 System Summary


Many individual components in a system may require updates, such as video cards and storage devices. Once you know what components you are dealing with, you should be able to go to the vendor’s support site and either use a utility to check for and install updates, or simply manually do it yourself. For example, Samsung and Intel have utilities to maintain and update their storage devices and NVidia and AMD have utilities to maintain and update their video cards.


Other Devices

After you have updated all of your computers, you should try to update all your other updateable devices as much as you can. I’m talking about things like routers, managed network switches, printers, home automation gear, AV components and the like. Many devices will have a web interface or downloadable utility program that you can use to check for and install firmware and software updates. Often they will have a built-in method for checking for updates, buried somewhere in the setup or configuration interface. You should make the effort to find whatever method is necessary, and then check for and install updates.

Perhaps you are wondering why you should go to this trouble, for yourself and for your family and friends? The main reason is that it will make their devices more secure and work better, and it will also make it a little more difficult for bad things like viruses, trojans, and worms to spread so quickly in the wild. Since you are probably in the technology field if you are reading this, your friends and family probably rely on you (at least to a certain extent) for their computer and technology support. For example, if you are a DBA, that means you are “in computers”, and you must know everything there is to know about desktop and network support, right?

Here are some links for some of the more common brands and devices you may run into.


Firmware Updates


NETGEAR Download Center

Marantz Updates and Upgrades

HP Customer Support

Brother Support and Downloads


Software Updates


Samsung Magician

Intel Solid State Drive Toolbox

Synology Download Center

Sonos Support


Driver Updates


NVidia GeForce Drivers

AMD Radeon Drivers

ASUS Download Center

ASRock Download Center

Gigabyte Download Center

MSI Download Center







How to Check if Your Processor Supports Second Level Address Translation (SLAT)

If you want to run Docker for Windows, you will need to be running Windows 10 Professional, Windows 10 Enterprise, Windows 10 Pro for Workstations, or Windows 10 Education Edition. Since Docker for Windows requires Microsoft Hyper-V, you will also need a processor on your host machine that supports second level address translation (SLAT) in order to run Hyper-V.  You will also want/need SLAT support for pretty much any other hypervisor that you may be using.

This should not be a problem in most cases, since nearly all systems that are running Windows 10 will have a new enough Intel or AMD processor so that SLAT support won’t be an issue. For AMD, SLAT support, which they call Rapid Virtualization Indexing (RVI), was introduced in the Barcelona microarchitecture in late 2007. For Intel, SLAT support, which they call Extended Page Tables (EPT), was introduced with the Westmere microarchitecture in early 2010.

If you want to actually check your system to confirm that you have SLAT support in your processor (before you install Hyper-V), here is how to do it:


1. Download Coreinfo from this link

2. Extract the zip file, and then copy the Coreinfo.exe file to the root of your C: drive

3. Open a command prompt as an administrator

4. Navigate to the root of your C: drive in the command prompt

5. Run this command: coreinfo -v (the -v switch shows virtualization-related features)

If you see an asterisk on the feature line, that means the processor supports that feature. If there is a dash on the feature line, the processor does not support that feature.



Figure 1: Results of Coreinfo -v on an Intel Core i7-8700K Processor


If you already have Hyper-V installed, you will get inaccurate results, as shown in Figure 2. Just to be clear, this AMD processor does have SLAT support, but the fact that Hyper-V is running gives the wrong results.



Figure 2: Results of Coreinfo -v on an AMD Ryzen Threadripper 2950X Processor