Properly configuring Windows and SQL Server to get the best performance from your server hardware is an important task for database administrators. There is a lot of information available online with different recommendations about how to configure your servers for the best performance. The challenge is knowing which recommendations are correct and which advice you should follow when setting up a new server or tuning the configuration of an existing one.
Last week I received an email from a member of the community who had attended a user group meeting where the presenter was talking about configuration options for SQL Server on Windows Server 2012. The source of the information being presented was actually a Microsoft whitepaper titled Performance Tuning Guidelines for Windows Server 2012, which has a section titled “Performance Tuning for OLTP Workloads”. While the document mentions the TPC-E benchmark from the Transaction Processing Performance Council, what it does not make clear is that the recommendations in this section DO NOT apply to the majority of the SQL Server installations that exist in the world; they are strictly for servers trying to achieve a high score on the benchmark. The same section existed in the whitepaper Performance Tuning Guidelines for Windows Server 2008 R2, but under the title “Performance Tuning for TPC-E Workload”, which is a more accurate indication of what it applies to.
If you want to know why this paper doesn’t apply to most workloads, read on. If you want to know how to configure a SQL Server with general best practices, jump down to the “How should I really configure SQL?” section at the end of this post.
The “SQL Server Tunings for OLTP Workloads” section in the whitepaper has a lot of items that I refer to as tuner specials because they exist for benchmarks like TPC-C and TPC-E, and they are documented because the benchmarks require full disclosure of the configuration before the results can be published. Most of the tuner specials can be found in KB article 920093. The startup configuration options mentioned in the whitepaper include:
- -x: Disable SQL Server perfmon counters
- -T661: Disable the ghost record removal process
- -T834: Use Microsoft Windows large-page allocations for the buffer pool
- -T652: Disable page-prefetching scans
- -T8744: Disallow prefetch for ranges
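For reference, options like these are added as startup parameters for the instance (through SQL Server Configuration Manager, for example), not run as queries. You can check which trace flags are currently active on a running instance with a quick sketch like this:

```sql
-- Lists every trace flag enabled globally on the instance,
-- including any that were supplied as -T startup parameters
DBCC TRACESTATUS(-1);
```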
Of these, the only one that applies to general systems running SQL Server is -T834, which uses large-page allocations on servers with large amounts of RAM installed. Even this trace flag has some important considerations around its usage, since the buffer pool has to allocate its total size at startup from contiguous memory. If a contiguous allocation is not possible, the instance tries progressively smaller sizes until it finds a contiguous memory region to allocate from. This can significantly increase the instance startup time and is explained in further detail by Bob Ward in his blog post SQL Server and Large Pages Explained.
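If you do decide to test -T834 (after granting the service account the Lock Pages in Memory privilege and restarting), a hedged sketch of how you might confirm large pages are actually in use:

```sql
-- How much memory the instance has allocated with large pages (0 = none)
SELECT large_page_allocations_kb
FROM sys.dm_os_process_memory;

-- The startup messages in the error log also report large-page allocations
EXEC sys.xp_readerrorlog 0, 1, N'large page';
```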
While there are some workloads where using all these startup options can be beneficial, they don’t apply to general SQL Server workloads.
The most interesting thing I found in the whitepaper was the recommendation to set the ‘priority boost’ value to 1 in sp_configure. This goes against Microsoft’s own recommendations for this configuration option. When I am doing a health check of a server for a client, this configuration option is a big, red flashing light: when I see it set to 1, it usually means I’m going to find a bunch more incorrectly configured options. Additionally, this option has been marked as deprecated since SQL Server 2008 R2 was released and will be removed from a future release of the product. It is used for TPC-E to gain a slight advantage on a server where all unnecessary background services have been disabled, SQL Server is the only thing running, and the goal is strictly to achieve the best benchmark result.
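If a health check turns up ‘priority boost’ set to 1, putting it back to the default is straightforward; a sketch (note that this particular setting requires an instance restart to take effect):

```sql
-- 'priority boost' is an advanced option; 0 is the default and the
-- recommended value for production servers
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'priority boost', 0;
RECONFIGURE;
```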
How should I really configure SQL?
My colleague Glenn Berry (Blog | Twitter) knows a lot about performance tuning SQL Server hardware configurations, and he knows a lot about how to provision and configure a new SQL Server instance, which he shared in his three-part series on SimpleTalk (Part 1 | Part 2 | Part 3) last year and in a new Pluralsight online course this year. If you’re looking for a guide to setting up a new SQL Server, I’d start there. I published my own checklist for new server configuration back in 2010, but upon reviewing some of the recommendations I think I’ll publish a revised checklist in a future post (for example, today I’d leave hyper-threading turned on and set ‘max degree of parallelism’ based on the NUMA configuration and workload).
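As one illustration of basing ‘max degree of parallelism’ on the NUMA configuration, a common starting point is to cap it at the number of logical processors in a single NUMA node and then adjust for the workload; a sketch (the value 8 below is only an example, not a recommendation):

```sql
-- Inspect the NUMA layout: nodes and logical CPUs per node
SELECT parent_node_id AS numa_node, COUNT(*) AS logical_cpus
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE'
GROUP BY parent_node_id;

-- Example only: cap parallelism at one NUMA node's worth of cores
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```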
We’ve been helping a lot of clients with upgrading to SQL Server 2012 on Windows Server 2012 – if you need help, drop us a line.
3 thoughts on “Performance Tuning SQL Server on Windows Server 2012”
Omg some of these recommendations are horrible. What an awful set of advice to put out there.
I am using Windows Server 2012 R2 and my system has 64 GB of RAM. When management works in a specific tool, my SQL services use at least 28 GB of RAM, and in this case performance across the whole network becomes very slow. In this scenario, what should I do to get the best performance across the whole network?
Unfortunately, it’s impossible to answer this question based on the limited information provided.