Properly configuring SQL Server is an important factor in having a stable environment that performs well.  While there is a lot of good guidance available for how to best configure SQL Server, the specifics of any given implementation are a very big “It Depends…”, and one of the things that we do in our Immersion Events for SQL Server is teach you why it depends so that you can make the appropriate decisions about your environment’s configuration.  Last week, a Microsoft employee published a calculator on the MSDN Blogs site to determine where you should set ‘max server memory’ (http://blogs.msdn.com/b/sqlsakthi/archive/2012/05/19/cool-now-we-have-a-calculator-for-finding-out-a-max-server-memory-value.aspx) for your instance of SQL Server.  While the intent was good, the outcome is incorrect and will most likely lead to problems if implemented.

Note: There have been edits to the above-mentioned post since this was first published stating that it is not an official Microsoft recommendation and that it is targeted at 100 level users.  However, if the goal is to keep 100 level admins from having problems, a much more conservative set of values should have been used for where 'max server memory' is set, to ensure that they didn't run into memory pressure issues with their configuration.  Read the rest of this post to see why I say this.

How to appropriately configure ‘max server memory’ is something I’ve blogged and talked about repeatedly over the last few years, because it is important to server performance and stability on 64-bit platforms.  My most recent post, How much memory does my SQL Server actually need?, offered two methods of calculating a good value based on the system configuration, one a bit more reserved than the other.  My general recommendation is to use the calculation from my book, Troubleshooting SQL Server: A Guide for the Accidental DBA: reserve 1 GB of RAM for the OS, 1 GB for each 4 GB of RAM installed from 4–16 GB, and then 1 GB for every 8 GB of RAM installed above 16 GB.  This isn’t an overly technical calculation, but it has worked well and will generally set ‘max server memory’ low enough that the server is stable and performs reliably.  However, it doesn’t necessarily account for everything that might be going on in a specific environment.  For example, if you have a FusionIO card installed, there is a memory requirement associated with the drivers for the SSD, and that requirement can be quite significant depending on the size of your SSD and the block size you are using (http://www.sqlservercentral.com/articles/SSD+Disks/69693/).  HBA drivers can also be large memory consumers, and these requirements can result in memory pressure if you don’t properly account for them in your memory configuration.
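
To put the formula in concrete terms, here is a rough T-SQL sketch of it; the variable name and the 128GB example value are just placeholders, and the result is a starting point to monitor from, not a hard rule:

-- A rough sketch of the reservation formula: 1GB for the OS, 1GB for each 4GB
-- of RAM installed from 4-16GB, and 1GB for every 8GB of RAM installed above 16GB.
-- Set @TotalRAMGB to the physical memory installed in the server (example value shown).
DECLARE @TotalRAMGB int = 128;
DECLARE @ReservedGB int = 1                                    -- 1GB for the OS
    + CASE WHEN @TotalRAMGB > 16 THEN 3                        -- full 4-16GB range covered
           WHEN @TotalRAMGB > 4  THEN (@TotalRAMGB - 4) / 4    -- 1GB per 4GB from 4-16GB
           ELSE 0 END
    + CASE WHEN @TotalRAMGB > 16 THEN (@TotalRAMGB - 16) / 8   -- 1GB per 8GB above 16GB
           ELSE 0 END;

SELECT @TotalRAMGB - @ReservedGB AS [Starting 'max server memory' (GB)],
       (@TotalRAMGB - @ReservedGB) * 1024 AS [Value in MB for sp_configure];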

There’s lots of bad advice out there about how to set ‘max server memory’ for a given instance, so you might be wondering why I would point this one out specifically.  It’s really simple: because this person represents Microsoft, and that means people immediately follow the recommendations, even when they are wrong.  If we take a look at the online calculator that was published on this blog and use an example SQL Server with 24 cores and 128GB RAM installed, running nothing but corporate anti-virus, the calculator recommends a ‘max server memory’ value of 122.375GB.  From practical experience working with dedicated SQL Servers over the last 8 years, that is going to be too high; the formula above would reserve 18GB on that server and start ‘max server memory’ at roughly 110GB.

[Image: screenshot of the online calculator’s recommended ‘max server memory’ value]

If there is anyone out there running a SQL Server with 128GB RAM installed and a ‘max server memory’ value this high, I can almost guarantee that you are hitting memory pressure issues in the Windows OS, and a quick peek at your ring buffer memory notifications in sys.dm_os_ring_buffers would prove it.

The blog post that introduced the tool doesn’t provide any information about how to tell if you’ve set ‘max server memory’ too high, or what signs suggest lowering the value that has been set; it only covers items that tell you SQL Server is under memory pressure and could use more memory.  As I mentioned in my Lock Pages in Memory article, SQL Server has a built-in component of the SQLOS, the Resource Monitor, that monitors the QueryMemoryResourceNotification Windows API to get status about OS-level memory availability on the server.  If the Windows OS is under memory pressure, it sets a low memory notification that the Resource Monitor thread detects, forcing the external clock hands on the internal caches to begin sweeps that clean up and reduce memory usage, allowing the process to free memory back to the OS.  You can see this happening on a server where ‘max server memory’ has been set too high by running the following query:

-- System Memory Usage
SELECT 
    EventTime,
    record.value('(/Record/ResourceMonitor/Notification)[1]', 'varchar(max)') as [Type],
    record.value('(/Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'bigint') AS [Avail Phys Mem, Kb],
    record.value('(/Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint') AS [Avail VAS, Kb]
FROM (
    SELECT
        -- convert the record's ms-since-start timestamp into a wall-clock time
        DATEADD (ss, (-1 * ((cpu_ticks / CONVERT (float, ( cpu_ticks / ms_ticks ))) - [timestamp])/1000), GETDATE()) AS EventTime,
        CONVERT (xml, record) AS record
    FROM sys.dm_os_ring_buffers
    CROSS JOIN sys.dm_os_sys_info
    WHERE ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR') AS tab
ORDER BY EventTime DESC

The output of the above script on a server that has been set incorrectly can be seen below:

[Image: query output showing the ring buffer memory notifications cycling between low and high]

The SQLOS responded to the low memory condition in the OS by reducing its memory usage, and then the Windows OS changed its notification from steady to high memory, which signals the SQLOS that it can once again grow its memory usage; that growth results in the OS setting a low memory notification once again, and the process repeats.  If you follow the recommended settings from the calculator that was posted, this exact scenario will almost certainly result.

The takeaway from this post: don't blindly follow information or use tools that have been posted online, even when they come from someone at Microsoft.  As Paul says, there is nothing black and white with SQL Server; everything is an "It Depends…" (except for AutoShrink, which should NEVER be turned on!).  When you are configuring a SQL Server initially, it is best to start off with a lower 'max server memory' and then monitor the server over time to see its memory usage trends.  If the server has memory to spare, the Available MBytes counter will show the extra gigabytes of memory that could be used, and you can slowly adjust 'max server memory' up based on what is available, leaving at least 150-300MB available to the OS at all times (I specify a range because it depends, but I stick to the high side of 300MB+ available at all times).
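
If you'd rather spot-check from inside SQL Server rather than fire up Performance Monitor, a quick look at sys.dm_os_sys_memory (SQL Server 2008 and later) shows similar information; this supplements tracking the Available MBytes counter over time, it doesn't replace it:

-- Spot-check OS memory availability from inside SQL Server (SQL Server 2008 and later)
SELECT
    total_physical_memory_kb / 1024 AS [Total Physical Memory (MB)],
    available_physical_memory_kb / 1024 AS [Available Physical Memory (MB)],
    system_memory_state_desc AS [System Memory State]
FROM sys.dm_os_sys_memory;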


EDIT:

After posting this, some people asked me by DM on Twitter and by email about the MAXDOP calculator that was on the same blog site (http://blogs.msdn.com/b/sqlsakthi/p/maxdop-calculator-sqlserver.aspx).  I intentionally didn't blog about that one originally, because it doesn't make the same type of problematic recommendations as the 'max server memory' calculator.  If you plug in your processor information, it makes recommendations that have been in the support KBs for years.  However, newer processor architectures don't necessarily fit the recommendations made back in 2009 when the referenced KB article was written, and some of the changes recommended by the calculator have absolutely zero effect on the server.  For example, if you plug in 0 NUMA nodes and 4 CPU cores, it recommends a MAXDOP of 4, which is what the default of 0 essentially correlates to anyway, so what's the purpose in changing the default?  It also doesn't mention or take into account workload as a part of its recommendations, and a MAXDOP of 8 on a 32-way server that runs 1-2 data warehouse queries an hour, each chunking through large amounts of data, is a waste of the remaining resources on the server.  Just like 'max server memory', the correct value for MAXDOP is environment and workload specific.
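
For reference, viewing and changing MAXDOP is a simple sp_configure change; the value of 8 below is only an illustration, not a recommendation, since the right value depends on your hardware layout and, just as importantly, your workload:

-- View and change 'max degree of parallelism'; requires 'show advanced options'
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- view the current setting
EXEC sp_configure 'max degree of parallelism', 8;   -- example value only, not a recommendation
RECONFIGURE;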