Traditionally, questions about how much memory SQL Server needs were aimed at how to appropriately set the ‘max server memory’ sp_configure option, and in my book the recommendation I make is to reserve 1 GB of RAM for the OS, 1 GB for each 4 GB of RAM installed from 4–16 GB, and then 1 GB for every 8 GB of RAM installed above 16 GB. Then monitor the Memory\Available MBytes performance counter in Windows to determine whether you can increase the memory available to SQL Server above that starting value. (Note: this counter should remain above 150–300 MB at a bare minimum; Windows signals LowMemoryResourceNotification at 96 MB, so you want a buffer. I typically like it to be above 1 GB on larger servers with 256 GB or more of RAM.) This has typically worked out well for servers that are dedicated to SQL Server.

You can also get much more technical about where to set ‘max server memory’ by working out the specific memory requirements for the OS, other applications, the SQL Server thread stack, and other multipage allocators. Typically this would be:

(Total system memory) – (memory for thread stack) – (OS memory requirements, ~2–4 GB) – (memory for other applications) – (memory for multipage allocations: SQLCLR, linked servers, etc.)

where the memory for thread stack = (max worker threads) × (stack size), and the stack size is 512 KB on x86 systems, 2 MB on x64 systems, and 4 MB on IA64 systems. The value for ‘max worker threads’ can be found in the max_workers_count column of sys.dm_os_sys_info. However, the assumption with either of these methods is that you want SQL Server to use everything that is available on the machine, unless you’ve made reservations in the calculations for other applications.
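As a rough sketch, the starting-point reservation rule and the thread stack portion of the more detailed calculation can be expressed in T-SQL. The 128 GB value below is only an example; substitute your server’s installed RAM.

```sql
-- Sketch: starting point for 'max server memory' using the reservation
-- rule above (1 GB base, 1 GB per 4 GB from 4-16 GB, 1 GB per 8 GB above 16 GB).
-- @TotalGB is an example value; substitute your server's installed RAM.
DECLARE @TotalGB int = 128;
DECLARE @ReservedGB int =
      1                                                           -- base OS reservation
    + CASE WHEN @TotalGB >= 16 THEN 3                             -- 1 GB per 4 GB from 4-16 GB
           WHEN @TotalGB > 4  THEN (@TotalGB - 4) / 4
           ELSE 0 END
    + CASE WHEN @TotalGB > 16 THEN (@TotalGB - 16) / 8 ELSE 0 END; -- 1 GB per 8 GB above 16 GB
SELECT @ReservedGB            AS ReservedForOSGB,
       @TotalGB - @ReservedGB AS StartingMaxServerMemoryGB;
-- For 128 GB total this returns 18 GB reserved and a 110 GB starting value.

-- The thread stack portion of the detailed calculation:
-- (max worker threads) * (stack size), 2 MB per thread on x64.
SELECT max_workers_count,
       max_workers_count * 2 / 1024.0 AS ThreadStackGB_x64
FROM sys.dm_os_sys_info;
```

The first result is only a starting point; the whole point of monitoring Memory\Available MBytes afterwards is to adjust from there.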
As more shops move towards virtualizing SQL Servers in their environment, this question is more and more geared towards determining the minimum amount of memory that a SQL Server will need to run as a VM. Unfortunately there is no way to calculate what the ideal amount of RAM for a given instance of SQL Server might actually be, since SQL Server is designed to cache data in the buffer pool, and it will typically use as much memory as you can give it. One of the things to keep in mind when you are looking at reducing the RAM allocated to a SQL Server instance is that you will eventually get to a point where lower memory gets traded off for higher disk I/O access in the environment.
If you need to figure out the ideal configuration for SQL Server memory in an environment that has been overprovisioned, the best way to go about it is to start off with a baseline of the environment and its current performance metrics. Counters to begin monitoring would include:
- SQL Server:Buffer Manager\Page Life Expectancy
- SQL Server:Buffer Manager\Page reads/sec
- Physical Disk\Disk Reads/sec
Typically if the environment has excess memory for the buffer pool, the Page Life Expectancy value will continue to increase by a value of one every second and it won’t typically drop off under the workload because all of the data pages end up being cached. At the same time, the number of SQL Server:Buffer Manager\Page reads/sec will be low after the cache ramp up occurs which will also correspond to a low value for Physical Disk\Disk Reads/sec.
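The two SQL Server counters can also be sampled from inside the instance, which is handy for building the baseline without touching PerfMon (the Physical Disk counters are Windows-only and still need PerfMon). A sketch:

```sql
-- Sketch: capture the SQL Server-side baseline counters from
-- sys.dm_os_performance_counters. The object_name prefix differs for
-- named instances (MSSQL$InstanceName), hence the LIKE.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Page reads/sec')
ORDER BY counter_name;
-- Note: 'Page reads/sec' is stored as a cumulative value; sample it
-- twice and take the difference over the interval to get a rate.
```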
Once you have your baseline for the environment, make a change to the sp_configure ‘max server memory’ option to reduce the size of the buffer pool by 1 GB, and then monitor the impact to the performance counters after things stabilize from the initial cache flushing that may occur when RECONFIGURE is run. If the level of Page Life Expectancy remains acceptable for your environment (keeping in mind that a fixed target of >= 300 is ridiculous for servers with large amounts of RAM installed), and the number of SQL Server:Buffer Manager\Page reads/sec is within what the disk I/O subsystem can support without performance degradation, repeat the process of reducing the sp_configure value for ‘max server memory’ by 1 GB and continue to monitor the impact to the environment.
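One iteration of that reduction step looks like the following; the 14336 MB starting value is purely an example, so substitute your current setting.

```sql
-- Sketch of one iteration of the reduction process described above.
-- 'max server memory' is configured in MB; 14336 is an example value,
-- substitute your instance's current setting.
DECLARE @NewMaxMB int = 14336 - 1024;   -- reduce by 1 GB
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', @NewMaxMB;
RECONFIGURE;
```

Then go back to monitoring the counters above until they stabilize before deciding whether to repeat.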
41 thoughts on “How much memory does my SQL Server actually need?”
You have helped me a lot to understand memory-related architecture within SQL Server and I really appreciate that. I have read many of your articles on the subject.
One quick question – please confirm if you can guide me to some great pointers (if they’re written by you, then great) which will help me understand the memory architecture changes in SQL Server 2012.
Thanks in advance, and once again, you have helped me a great deal.
This is an interesting twist on configuring SQL memory. We rarely try to figure how little it will run on. Cool stuff.
If 32-bit SQL Server runs on a 64-bit OS on x64 hardware, what’s the size of the thread stack, 512 KB or 2 MB?
If 32-bit SQL Server runs on a 32-bit OS on x64 hardware, what’s the size of the thread stack, 512 KB or 2 MB?
I’ve noticed that the memory usage on my web site http://www.figurinesanimaux.com which runs SQL Server 2008 on a 64 bit Windows 2008 R2 seems to use much more memory than it did on a 2012 virtual server (which I tested on the cloud). Thanks for this post as it might help me refine this issue!
Is there any significance to the min server memory setting? At present we have the default max and min server memory settings, which are 21 GB and 0 GB respectively. Since we have only 12 GB of physical RAM, we are reconfiguring max server memory to 9 GB. What should the min server memory setting be for SQL Server 2008 R2 64-bit Enterprise Edition? Thanking you in advance.
If the server is dedicated to SQL Server and has ‘max server memory’ set to prevent memory pressure, there is no benefit from setting ‘min server memory’ equal to ‘max server memory’ and at the same time there is not really a problem with doing it either. The ‘min server memory’ setting only takes effect after the buffer pool has ramped up and when Windows signals memory pressure and SQLOS responds by shrinking the buffer pool memory usage. When ‘min server memory’ has a value other than 0, that sets the limit to how far down SQLOS will shrink under pressure, which is not likely to happen on a dedicated box for SQL Server. Unless there are multiple instances installed or it is a VM that might get ballooned, my general recommendation is to leave ‘min server memory’ set to the default of 0 because it won’t generally matter.
Hmm, I’m not sure this really works for enterprise servers. Whilst I think this isn’t bad for servers with up to 64 GB, we’re progressively pushing memory up to 256 GB for our production servers, and I’m not convinced I’d need 35 GB for the O/S. Interestingly, I have SQL memory pegged at around 220 GB, yet I have constant free memory showing at around 22 GB (Windows 2012 cluster with SQL 2008), but I will be changing this in due course, probably leaving the O/S with about 12 GB.
I’ve updated the first paragraph with the rest of the recommendation from my book, about monitoring Memory\Available MBytes to determine what is left that could be allocated to SQL Server without getting into memory pressure.
I need to collect and analyze data related to CPU, memory, and disk utilization for SQL Server 2008 R2 as part of a DC consolidation project. What is the best method to perform this task, and how do I analyze the currently existing configuration?
I usually leverage PerfMon to collect information about these metrics and combine that with information collected from DMVs to make decisions like this.
The IT staff set up SQL Server 2008 R2 on a virtual machine, allocating it 6 GB. When about 10 users get on with data mining and other data entry, it slows down and can cause the front end to freeze for seconds.
The IT staff point to thread usage under 30% and the memory graph only using 4 GB.
Some say these graphs can’t really be trusted in a VM environment.
What would be the best way to approach to move to a solution?
You would have to do root cause analysis of the problem occurring and I would start with analyzing the wait types for the sessions running on the server. It could be that the data mining is reading large amounts of data and the I/O subsystem is slow and the memory size is too small for the concurrent workload demands, so you have huge buffer pool flushing, or any other of a number of bottlenecks or issues occurring.
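A sketch of the starting point I mean by analyzing wait types: look at what the active sessions are currently waiting on. (The session_id > 50 filter is a common rough way to skip system sessions.)

```sql
-- Sketch: what are active requests waiting on right now? A starting
-- point for root cause analysis of a slow/freezing workload.
SELECT r.session_id, r.status, r.wait_type, r.wait_time,
       r.last_wait_type, r.total_elapsed_time, t.[text]
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;   -- rough filter to skip system sessions
```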
You said “1 GB for every 8 GB RAM installed above 16 GB RAM”. So what about above 32 GB? For example, for a server with 64 GB of RAM (as you mentioned in your ebook), why is the value 32 GB? I calculated the max server memory for SQL as 54 GB!!!
“…if the server only has 4 GB of physical memory (RAM) installed, that’s going to be the upper limit for SQL Server. If a server has 64 GB of RAM, and the Windows operating system is Windows Server 2008 64-bit Standard Edition, then the OS limit is 32 GB and that is the most that could be used by SQL Server.”
This is talking about the limitation of Windows Server 2008 64-bit Standard Edition, which can only address 32GB of RAM, so it is an actual OS limitation that makes it impossible to see all 64GB of RAM. It has nothing to do with the calculation, it is just pointing out that the limitation is Windows Server 2008 64-bit Standard Edition. You can see this in the Windows Server Memory Limits BOL topic (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx#physical_memory_limits_windows_server_2008)
I want to know: does SQL Server 2008 R2 Express use 1 GB of RAM per instance, or 1 GB in total? Does the number of instances matter, or will it use only 1 GB of RAM no matter how many instances you have?
For example, if I have 3 instances, will they use 1 GB of RAM overall, or 1 GB of RAM per instance?
I have a VMware SQL Server 2008 R2 server dedicated to SSIS and SSRS. We were experiencing issues with PLE over the past few weeks and have raised the memory on the server from 8 GB to 32 GB. Previously the max server memory was set to the default.
I have initially set the max server memory setting to 28 GB, but as this server is only used for SSIS and SSRS, I don’t think this is correct.
I can currently see SQL server has been allocated 28GB but is only using 533MB. Does this mean that the DTSExec process has access to the remaining 27.5GB or does SQL Server not allow other processes to access the memory? Or is it available to other processes if SQL Server does not need it?
You would have to look at VMware to see if the VM is actually getting the memory allocated to it, or if the VM is being Ballooned by the hypervisor due to memory pressure on the host if it is overcommitted for memory. If you have Lock Pages in Memory set for the VM, Task Manager won’t show all the memory usage for SQL Server. You would have to look at the SQL Server:Memory Manager\Total Server Memory counter to see the locked page allocations for the SQL Server process.
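A sketch of checking those memory manager counters from inside the instance, instead of relying on Task Manager:

```sql
-- Sketch: Total vs. Target Server Memory from the memory manager
-- counters; this includes locked-page allocations that Task Manager
-- does not show when Lock Pages in Memory is enabled.
SELECT counter_name, cntr_value / 1024 AS MB
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');
```

If Total is well below Target, the instance either hasn’t ramped up yet or isn’t being given the memory it wants.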
Ok – thanks for taking the time reply and for the tips
Jonathan, in our production database server (SQL Server 2008 R2), the SQL Server instance is always occupying 80% to 90% of the memory. If we restart the instance/server, the memory is released, but over a period of a few hours it again reaches 90% of the memory. In Server Properties, min and max server memory are at their defaults.
How to release the Memory occupied by the sql server instance?
Thanks in advance.
Thanks & regards,
This is normal behavior for SQL Server. The data cache and plan cache will ramp up memory usage and not release it unless there is memory pressure on the server. This is intended behavior and you shouldn’t expect SQL Server to release the memory allocated to the buffer pool under normal circumstances.
Ya John, this is SQL Server’s normal behavior. Even I came across the same issue, but after restricting max and min memory, my SQL Server’s memory utilization came down to <70%. Not sure, but it might be because the buffer pool released some memory when I made the changes.
Yes, changes to max server memory take effect immediately and will shrink the buffer pool if it is reduced.
Is it possible to limit the memory allocated to the buffer pool?
I have a client with an application that needs 5% of the server’s memory, but SQL Server is using 80% in Task Manager. With Terminal Server users on top of that, memory is always at 99% and the program returns an error.
Yes, by setting max server memory you limit the buffer pool.
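For example (the 4096 MB cap here is just an illustrative value):

```sql
-- Example: cap the buffer pool at 4 GB ('max server memory' is in MB).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 4096;
RECONFIGURE;
```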
I just read your eBook and was happy to see you covered most of the basics of troubleshooting. I think the book is well laid out. I agree with your max RAM suggestions as a good starting base, tuning SQL Server up or down from there. Always looking for deeper material though, I was hoping for some subject matter on packet size and the impact of using jumbo frames, client SNAC version issues (effects of clients connecting with prior versions of SNAC or MDAC), as well as some subject matter on isolation modes and the effects of RCSI, perhaps in the next edition?
Hey Jonathan, we are having the same problem Vijay mentioned: when our maintenance plan runs DBCC CHECKDB at 10:30, memory peaks at 95% and doesn’t get released. Is that OK and nothing to worry about? We are running SQL 2012.
I have SQL Server using 93–95% of RAM. The current max server memory setting for SQL Server is 43 GB and total RAM is 48 GB.
In Task Manager it’s showing 46.5 GB for sqlservr.exe.
1. I don’t know which process/query is taking more memory.
2. Do we need to add more RAM?
3. Which database is using more memory and CPU?
I have the same issue as Vijay Ahire, and so far I don’t know how to solve this problem. Thanks in advance to anyone who can help with this.
Max Server Memory only controls buffer pool size, it doesn’t account for all memory usage, for example thread call stacks (2MB per thread on x64), SQLCLR memory usage, OLE Automation memory usage, VAS allocations outside the buffer pool, all add to the total memory used by SQL Server.
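A sketch of seeing where that memory is actually going, by memory clerk:

```sql
-- Sketch: which memory clerks (buffer pool and otherwise) hold the
-- most memory. On SQL Server 2012+ the column is pages_kb; on 2008 R2
-- and earlier use single_pages_kb + multi_pages_kb instead.
SELECT TOP (10) [type], SUM(pages_kb) / 1024 AS MB
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY MB DESC;
```

MEMORYCLERK_SQLBUFFERPOOL is what ‘max server memory’ governs (prior to SQL Server 2012); the other clerks are the allocations outside it.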
Jonathan, I need a little clarification on this statement. “…reserve 1 GB of RAM for the OS, 1 GB for each 4 GB of RAM installed from 4–16 GB, and then 1 GB for every 8 GB RAM installed above 16 GB RAM.” In this case assume, 64bit Windows and 64bit SQL Server. If I have a server with 128 GB physical RAM, using your suggestion, I would reserve 19GB of RAM as OS overhead? Thus leaving 109 GB of RAM to use for SQL Servers Max. Server Memory?
It would be 18GB with MaxServerMemory set at 110GB as a starter.
Thank you for the post.
I have 512 GB of total memory, Lock Pages in Memory enabled, and the max memory limit set to 480 GB. Could you please clarify a few things for me:
i) Would the Resource Monitor ever notify the buffer pool to shrink pages if there is external memory pressure? I would think it would not, since I have Lock Pages in Memory turned on?
ii) Is there a formula for how the Resource Monitor calculates and “thinks” there is external memory pressure?
Windows has a MemoryResourceNotification that SQLOS subscribes to for notification status changes with its internal ResourceMonitor thread. If the LowMemoryNotification gets set by Windows, SQLOS will internally respond to that notification by shrinking its memory usage, even with Lock Pages in Memory enabled. All LPIM does is prevent Windows from paging out the buffer pool memory; it doesn’t stop SQLOS from responding to memory pressure the way it is designed to.
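You can see the notifications SQLOS has observed in the RESOURCE_MONITOR ring buffer; a sketch:

```sql
-- Sketch: inspect the RESOURCE_MONITOR ring buffer to see the
-- low/high memory notifications SQLOS has received from Windows.
SELECT TOP (10) rb.[timestamp], CAST(rb.record AS xml) AS record
FROM sys.dm_os_ring_buffers AS rb
WHERE rb.ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
ORDER BY rb.[timestamp] DESC;
```

The record XML shows the indicator type (process vs. system) and the memory state at the time of each notification.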
As a SQL Server developer for a small company, I’ve just been sort of thrown into the DBA role. We have an SSIS package that has recently gone from taking 30 minutes to run to 4 hours. One of the things I noticed is that SQL Server seems to be eating up 87% of the memory and not releasing it until I restart the services. We have 16 GB of RAM and max server memory set to 12288 MB.
Do you think the SQL server memory hogging is behind the slowness of the package?
If so is there anything else we can do to throttle SQL Server memory back like setting Max Server Memory lower?
If not what are the best tools to use to track the cause of the performance issue?
And lastly, since this new role for me looks to be permanent can you recommend any courses or tutorials or books for the beginner DBA?
It is completely normal and expected for SQL Server to use the memory on the server and not release it back to the OS; that’s exactly what it is supposed to do, and it is highly unlikely that SQL Server’s memory usage has anything to do with the slower performance of the SSIS package. Without seeing it, my first guess would be that the volume of data being processed has increased and you need to do performance tuning work based on what the SSIS package is doing when it runs. If you are looking for a book, I’d recommend starting with the SimpleTalk troubleshooting book I wrote years ago (https://www.red-gate.com/library/troubleshooting-sql-server-a-guide-for-accidental-dbas); it’s free to download as an ebook in PDF format. For classes, I’d recommend our Immersion Event for the Accidental/Junior DBA (https://www.sqlskills.com/sql-server-training/ie0/), which we will be teaching in Chicago in April (https://www.sqlskills.com/sql-server-training/chicago-ie0-20160425/). I’d also recommend reading the month-long series of posts we wrote (https://www.sqlskills.com/help/accidental-dba/).
If max server memory is set and the OS or other processes require more memory than what is available, will the OS resort to paging, or will SQLOS release memory below the max level? I am clear on the min memory setting, where SQLOS will not drop below it, but I am not so sure about the max level.
Yes, the Resource Monitor internally in SQLOS will respond to the OS level memory pressure and attempt to back off memory, even if Lock Pages in Memory is set. The only time the buffer pool can’t grow or shrink in SQLOS to respond to OS memory conditions is when the instance is configured to use Large Pages with a trace flag and LPIM on Enterprise Edition for the buffer pool. Under Large Pages, the buffer pool is sized and grown immediately on instance startup and is static/non-dynamic from that point forward.
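A sketch of checking whether locked pages or large pages are actually in play on an instance:

```sql
-- Sketch: locked-page and large-page allocations for the instance.
SELECT physical_memory_in_use_kb / 1024  AS PhysicalMB,
       locked_page_allocations_kb / 1024 AS LockedPagesMB,
       large_page_allocations_kb / 1024  AS LargePagesMB
FROM sys.dm_os_process_memory;
```

A non-zero LargePagesMB with LPIM and the large pages trace flag is the static, non-shrinkable configuration described above.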
I have a SQL Server with 56 GB of RAM; max memory for SQL is set to 50 GB, min is 0. SQL uses 47 GB, but when a full backup runs on a 5 TB database it fails, and AppDynamics (our monitoring tool) also crashes during the same time interval; if we run the full backup manually, it completes. Memory utilization is always 95–97%. Should we lower max memory to 45–46 GB and give more to the OS? Will that cause performance issues? PLE is above 700 and the cache hit ratio is above 90%.
SQL Server’s tendency is to use all the memory it’s given, but out of 50 GB it is using 47 GB; does that mean there is OS memory pressure and that’s why SQL is releasing some memory?
The value you have set for max server memory is higher than I would recommend. As far as what causes the backup to fail, I can’t say based just on the information you’ve provided; there isn’t an error message or enough detail to even make a guess.