(The Curious Case of… used to be part of our bi-weekly newsletter, but we decided to make it a regular blog post instead so it can sometimes be more frequent. It covers something interesting one of us encountered while working with a client, doing some testing, or answering a random question from the community.)
As part of our remote DBA service, we constantly monitor a client’s servers for anomalous behavior, and last week Tim noticed a sudden latency and memory usage increase at one of the clients he’s responsible for. He reached out to their in-house DBA and asked about any recent changes. The DBA quickly replied that they weren’t aware of any changes; however, they’d been struggling with that server because of reports of things running slow and overall poor response times. Furthermore, the developers and application team all stated that no new code had been deployed.
As they are one of our Remote DBA customers, Tim had been providing them with monthly reports that include system response times and other baseline information. He quickly reviewed the previous months’ numbers with them, showing how much better the system had been performing – so *something* had changed.
Now Tim knew this was a virtual machine, and in situations like this, it’s common for there to have been a change at the VM level instead of a change at the SQL Server level. On reaching out to the VM admin to see if the VM had been moved to another host, Tim found that sure enough, there had been an event of some kind and this larger SQL Server VM had been moved to a very busy host that was struggling to keep up.
The VM admin quickly moved the VM to a better-suited host and performance went back to normal – simple solution. Later on, Tim found out the VM admin had been doing maintenance on the VM infrastructure over the weekend and had simply forgotten to move the SQL Server VM back to its normal environment!
Takeaway: if you have SQL Server running in a VM and it suffers a sudden, catastrophic performance degradation, check whether something has changed at the VM level before spending a lot of time troubleshooting at the SQL Server level.
PS: I had a question about the increased memory usage. The thinking is that things were so much slower that connections were staying active for longer, so many more concurrent connections = many more threads in use and memory grants held for longer = more memory being used, and a feedback loop that made it worse and worse compared to the normal connection load.
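The arithmetic behind that feedback loop is essentially Little’s Law: average concurrent connections = arrival rate × average duration, so if each request takes four times longer, the server holds roughly four times the connections, threads, and outstanding memory grants for the same incoming workload. A minimal sketch, with entirely hypothetical numbers (the per-thread and per-grant figures below are illustrative assumptions, not measurements from this client):

```python
def concurrent_connections(arrivals_per_sec, avg_duration_ms):
    """Little's Law: L = lambda * W (duration converted from ms to seconds)."""
    return arrivals_per_sec * avg_duration_ms / 1000

THREAD_STACK_MB = 2  # assumed per-worker-thread stack reservation (illustrative)
GRANT_MB = 8         # assumed average memory grant per active query (illustrative)

def memory_in_use_mb(connections):
    """Memory tied up by threads and held grants for the active connections."""
    return connections * (THREAD_STACK_MB + GRANT_MB)

normal = concurrent_connections(100, 50)   # 100 req/s at 50 ms each -> 5 concurrent
slow   = concurrent_connections(100, 200)  # same load at 200 ms each -> 20 concurrent

print(memory_in_use_mb(normal))  # 50.0 MB
print(memory_in_use_mb(slow))    # 200.0 MB
```

Same arrival rate, 4x the latency, 4x the memory tied up – and in practice the extra pressure slows things further, which is the feedback loop.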
2 thoughts on “The Curious Case of… a sudden latency and memory usage increase”
If this is running in a Hyper-V environment, monitor the CPU dispatch nanoseconds perf counter at the host level. Anything above 50µs generally means latency for VMs that are requesting more CPU resources.
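The check the commenter describes could be sketched like this, assuming you’ve already sampled the host-level per-dispatch CPU wait counter (reported in nanoseconds, e.g. via perfmon on the Hyper-V host) into a list – the function name and sample values here are hypothetical:

```python
THRESHOLD_NS = 50_000  # 50 microseconds, the rule of thumb from the comment above

def cpu_dispatch_pressure(samples_ns):
    """Return True if the average per-dispatch wait suggests the host is too
    busy to schedule the guests' virtual processors promptly."""
    return sum(samples_ns) / len(samples_ns) > THRESHOLD_NS

print(cpu_dispatch_pressure([12_000, 18_000, 15_000]))   # comfortable host
print(cpu_dispatch_pressure([80_000, 120_000, 95_000]))  # CPU contention
```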
Another takeaway is that when you are told nothing has changed, don’t take that at face value. Dig a little deeper, albeit politely :)