It’s well known that one of the common performance issues that can affect tempdb is allocation bitmap contention. I discuss this, and ways to alleviate it, in these posts:

The current best advice around adding tempdb data files is enshrined in KB article 2154845. If you’re seeing tempdb allocation contention (see top blog post link above), then:

  • If your server has 8 or fewer logical cores (e.g. a one-CPU server with 4 physical cores and hyperthreading enabled has 8 logical cores), use # tempdb data files = # logical cores, equally sized
  • If your server has more than 8 logical cores, start with 8 tempdb data files, and add sets of four at a time, equally sized, until the contention is alleviated (the sketch below shows a quick way to check your current core and file counts)
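As a starting point, you can compare the logical core count SQL Server sees with the tempdb data files you already have. Here's a minimal sketch of the two checks (the * 8 / 1024 math just converts the page count to MB, assuming the usual 8KB pages):

-- How many logical cores SQL Server sees
SELECT cpu_count AS [LogicalCores]
FROM sys.dm_os_sys_info;

-- The existing tempdb data files and their current sizes (size is a count of 8KB pages)
SELECT name, size * 8 / 1024 AS [SizeMB]
FROM tempdb.sys.database_files
WHERE type_desc = N'ROWS';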

There are three problems that people often face when adding tempdb data files: matching the size of existing files that are still growing, finding that adding a file doesn't help with contention, and adding too many files.

Matching Existing File Sizes

This problem occurs when the existing tempdb data files are growing, and people find it hard to create additional files that match the size of the existing files.

There’s an easy method for doing this: don’t!

Don’t try to match the size of existing, growing files. Create the new files to be a bit larger than the existing files, then go back and increase the size of the existing files to match the size of the new files.

For example, say I have 4 tempdb data files sized at 6GB each, and they're growing by 512MB every few minutes because of an ad hoc workload. If I decide to add 4 more files, I might add the four new files at 10GB each, and then go back and run ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'DataFileX', SIZE = 10GB) for each of the 4 existing files. Problem solved.
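The whole operation might look something like this (a sketch, with hypothetical logical file names DataFile1 through DataFile8 and a hypothetical path):

-- Add four new data files, already sized at 10GB (names and path are hypothetical)
ALTER DATABASE [tempdb] ADD FILE (NAME = N'DataFile5', FILENAME = N'T:\TempDB\DataFile5.ndf', SIZE = 10GB);
ALTER DATABASE [tempdb] ADD FILE (NAME = N'DataFile6', FILENAME = N'T:\TempDB\DataFile6.ndf', SIZE = 10GB);
ALTER DATABASE [tempdb] ADD FILE (NAME = N'DataFile7', FILENAME = N'T:\TempDB\DataFile7.ndf', SIZE = 10GB);
ALTER DATABASE [tempdb] ADD FILE (NAME = N'DataFile8', FILENAME = N'T:\TempDB\DataFile8.ndf', SIZE = 10GB);

-- Then grow the four existing files up to match the new ones
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'DataFile1', SIZE = 10GB);
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'DataFile2', SIZE = 10GB);
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'DataFile3', SIZE = 10GB);
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'DataFile4', SIZE = 10GB);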

But also see the final section below: if the only reason for the extra files is to alleviate tempdb allocation contention, you may want to limit the total amount of space taken up by all your tempdb files.

Additionally, if you have one full data file, you may find that…

Adding a File Doesn’t Help

This is very frustrating when it happens to people because it gives the impression that adding tempdb data files does not help with allocation contention. However, there is a simple explanation for this phenomenon.

Consider the case where there is one tempdb data file. Obviously all the allocations have to come from that data file and with the right workload, allocation bitmap contention will result. After the server has been up for a while, and the workload has been running and using tempdb for a while, the single tempdb data file may become quite full.

Now let’s say that you decide to add one more tempdb data file. What happens to the allocations?

Allocation uses two algorithms: round-robin and proportional fill. It will try to allocate from each file in the filegroup in turn, but will allocate proportionally more frequently from files that have proportionally more free space than others in the filegroup.

In the case where one file is very full and the other file is very empty, the vast majority of the allocations will be from the new, empty file. This means that almost all the contention moves from the initially existing tempdb data file to the new one, without much alleviation of the overall contention.
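You can see that skew for yourself by checking how much free space each tempdb data file has; a minimal sketch using the file space DMV (counts are in 8KB pages):

USE [tempdb];
GO

-- Free space per tempdb data file; a near-empty new file next to a full old one
-- is exactly the skew that proportional fill reacts to
SELECT
    file_id,
    total_page_count * 8 / 1024              AS [TotalMB],
    unallocated_extent_page_count * 8 / 1024 AS [FreeMB]
FROM sys.dm_db_file_space_usage;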

If this happens to you, try adding some more data files so that the allocation system has multiple files that it will allocate from, spreading the contention over those files and leading to an overall drop in contention and increase in transaction throughput.
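To confirm that the contention really is on the allocation bitmaps, and to see which file it's currently concentrated in, you can peek at the waiting tasks; something like this sketch (tempdb is database ID 2, and resource_description is in dbid:fileid:pageid form):

-- Latch waits on tempdb pages; page IDs 1 (PFS), 2 (GAM), and 3 (SGAM),
-- plus their repeats higher up in each file, are the allocation bitmaps
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGELATCH%'
  AND resource_description LIKE N'2:%';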

But beware of immediately…

Adding Too Many Data Files

This is the case where tempdb allocation contention is a problem and people immediately add a large number of additional files where fewer files would work just as well. The problem here is that additional disk space is used up for no real gain, which may or may not be significant in your environment, depending on the size of the files added.

Let’s do an experiment. Below is a screen shot of PerfMon measuring transactions per second in tempdb for a contrived workload that has 100 connections all repeatedly creating and truncating temp tables. It’s running on my laptop (8 logical cores) using SQL Server 2014 RTM CU3.
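Each of those 100 connections just runs a tight loop along these lines (a simplified sketch of the kind of batch I mean, not the actual test harness):

-- Allocation-heavy, data-light: create a small temp table, touch it, and throw it away
WHILE 1 = 1
BEGIN
    CREATE TABLE #t (c1 INT, c2 CHAR (100));
    INSERT INTO #t VALUES (1, 'a');
    TRUNCATE TABLE #t;
    DROP TABLE #t;
END;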

[perfmon1: PerfMon trace of tempdb Transactions/sec during the three-part test]

For the first third of the trace, there’s a single data file. For the middle third of the trace, there are two equally-sized files. For the final third of the trace, there are 8 equally-sized files.

Clearly there isn’t a big performance boost from having the additional 6 data files in the final third, but what’s the sweet spot?

Ideally you’d experiment with varying numbers of tempdb data files to find the sweet spot for your workload. However, that’s easier said than done, especially when you’re trying to standardize a tempdb configuration across multiple servers.

Here’s an example of a slightly different workload running under the same conditions on my laptop.

[perfmon2: PerfMon trace of tempdb Transactions/sec as data files are added]

It starts with a single data file, then 2, 4, 6, and 8 (pausing PerfMon between each file addition). In this case, it's clearly worth going to 8 data files. But would I make them all the same size as the initial data file?

No, not if the only reason I need the extra files is to alleviate the allocation bitmap contention. I’d lower the size of all the tempdb files, including the initial one, so I’m not taking up a huge amount of extra disk space for these files.
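One way to dial the sizes back down is DBCC SHRINKFILE with a target size in MB, run against each tempdb data file in turn; a sketch, using a hypothetical logical name and an arbitrary 2GB target (shrinking tempdb files doesn't always succeed while the workload is running, and a restart re-creates tempdb at the configured initial sizes anyway):

USE [tempdb];
GO

-- Shrink one data file to a 2GB (2048MB) target; repeat for each file
DBCC SHRINKFILE (N'DataFile1', 2048);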

Just be aware that sometimes you don’t need to go all out and add a whole bunch of extra tempdb data files to get a performance boost.

Summary

The easiest way to alleviate tempdb allocation contention is to enable trace flag 1118 and to add more tempdb data files. Just be careful that you add the right number of files to help with the contention, that you make all the files the same size, and that you take into account the total size of all the data files you've created, possibly dialing them all down a bit.
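For completeness, trace flag 1118 can be turned on for the running instance with DBCC TRACEON, and made permanent with a -T1118 startup parameter (set through SQL Server Configuration Manager); a quick sketch:

-- Enable trace flag 1118 globally until the next restart
DBCC TRACEON (1118, -1);

-- Verify it's enabled
DBCC TRACESTATUS (1118);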

Enjoy!