Survey: how is your tempdb configured?

In this week's survey, I want to know how you've got tempdb configured compared to the number of processor cores SQL Server thinks it has. I'll correlate, analyze, and present the results, just like the log file survey I did last year, where I got results for 17,000 databases.

The code I'd like you to run is as follows:

-- Count the processor cores SQL Server can use (visible online schedulers)
-- and the number of tempdb data files (type_desc = 'ROWS' excludes the log file).
SELECT os.Cores, df.Files
FROM
   (SELECT COUNT(*) AS Cores FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE') AS os,
   (SELECT COUNT(*) AS Files FROM tempdb.sys.database_files WHERE type_desc = 'ROWS') AS df;
GO

It works on SQL Server 2005 onwards (thanks to Robert Davis (blog|twitter) for doing a quick 2005 test for me).

I'll take as many results as you can be bothered gathering for me – anywhere from 1 to 100s!

To help you out, Eric Humphrey (blog|twitter), who's in our class this week, put together a PowerShell script to help you run the code against multiple servers. See here – very cool!

Either add your results as a comment or drop me a mail with the results. Feel free to add in any explanation you want as to the reasoning for your setup. Everything will remain anonymous as always.
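
If it helps with that explanation, here's a supplementary sketch (not part of the survey query above) that lists each tempdb data file with its size and autogrowth setting, converting from the 8KB-page units that sys.database_files uses:

-- Supplementary sketch: list the individual tempdb data files behind the Files count.
SELECT
    name,
    size * 8 / 1024 AS SizeMB,
    CASE WHEN is_percent_growth = 1
        THEN CAST(growth AS varchar(10)) + '%'
        ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
    END AS Growth
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
GO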

I'll report on the results in a week or two.

[Edit: the survey is now closed.]

Thanks!

72 thoughts on “Survey: how is your tempdb configured?”

  1. All our servers are configured to use one file per core, up to a maximum of 10 files. These files are on the same file group, though (10 different disks for tempdb would be nice, but… :-))

  2. Production servers (2005):
    8 cores, 8 files on one
    8 cores, 1 file on another

    Testing/Training servers (2008 R2):
    4 cores, 1 file on each of 2 servers

  3. We have about 50 servers in our production site. In each case, the number of files equals the number of cores; the values range from 4/4 up to 48/48.

  4. 16|1 – 55GB (50GB used) TEMPDB – NO PAGELATCH OR PAGEIOLATCH
    16|1 – 300GB (only 5GB used) TEMPDB – NO PAGELATCH OR PAGEIOLATCH

    THE WAITS have been tested with this script (http://www.sqlservercentral.com/blogs/robert_davis/archive/2010/03/05/Breaking-Down-TempDB-Contention.aspx):
    -- Classify current tempdb (database_id 2) page-latch waits by whether the waited-on page is a PFS, GAM, or SGAM page.
    Select session_id, wait_type, wait_duration_ms, blocking_session_id, resource_description,
        ResourceType =
            Case When (Cast(Right(resource_description, Len(resource_description) - Charindex(':', resource_description, 3)) As Int) - 1) % 8088 = 0 Then 'Is PFS Page'
                 When (Cast(Right(resource_description, Len(resource_description) - Charindex(':', resource_description, 3)) As Int) - 2) % 511232 = 0 Then 'Is GAM Page'
                 When (Cast(Right(resource_description, Len(resource_description) - Charindex(':', resource_description, 3)) As Int) - 3) % 511232 = 0 Then 'Is SGAM Page'
                 Else 'Is Not PFS, GAM, or SGAM page'
            End
    From sys.dm_os_waiting_tasks
    Where wait_type Like 'PAGE%LATCH_%'
        And resource_description Like '2:%';

  5. Cores Files
    4 1
    4 1
    8 4
    16 4
    24 4
    24 4
    1 1
    1 1
    2 2
    12 4
    2 1
    24 4
    2 1
    1 1
    1 1
    1 1
    8 1
    2 1
    4 1
    4 2
    4 2
    1 1
    1 1
    4 1
    4 1
    1 1
    1 1
    2 1
    2 1
    4 1
    2 2
    2 1

  6. Thanks to the power of Central Management Servers:
    Cores Files
    8 1
    4 1
    4 1
    16 1
    16 1
    8 1
    16 1
    16 1
    16 1
    16 1
    8 1
    16 16
    16 1
    4 4
    8 1
    2 1
    8 1
    8 1
    8 1
    4 2
    4 1
    4 1
    4 1
    1 1
    4 1
    16 16
    4 1
    4 1
    4 1
    4 1
    4 1
    8 1
    8 8
    8 9
    8 1
    4 1
    1 1
    2 1
    8 8
    4 1

  7. Instance 1 – 24 cores, 8 files
    Instance 2 – 4 cores, 1 file
    Instance 3 – 4 cores, 1 file
    Instance 4 – 4 cores, 1 file

  8. Cores Files
    32 1
    16 1
    32 1
    2 1
    32 1
    32 1
    8 1
    8 1
    4 1
    32 1
    8 1
    2 1
    6 4

    Interestingly enough, the only server we have with more than one file got that configuration from a Microsoft recommendation during a performance investigation, but we couldn't prove it was necessary. Adding the additional files made no difference, and of course the PAGELATCH wait counts for tempdb did not change (they weren't bad to begin with).

    I have checked all the other servers and we don't have PAGELATCH waits for tempdb, but I am currently doing some further assessment.
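
    For reference, adding a tempdb data file is just an ALTER DATABASE along these lines (the logical name, path, and sizes below are placeholders, not what we actually used):

    -- Example only: add one more tempdb data file; name, path, and sizes are placeholders.
    ALTER DATABASE tempdb
    ADD FILE
    (
        NAME = N'tempdev2',                    -- logical file name (placeholder)
        FILENAME = N'T:\TempDB\tempdev2.ndf',  -- physical path (placeholder)
        SIZE = 4GB,
        FILEGROWTH = 512MB
    );
    GO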

  9. Cores Files
    24 1
    24 4
    24 4
    24 4
    24 4

    These, perhaps perverse, results are due to migrating instances from dual-processor, quad-core servers (hence, 4 files using a 2:1 core:file ratio) to dual-processor, hexa-core hyperthreaded servers. The first example, with a single file, is a work-in-progress.

  10. For SQL Server 2000 (8.0) it's one file per core.
    For SQL Server 2005 (9.0) and above it varies, typically between 2 and 6 files per 8 cores, configured after checking for allocation contention.

  11. 8 cores, 8 files throughout the live estate.
    No hyperthreading.
    Moving these to their own drives (tempdb and log - RAID 10).
    Read-only subscribers being tested soon with local server-spec SSD for tempdb and log.

  12. Cores Files
    ----- -----
    16 8
    16 8
    16 1
    16 1
    16 8
    16 1
    16 1
    16 8
    16 8
    16 8
    8 8
    8 1
    16 8
    16 4
    8 1
    4 1
    8 1
    4 1
    8 8
    4 4
    8 1
    8 8
    8 8
    8 1
    16 8
    16 8
    16 8
    16 3
    4 1
    1 1
    24 8
    24 8
    16 8
    16 8
    4 1
    16 8
    8 1
    16 1
    16 1
    16 1
    8 1
    8 1
    8 1

  13. Prod: 4 and 1. (We had two files up until a week ago but found better performance with 1; we tried 4 as well, but it didn't help.)
    Dev: 8 and 1

  14. 16/16 <-- hundreds of systems set up this way

    We weren't aware that the best practice was no longer one data file per core, so we deployed them all with the old guidance. We haven't found any tempdb contention with this configuration, so we haven't made any changes.
