Understanding TempDB, table variables v. temp tables and Improving throughput for TempDB

This is a much needed and much overdue blog entry… In 8 Steps to Better Transaction Log throughput, I mentioned a customer who was helped by fixing TWO typical optimization problems I see. In that blog entry, I said I would write two blog entries – that one on transaction log optimization and another on common tempdb optimizations. Well, I forgot…until I was reminded with an email this morning (thanks Marcus!).

First – a bit of understanding of TempDB – what goes there?

  • Internal temporary objects needed by SQL Server in the midst of other complex operations. For example, worktables created by a hash aggregate, or interim tables used in hash joins, will be stored in TempDB (almost anything that shows as "hash" something in your query plan output is likely to go to tempdb).
  • User objects created with either # (local temporary objects), ## (global temporary objects) or @ (table variables)
    • # = Local temporary object
      Local temp objects are accessible ONLY in the session that created them. They are removed automatically when that session ends (unless manually dropped).
    • ## = Global temporary object
      Global temporary objects are accessible to ANYONE who can log in to your SQL Server. They persist only as long as the session that created them (unless manually dropped), but anyone who logs in during that time can directly query, modify or drop them. They are removed automatically when the creating session ends (unless manually dropped) – or, if another session is using the object at that point, when that session finishes with it (i.e. as soon as no locks are held). If other sessions need more permanent use of a temporary object, consider creating a permanent object and dropping it manually.
    • @ = User-defined Table Variable
      User-defined Table Variables were introduced in SQL Server 2000 (or, wow – was it 7.0?) and provide an alternative to temporary tables: you declare a variable of type TABLE and then populate and use it in a variety of ways. There has been A LOT of debate over whether you should always use table variables or always use temp tables. My response is that I ALWAYS avoid the word always! Table variables are NOT always better, nor are temp tables always better – each has its uses. I tend to like temp tables when the object is used over a longer period of time: I can create non-key (nonclustered) indexes on it, and it's more flexible to create in the first place (SELECT INTO can be used to create a temp table). I can also reference a temporary table in nested subprocedures, because it's not local to the procedure in which it was created. If you don't need any of those things, a table variable might be better – specifically, when you have smaller objects that don't need to be accessed outside the procedure in which they were created, and when you only need KEY indexes (a table variable ONLY supports the indexes created as part of its declaration – meaning PRIMARY KEY and UNIQUE constraints).
  • Objects created by client applications – this is possibly a large part of your problem… Profiling can help you to determine if there's a lot of TempDB usage from your client applications.
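To make the three kinds of user-created temporary objects concrete, here is a minimal T-SQL sketch (object and column names are illustrative):

```sql
-- Local temp table: visible only to this session, dropped when it ends.
CREATE TABLE #LocalOrders (OrderID int PRIMARY KEY, Total money);

-- Global temp table: visible to ANY session until the creating session
-- ends (and any sessions still using it finish).
CREATE TABLE ##GlobalOrders (OrderID int PRIMARY KEY, Total money);

-- Table variable: only constraint-based indexes (PRIMARY KEY / UNIQUE)
-- can be defined in the declaration; no CREATE INDEX afterward.
DECLARE @Orders TABLE (OrderID int PRIMARY KEY, Total money);

-- Temp tables are more flexible: SELECT INTO can create them, and you
-- can add nonclustered indexes after the fact.
SELECT OrderID, Total INTO #FromSelect FROM ##GlobalOrders;
CREATE NONCLUSTERED INDEX idx_Total ON #FromSelect (Total);
```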

OK, so now that you know what goes there – how do you make it optimal?

First and foremost, TempDB is in cache just as any other database is in cache. TempDB does not spill to disk unless you are low on cache and/or have a lot of in-flight transactions in TempDB. Although TempDB is not persisted across a restart, it still does some logging, so you should approach its optimization much as you would any other database.

Things you should do for TempDB (that are a lot like what you should do for every database):

  1. Isolate the data and log portion of TempDB.
  2. Place them on clean, defragmented disks.
  3. Pre-allocate them so they don't need to do a lot of autogrowth.
  4. Make sure you have sufficient memory to support active objects (check for disk activity to the disks that contain TempDB files).
  5. Make sure that transactions are written efficiently so that there are no unusually long running transactions that are unnecessarily holding resources (and therefore locks and therefore log activity).

And – if you need to move TempDB, you should review this KB Article: Moving SQL Server databases to a new location with Detach/Attach
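Note that for tempdb specifically, the move is done with ALTER DATABASE rather than detach/attach, because tempdb is recreated at startup. A sketch, again assuming the default logical file names and with illustrative paths:

```sql
-- Point tempdb's files at a new location; the change takes effect the
-- next time SQL Server is restarted. Paths shown are illustrative.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLLog\templog.ldf');
```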

Things you should do SPECIFICALLY for TempDB (especially if you're running on a multiproc machine):

Before I say what… let me tell you why. TempDB has a large number of objects being created all the time. For an object to be created, space must be allocated to it. Space allocation is determined by looking at some of the internal system pages (the GAM and SGAM). In the end, it is these pages that start to see significant contention (with just one file) in a VERY active TempDB. To minimize that contention, you can create multiple files.

  1. Consider creating multiple files for TempDB (even if on the same physical disks) so that there is less of a bottleneck when objects are being allocated. Make sure to read associated KB.
  2. Consider setting a trace flag to have object allocation grab extents rather than pages. Make sure to read associated KB.

BOTH of these last two are described in detail by a KB article: FIX: Concurrency enhancements for the tempdb database.
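A sketch of both steps, with illustrative file names, paths and sizes (keep the data files equally sized so allocation is spread evenly). The trace flag shown is 1118, the one the KB above describes; it makes allocations use full (uniform) extents rather than single pages from mixed extents:

```sql
-- (1) Add extra data files to tempdb to spread allocation contention.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'D:\TempDB\tempdb2.ndf', SIZE = 1GB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'D:\TempDB\tempdb3.ndf', SIZE = 1GB);

-- (2) Have allocations grab uniform extents instead of mixed-extent
-- pages, reducing SGAM contention. Typically set as a server startup
-- parameter (-T1118); enabled here server-wide via DBCC for illustration.
DBCC TRACEON (1118, -1);
```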

OK – so that should really help! Moving forward (meaning SQL Server 2005), having multiple files can still help for TempDB.

5 thoughts on “Understanding TempDB, table variables v. temp tables and Improving throughput for TempDB”

  1. Hi,
    I use temporary tables heavily, and my tempdb grows up to 5 GB.
    Should I back up or truncate it to bring it back to its normal size?
    Is there an article about TempDB maintenance?

  2. Hey there – LOBs (Large Objects) are NOT manipulated in TempDB. And as far as TempDB maintenance goes, NO… you should not need to back up or clear the transaction log for TempDB. However, you might want to run Profiler against your server and see if there are long-running and/or poorly written transactions which are causing TempDB’s log to grow quite large. That could be what’s causing you grief!

    Thanks for reading/asking!

  3. Hi, I really like this blog – it’s an excellent idea.

    You say that it’s good to consider "creating multiple files for TempDB" if you have multiple processors (I assume you mean logical processors and not physical ones) – but I cannot find any description on the web of how you create multiple files for TempDB (I’m still learning my way around SQL Server).

    If you can point me in the right direction I’d appreciate the hand.

  4. Hi Kimberley.

    I realize this blog post is old now, but I have a question for you. I have an update script which involves intermediate storage of data in a temporary table. This table holds 32 million records (on average 60 bytes per record), which is 1.8 GB in total, excluding the index.

    I have 4 cores so I have created 4 files of each 1GB.

    My question is, how are temp tables this big stored in tempdb? Is the temporary table evenly spread out over the 4 files? Or does the table fill one file completely, and then store the rest in second file?

