The Accidental DBA (Day 30 of 30): Troubleshooting: Transaction Log Growth

This month the SQLskills team is presenting a series of blog posts aimed at helping Accidental/Junior DBAs ‘keep the SQL Server lights on’. It’s a little taster to let you know what we cover in our Immersion Event for The Accidental/Junior DBA, which we present several times each year. You can find all the other posts in this series at http://www.SQLskills.com/help/AccidentalDBA. Enjoy!

For the final post in our 30-day series I want to cover one of the most common problems for Accidental DBAs to come across – uncontrolled transaction log growth.

I remember an email we were sent a few years ago, which basically said: I’ve got a 10-GB database and a 987-GB transaction log. What happened?!?!

The number-one cause of uncontrolled transaction log growth is the following sequence of events:

  1. Someone creates a database using the default recovery model of Full (inherited from the model database), or someone changes the recovery model of an existing database to Full (remember, I explained recovery models back on day 7 of the series).
    • Even though the recovery model is Full, the database behaves as if it’s using the Simple recovery model, where the transaction log automatically clears (to allow reuse and help prevent growth) whenever a checkpoint occurs. This is called the pseudo-Simple recovery model, and it continues until a full database backup is performed.
  2. Someone performs a full database backup, completing the transition of the database into the Full recovery model.
    • This database backup provides a basis for further log backups to be performed, and so the log will no longer clear until a log backup is performed.
  3. Nobody performs log backups.

As no one is performing log backups, and the database is properly using the Full recovery model, the log will never clear. Because it doesn’t clear, it will eventually fill up, require more space for new log records, and so it will automatically grow. And grow, and grow, until the disk volume runs out of space.
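If you want to check whether a database is heading down this path, a quick look at its recovery model and how full the log currently is will tell you. Here’s a minimal sketch (nothing here is specific to any one database):

SELECT [name], [recovery_model_desc]
FROM sys.databases;
GO

-- Reports the current log file size and the percentage of the log in use for every database.
DBCC SQLPERF (LOGSPACE);
GO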

The trick here is simple: perform regular log backups of databases using the Full (or Bulk-Logged) recovery model.
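As a minimal sketch of what that looks like (the database name and backup path are just placeholders – in practice you’d schedule this, for example as a SQL Agent job, at a frequency matching how much data loss you can tolerate):

-- Placeholder names: substitute your own database and backup location.
BACKUP LOG [MyDatabase]
TO DISK = N'D:\SQLBackups\MyDatabase_Log.trn'
WITH CHECKSUM;
GO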

This is all well and good as long as you know why the log cannot clear: you may already be taking log backups and still see the log growing, because something else is preventing it from clearing. SQL Server knows why the log can’t clear, and you can ask it using the following code:

SELECT [name], [log_reuse_wait_desc] FROM sys.databases;
GO

This lists, for each database, the reason the log could not clear the last time log clearing was attempted. In the case I described above, the output for the database would be LOG_BACKUP. There are a bunch of other possible reasons, including long-running transactions, unreplicated transactions, active data backups, and more; you can read the full list in the SQL Server Books Online topic Factors That Can Delay Log Truncation.
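If you want to narrow that output down to just the databases in the exact situation I described above, a sketch like this (the same catalog view, just filtered) works:

SELECT [name], [recovery_model_desc], [log_reuse_wait_desc]
FROM sys.databases
WHERE [recovery_model_desc] <> N'SIMPLE'
    AND [log_reuse_wait_desc] = N'LOG_BACKUP';
GO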

Whatever the reason your database’s log grew, take corrective action to allow the log to clear, and then take preventative action so the log doesn’t have to grow again. If your log has already grown really large, you’ll also want to shrink it down to a more reasonable size, and you can read more details on how to do that in Kimberly’s (old but still relevant) post here.
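As a rough sketch of what the shrink-and-regrow looks like (the database name, logical log file name, and sizes are placeholders – Kimberly’s post covers how to pick sensible values):

USE [MyDatabase];
GO
-- Shrink the oversized log file back down (target size in MB).
DBCC SHRINKFILE (N'MyDatabase_log', 1024);
GO
-- Then grow it back out to a sensible working size for your workload.
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8GB);
GO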

Apart from the risk of the disk volume running out of space, log growth is problematic because:

  • Every time the log grows, the new portion of the log has to be zero initialized, which takes some time and causes the write operations on the database to pause.
  • Every time the log grows, more internal chunks (called virtual log files, or VLFs) are added to the log, and the more of these there are, the bigger the performance impact on log operations (Kimberly’s post discusses this too, and there’s a quick way to count them in the sketch after this list).
  • As the log gets larger, the potential for more downtime during a disaster increases. If the entire database is destroyed, restoring from backups involves recreating the data and log files. The larger the log file is, the longer this takes because of zero initialization again.
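Here’s the quick check I mentioned: each row that comes back is one of those internal chunks (VLFs) in the current database’s log, so a result set with many thousands of rows is a sign the log grew in lots of small increments.

-- Undocumented but widely used: returns one row per virtual log file (VLF)
-- in the current database's transaction log.
DBCC LOGINFO;
GO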

Careful transaction log management is required as part of being responsible for a SQL Server instance, whether as an Accidental or experienced DBA. You can learn much more about the transaction log in my online training course SQL Server: Logging, Recovery, and the Transaction Log.

Finally, we really hope you’ve enjoyed reading our post-a-day series during June as much as we’ve enjoyed writing it, and we also hope you’ve learned a bunch of information that will help you increase your DBA skills. To continue your DBA education, make sure to follow our blogs, subscribe to our bi-weekly SQLskills Insider newsletter, check out our online training, and consider coming to one of our in-person training classes – for the ultimate learning experience!

Thanks – Paul & the SQLskills team

The Accidental DBA (Day 27 of 30): Troubleshooting: Tempdb Contention

One of the most common performance problems that exists in SQL Server instances across the world is known as tempdb contention. What does that mean?

Tempdb contention refers to a bottleneck for threads trying to access allocation pages that are in-memory; it has nothing to do with I/O.

Consider the scenario of hundreds of concurrent queries that all create, use, and then drop small temporary tables (which by their very nature are always stored in tempdb). Each time a temp table is created, a data page must be allocated, plus an allocation metadata page (an IAM page) to keep track of the data pages allocated to the table. This requires making a note in an allocation page (called a PFS page – see here for in-depth info) that those two pages have been allocated in the database. When the temp table is dropped, those pages are deallocated, and they must be marked as such in that PFS page again. Only one thread at a time can be changing the allocation page, making it a hotspot and slowing down the overall workload.
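To make that concrete, here’s a hypothetical example of the kind of code that drives this allocation churn when hundreds of connections run it concurrently (the procedure and the [dbo].[Orders] table are made up purely for illustration):

-- Hypothetical example: each execution creates, uses, and drops a small temp
-- table, which allocates and deallocates pages in tempdb and so has to update
-- the shared allocation pages (like the PFS page) every time.
CREATE PROCEDURE [dbo].[usp_GetRecentOrderTotal]
AS
BEGIN
    CREATE TABLE #RecentOrders ([OrderID] INT, [OrderTotal] MONEY);

    INSERT INTO #RecentOrders ([OrderID], [OrderTotal])
    SELECT [OrderID], [OrderTotal]
    FROM [dbo].[Orders]
    WHERE [OrderDate] > DATEADD (DAY, -1, GETDATE ());

    SELECT SUM ([OrderTotal]) AS [DayTotal] FROM #RecentOrders;

    DROP TABLE #RecentOrders;
END;
GO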

Back in SQL Server 2005, my dev team at Microsoft implemented a small cache of temp tables, to try to reduce this contention point, but it’s only a small cache, so it’s very common for this contention to be an issue, even today.

What’s really interesting though, is that many people don’t realize they have this problem – even seasoned DBAs. It’s really easy to figure out whether you have this kind of problem using the sys.dm_os_waiting_tasks DMV. If you run the query I have below, you’ll get an idea of where the various threads on your server are waiting, as Erin discussed earlier this month.

SELECT
    [owt].[session_id],
    [owt].[exec_context_id],
    [owt].[wait_duration_ms],
    [owt].[wait_type],
    [owt].[blocking_session_id],
    [owt].[resource_description],
    CASE [owt].[wait_type]
        WHEN N'CXPACKET' THEN
            RIGHT ([owt].[resource_description],
            CHARINDEX (N'=', REVERSE ([owt].[resource_description])) - 1)
        ELSE NULL
    END AS [Node ID],
    [es].[program_name],
    [est].[text],
    [er].[database_id],
    [eqp].[query_plan],
    [er].[cpu_time]
FROM sys.dm_os_waiting_tasks [owt]
INNER JOIN sys.dm_exec_sessions [es] ON
    [owt].[session_id] = [es].[session_id]
INNER JOIN sys.dm_exec_requests [er] ON
    [es].[session_id] = [er].[session_id]
OUTER APPLY sys.dm_exec_sql_text ([er].[sql_handle]) [est]
OUTER APPLY sys.dm_exec_query_plan ([er].[plan_handle]) [eqp]
WHERE
    [es].[is_user_process] = 1
ORDER BY
    [owt].[session_id],
    [owt].[exec_context_id];
GO


If you see a lot of rows of output where the wait_type is PAGELATCH_UP or PAGELATCH_EX, and the resource_description is 2:1:1, then that’s the PFS page (database ID 2 – tempdb, file ID 1, page ID 1). If you see 2:1:3, that’s another allocation page called an SGAM (more info here).
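If you just want a quick tally of how bad the contention is right now, here’s a cut-down sketch against the same DMV that only looks at page-latch waits on tempdb pages:

-- Count the threads currently waiting on page latches in tempdb (database ID 2),
-- grouped by the page they're waiting for. Lots of waiters on 2:N:1 (a PFS page)
-- or 2:N:3 (an SGAM page) indicates allocation contention.
SELECT
    [wait_type],
    [resource_description],
    COUNT (*) AS [waiting_tasks]
FROM sys.dm_os_waiting_tasks
WHERE [wait_type] LIKE N'PAGELATCH%'
    AND [resource_description] LIKE N'2:%'
GROUP BY [wait_type], [resource_description]
ORDER BY [waiting_tasks] DESC;
GO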

There are three things you can do to alleviate this kind of contention and increase the throughput of the overall workload:

  1. Stop using temp tables
  2. Enable trace flag 1118 as a start-up trace flag
  3. Create multiple tempdb data files

Ok – so #1 is much easier said than done, but it does solve that problem :-) Seriously though, you might find that temp tables are a design pattern in your environment because they made a query go faster once and then everyone started using them, whether or not they’re *really* needed for performance and throughput. That’s a whole other topic though, and outside the scope of this post.

#2 prevents contention on the SGAM pages by slightly changing the allocation algorithm used. There is no downside to having this enabled, and I’d go so far as to say that all SQL Server instances across the world should have this trace flag enabled by default (and I said the same thing when I ran the dev team that owned the allocation code in the SQL Server Storage Engine).
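The trace flag ultimately needs to be a -T1118 startup parameter (set through SQL Server Configuration Manager) so it survives restarts, but as a stop-gap you can also turn it on globally right away – a minimal sketch:

-- Enable trace flag 1118 globally for the life of the current instance.
DBCC TRACEON (1118, -1);
GO
-- Verify which trace flags are currently active globally.
DBCC TRACESTATUS (-1);
GO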

#3 will help to remove the PFS page contention, by spreading the allocation workload over multiple files, thus reducing contention on the individual, per-file PFS pages. But how many data files should you create?

The best guidance I’ve seen is from a great friend of mine, Bob Ward, who’s the top Escalation Engineer in Microsoft SQL Product Support. First, figure out the number of logical processor cores you have (e.g. two CPUs with 4 physical cores each, plus hyperthreading enabled, gives 2 x 4 x 2 = 16 logical cores). If you have fewer than 8 logical cores, create the same number of data files as logical cores. If you have more than 8 logical cores, create 8 data files and then add more in chunks of 4 if you still see PFS contention. Make sure all the tempdb data files are the same size too. (This advice is now official Microsoft guidance in KB article 2154845.)
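As a sketch of what adding files looks like (the file names, paths, sizes, and growth settings below are placeholders – match them to your existing tempdb data file so all the files stay the same size):

-- Placeholder names, paths, and sizes: add three more data files so a server
-- with 4 logical cores ends up with four equally sized tempdb data files.
ALTER DATABASE [tempdb] ADD FILE
    (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE [tempdb] ADD FILE
    (NAME = N'tempdev3', FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE [tempdb] ADD FILE
    (NAME = N'tempdev4', FILENAME = N'T:\TempDB\tempdev4.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
GO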

Here’s an example screenshot of a 100-connection workload on tempdb.

[Screenshot: tempdb workload throughput with one data file vs. four data files]

On the left is the period where I had a single tempdb data file, with nearly all the connections waiting for PAGELATCH_UP on 2:1:1. After I added three more tempdb data files and the workload stabilized, the throughput jumped, as you can clearly see.

This has just been an overview of this problem, but as I said, it’s very common. You can get more information from the links below:

New IE1 class added in Chicago in November

Our last scheduled IE1 Immersion Event on Internals and Performance in Bellevue, WA in September has just sold out at 36 students, and there’s still huge demand for IE1 this year so we’ve added another one!

The final class of the year will be IE1 in our usual location just outside Chicago (the same hotel where we held eight sold-out classes in May). We decided to go back to Chicago rather than a new location because we know the hotel and its staff really well, they provide excellent service, the sleeping rooms are large and great value, and we know exactly how to host our events there.

The class will run November 11-15, 2013 and has an early-bird price of US$3,295 for registrations received by September 28th.

It’s open for registration right now – click HERE for details.

You can always see our complete training schedule on the training section of our website here.

We hope to see you there!

PS Don’t forget our new Hardware, Accidental/Junior DBA, and Advanced T-SQL classes in September too!