Multiple log files and why they’re bad

About a month ago I kicked off a survey with some code to run to figure out how many log files your databases have (see here). There are all kinds of misconceptions about transaction logs and how to configure them (and how they work) and I’m amazed at the misinformation I continue to see published. For instance, a few weeks back I skimmed through a video that stated in multiple places that the default transaction log growth rate is 10MB – it’s not, it’s 10% and has been since SQL Server 2005.
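As an aside, if you want to check what growth rate your own log files are actually set to, a quick query like this against sys.master_files will show it (growth is stored as a number of 8KB pages unless it's a percentage):

-- List the autogrowth setting for every transaction log file on the instance.
-- growth is a percentage when is_percent_growth = 1, otherwise a count of 8KB pages.
SELECT
    DB_NAME (database_id) AS [Database],
    name AS [LogicalFileName],
    CASE WHEN is_percent_growth = 1
        THEN CONVERT (varchar (20), growth) + '%'
        ELSE CONVERT (varchar (20), growth * 8 / 1024) + ' MB'
    END AS [AutoGrowthSetting]
FROM sys.master_files
WHERE type_desc = N'LOG';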

I got data back from 1300 SQL Server instances across the world from 75 people (thanks!), with an average of 18 databases per instance including system databases (which jumped to an average of 29 per instance if I included 4 anomalous instances with many thousands of databases each).

Out of all those instances, only 32 had databases with more than one log file:

  • 23 instances with one database with two log files
  • 3 instances with two databases with two log files
  • 1 instance each with 3, 4, 8, 9, and 27 databases with two log files
  • 2 instances with one database with four log files

So 2.5% of instances surveyed had at least one database with more than one log file.

I think I’m pleased about that, as I expected the number to be higher, but I suspect I was equating poorly configured VLF totals with general log mis-configuration – a much higher percentage of systems out there have transaction logs with too many VLFs. Let’s settle for pleased :-)

But why do I care? And why should you care? Although it seems like there’s nothing damaging about having multiple log files, I don’t think that’s true.

Firstly, having multiple log files implies that the first one ran out of space at some point, and because the second one still exists, the first one is probably still pretty large (maybe the second one is large too!). I don’t really care about that for crash recovery (which is bounded by how much un-recovered log there is), or for the performance of HA technologies like database mirroring, Availability Groups, or replication, which are bounded by transaction log generation rate, not size.

What I care about is performing a restore during disaster recovery. If the log files don’t exist, they must be created and zero-initialized – and that happens twice if you also restore a differential backup, as both the full and differential restores zero out the log. If the first log file is as big as it can be, and there’s a second log file on top of that, that’s potentially a lot of log file to zero-initialize, which translates into more downtime during disaster recovery.
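If you want to see the zeroing happening for yourself, one way (purely a diagnostic – don’t leave these enabled) is to use trace flags 3004 and 3605 so that the file-zeroing operations are reported in the error log, and then run the restore. The database, file, and backup names below are placeholders:

-- Diagnostic only: report file zero-initialization in the error log during a restore.
DBCC TRACEON (3004, 3605, -1);
GO
-- Full restore followed by a differential restore (names are placeholders).
RESTORE DATABASE [YourDatabase]
    FROM DISK = N'D:\Backups\YourDatabase_Full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE DATABASE [YourDatabase]
    FROM DISK = N'D:\Backups\YourDatabase_Diff.bak'
    WITH RECOVERY;
GO
-- Look for 'Zeroing ...' messages against the log file(s).
EXEC xp_readerrorlog;
GO
DBCC TRACEOFF (3004, 3605, -1);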

I would much rather that all the log files apart from the first one are removed once they’re no longer needed: wait until all the active VLFs (marked with status 2 in the output from DBCC LOGINFO – see here) are in the first log file, remove the extra file with ALTER DATABASE, and then reduce the remaining log file to a reasonable size – something like the sketch below.
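A rough sketch of that process – the database name, logical file names, and sizes here are placeholders, so substitute your own:

USE [YourDatabase];
GO
-- Step 1: check where the active VLFs are. Status = 2 means active; the FileId
-- column shows which physical log file each VLF is in. Wait (taking log backups
-- as necessary) until no active VLFs remain in the extra file.
DBCC LOGINFO;
GO
-- Step 2: once the extra log file contains no active VLFs, remove it.
ALTER DATABASE [YourDatabase] REMOVE FILE [YourDatabase_log2];
GO
-- Step 3: shrink the remaining log file right down, then set it to a sensible
-- fixed size (8GB and 512MB are just example values - size it for your workload).
DBCC SHRINKFILE (N'YourDatabase_log', 1024);
ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8GB, FILEGROWTH = 512MB);
GO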

Which brings me to the second thing I care about: why was the extra log file needed in the first place? Why did the transaction log run out of space, necessitating creating another log file? That’s the only explanation I can think of for having more than one log file, as there is no performance gain from multiple log files – SQL Server writes to them sequentially, never in parallel (Jonathan demonstrates this neatly with Extended Events here).
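Jonathan’s demo uses Extended Events to show the individual log writes; a much coarser way to observe the same behavior (just a sketch, with a placeholder database name) is to snapshot the per-file I/O counters around a log-heavy workload and watch where the writes accumulate:

-- Per-file I/O statistics for a database's log files. Run before and after a
-- log-heavy workload: the write counters accumulate in one log file at a time,
-- because the log is filled sequentially, never striped across the files.
SELECT
    mf.name AS [LogicalFileName],
    vfs.num_of_writes,
    vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats (DB_ID (N'YourDatabase'), NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id
WHERE mf.type_desc = N'LOG';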

(I like to draw a parallel here with page splits. People fixate on the fact that they’ve got fragmentation, not the massive performance issue that created the fragmentation in the first place – page splits themselves!)

I blogged about the Importance of proper transaction log file size management more than three years ago (and here five years back), and many others have blogged about it too, but it’s still one of the most common problems I see. Log growth can easily be monitored using the Log Growths performance counter in the Databases performance object, and I’m sure someone’s written code that watches for the counter being incremented for a database and alerts the DBA.
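The counter is also exposed through sys.dm_os_performance_counters, so a check like this (just a sketch to wire into whatever monitoring you already have) shows which databases are auto-growing their logs:

-- Cumulative log growths per database (since the counters were last cleared).
-- A climbing value means that database's log is auto-growing and is worth
-- investigating - and the log probably needs to be sized properly.
SELECT
    RTRIM (instance_name) AS [Database],
    cntr_value AS [LogGrowths]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
    AND counter_name = N'Log Growths'
    AND instance_name <> N'_Total'
ORDER BY cntr_value DESC;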

For someone who’s a DBA, there’s no excuse for having out-of-control transaction logs IMHO, but for involuntary DBAs and those who administer systems where SQL Server sits hidden for the most part (e.g. SharePoint), I can understand not knowing.

But now you do. Get reading these articles and get rid of those extra log files – and the need for them! Use the code in the original survey (see the link at the top) to see whether you’ve got any extra log files kicking around.
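If you just want a quick check right now, something along these lines (not the survey script – just a minimal sketch against sys.master_files) will list any databases with more than one log file:

-- Find databases that have more than one transaction log file.
SELECT
    DB_NAME (database_id) AS [Database],
    COUNT (*) AS [LogFileCount]
FROM sys.master_files
WHERE type_desc = N'LOG'
GROUP BY database_id
HAVING COUNT (*) > 1;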

Enjoy!

PS For a general overview of logging, recovery, and the transaction log, see the TechNet Magazine article I wrote back in 2009.

All SQLskills 2013 Immersion Events open for registration!

All of our 2013 public classes are now open for registration!

Based on requests from people, attendee ratings of the hotels we used this year, and the ease of using hotels we know, we’re using the same locations again. This means we cover both sides of the US, central US, and Europe.

Please know that these classes are final: the hotel contracts are signed, and the classes will not be cancelled or moved for any reason, nor will the dates change.

  • February 4-8, 2013: Internals and Performance (IE1) in Tampa, FL – USA
  • February 11-15, 2013: Performance Tuning (IE2) in Tampa, FL – USA
  • April 29-May 3, 2013: Internals and Performance (IE1) in Chicago, IL – USA
  • April 29-May 3, 2013: Immersion Event for Business Intelligence (IEBI) in Chicago, IL – USA (co-located but in a different training room; attendance is for one event or the other – an attendee cannot move back and forth between them)
  • May 6-10, 2013: Performance Tuning (IE2) in Chicago, IL – USA
  • May 13-17, 2013: High Availability & Disaster Recovery (IE3) in Chicago, IL – USA
  • May 13-17, 2013: Immersion Event for Developers (IEDev) in Chicago, IL – USA (co-located but in a different training room; attendance is for one event or the other – an attendee cannot move back and forth between them)
  • May 20-24, 2013: Development Support (IE4) in Chicago, IL – USA
  • June 3-7, 2013: Internals and Performance (IE1) in London – UK
  • June 10-14, 2013: Performance Tuning (IE2) in London – UK
  • June 17-21, 2013: High Availability & Disaster Recovery (IE3) in London – UK
  • June 24-28, 2013: Development Support (IE4) in London – UK
  • September 16-20, 2013: Internals and Performance (IE1) in Bellevue, WA – USA
  • September 23-27, 2013: Performance Tuning (IE2) in Bellevue, WA – USA

One thing to note is that the course prices have increased slightly for 2013, reflecting rising food, logistics, travel, and accommodation costs. We kept our prices the same for the last three years, but now we have to raise them a little.

For US classes, the new early-bird price is US$3,295 and the full price is US$3,795. However, for all registrations received before January 1, 2013, and for everyone who has attended a class in the 12 months prior to registering, we will only charge the 2012 early-bird price of US$2,995 – a super-early-bird price! – so get your registrations in early!

For UK classes, the new early-bird price is US$3,795 and the full price is US$4,295. There is a similar super-early-bird and past-attendee price equal to the 2012 UK early-bird price of US$3,495 – so again, get your registrations in early!

See here for the main Immersion Event Calendar page that allows you to drill through to each class for more details and registration links.

So, that’s it for now. You can't get deeper and more comprehensive SQL Server training than we provide, anywhere in the world!

We hope to see you soon!

Do you want free hardware? We can help!

Our hardware guru, Glenn Berry, recently recorded a podcast with our good friends at RunAs Radio about the licensing changes in SQL Server 2012, and how many people are now finding they have hardware they can't fully utilize without spending a bunch more money. The podcast was published today and you can listen to it here. You've probably also heard about his best-selling book – SQL Server Hardware – which I strongly recommend.

The problem is that most companies don't know the ins and outs of choosing hardware, and how to maximize its performance while minimizing the hardware and licensing costs. Once you've spent your money, you're stuck with whatever you bought until the next hardware budget cycle comes around, whether it fits the bill or not.

Or maybe you just decide to upgrade in place, on old, under-performing hardware – that's a recipe for disaster as your performance needs grow with your workload volume.

You really don't want to see a message like this in SQL Server's error log:

SQL Server detected 4 sockets with 8 cores per socket and 8 logical processors per socket, 32 total logical processors; using 20 logical processors based on SQL Server licensing.
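If you're not sure whether an instance is being limited like this, a quick check (just a sketch – the error log message is the authoritative source) is to look for schedulers that SQL Server has left offline:

-- Schedulers that are visible to SQL Server but offline - typically the result
-- of licensing (or affinity) limits leaving cores unused.
SELECT
    COUNT (*) AS [OfflineSchedulers]
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE OFFLINE';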

This is where we come in. Coincident with Glenn's podcast being published today, we've got a new consulting offering to help you out.

We can help you:

  • Evaluate an existing hardware and storage subsystem
  • Evaluate and recommend hardware and storage for an upgrade
  • Perform a sanity check on your proposed hardware specifications

With the savings we can show you, you may end up essentially getting your new hardware for free! That's a pretty good deal that's easy to sell to your management.

Contact us through our Hardware page and we'll hook you up with Glenn – you can't get better advice!