Multiple log files and why they’re bad

About a month ago I kicked off a survey with some code to run to figure out how many log files your databases have (see here). There are all kinds of misconceptions about transaction logs and how to configure them (and how they work), and I’m amazed at the misinformation I continue to see published. For instance, a few weeks back I skimmed through a video that stated in multiple places that the default transaction log growth rate is 10MB – it’s not; it’s 10%, and has been since SQL Server 2005.
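If you want to see what your databases are actually configured for, here’s a minimal sketch (assuming you can query the sys.master_files catalog view) that lists the autogrowth setting for every log file – the is_percent_growth column distinguishes percentage growth from fixed-size growth:

```sql
-- List the autogrowth setting for every transaction log file.
-- When is_percent_growth = 1, the growth column is a percentage;
-- otherwise it's a number of 8KB pages.
SELECT
    DB_NAME (database_id) AS [Database],
    name AS [LogicalFileName],
    CASE
        WHEN is_percent_growth = 1
            THEN CONVERT (varchar (10), growth) + '%'
        ELSE CONVERT (varchar (10), growth * 8 / 1024) + ' MB'
    END AS [AutogrowthSetting]
FROM sys.master_files
WHERE type_desc = N'LOG';
```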

I got data back from 1300 SQL Server instances across the world from 75 people (thanks!), with an average of 18 databases per instance including system databases (which jumped to an average of 29 per instance if I included 4 anomalous instances with many thousands of databases each).

Out of all those instances, only 32 had databases with more than one log file:

  • 23 instances with one database with two log files
  • 3 instances with two databases with two log files
  • 1 instance each with 3, 4, 8, 9, and 27 databases with two log files
  • 2 instances with one database with four log files

So 2.5% of instances surveyed had at least one database with more than one log file.
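If you didn’t run the survey code, a minimal sketch to find such databases on an instance is below (again using sys.master_files, which covers all databases on the instance):

```sql
-- Find databases with more than one transaction log file.
SELECT
    DB_NAME (database_id) AS [Database],
    COUNT (*) AS [LogFileCount]
FROM sys.master_files
WHERE type_desc = N'LOG'
GROUP BY database_id
HAVING COUNT (*) > 1;
```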

I think I’m pleased about that, as I expected the number to be higher, but I also suspect I’m conflating poorly configured VLF totals with general log mis-configuration – a higher percentage of systems out there have transaction logs with too many VLFs. Let’s settle for pleased :-)

But why do I care? And why should you care? Although it seems like there’s nothing damaging about having multiple log files, I don’t think that’s true.

Firstly, having multiple log files implies that at some point the first one ran out of space, and because the second one still exists, the first one might still be pretty large (maybe the second one is large too!). That doesn’t really concern me for crash recovery (which is bounded by how much un-recovered log there is), or for the performance of HA technologies like database mirroring, Availability Groups, or replication, which are bounded by the rate of transaction log generation, not its size.

What I care about is performing a restore during disaster recovery. If the log files don’t exist, they must be created and zero-initialized – and this happens twice if you also restore a differential backup, as both the full and differential restores zero out the log. If the first log file is as big as it can be, and there’s still a second log file, that’s potentially a lot of log file to zero-initialize, which translates into more downtime during disaster recovery.

I would much rather that all the log files apart from the first one are removed once they’re no longer needed: simply wait until all the active VLFs (marked with status 2 in the output from DBCC LOGINFO – see here) are in the first log file, remove the extra file with a simple ALTER DATABASE, and then reduce the size of the remaining log file to something reasonable.
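The sequence might look like the following sketch – the database and file names here are hypothetical, so substitute your own, and only run the REMOVE FILE once DBCC LOGINFO shows no status-2 VLFs in the extra file:

```sql
-- 1) Check which VLFs are active (Status = 2). The FileId column
--    shows which physical log file each VLF belongs to.
DBCC LOGINFO (N'MyDatabase');

-- 2) Once no active VLFs remain in the extra file, remove it.
ALTER DATABASE [MyDatabase] REMOVE FILE [MyDatabase_log2];

-- 3) Shrink the remaining log file, then set a sensible fixed size
--    and growth increment (pick values appropriate for your workload).
DBCC SHRINKFILE (N'MyDatabase_log', 1024); -- target size in MB
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8GB, FILEGROWTH = 512MB);
```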

Which brings me to the second thing I care about: why was the extra log file needed in the first place? Why did the transaction log run out of space, necessitating creating another log file? That’s the only explanation I can think of for having more than one log file as there is no performance gain from multiple log files – SQL Server will write to them sequentially, never in parallel (Jonathan demonstrates this neatly with Extended Events here.)

(I like to draw a parallel here with page splits. People fixate on the fact that they’ve got fragmentation, not the massive performance issue that created the fragmentation in the first place – page splits themselves!)

I blogged about the Importance of proper transaction log file size management more than three years ago (and here five years back), and many others have blogged about it too, but it’s still one of the most common problems I see. Log growth can easily be monitored using the Log Growths performance counter in the Databases performance object, and I’m sure someone’s written code to watch for that counter being incremented for a database and alert the DBA.
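A minimal sketch of such a check, using the sys.dm_os_performance_counters DMV – the counter is cumulative, so in practice you’d snapshot it periodically and alert when a value increases between snapshots:

```sql
-- Snapshot the cumulative Log Growths counter for each database.
-- Persist the results on a schedule and alert when a value increases.
SELECT
    instance_name AS [Database],
    cntr_value AS [LogGrowths]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%:Databases%'
    AND counter_name = N'Log Growths'
    AND instance_name <> N'_Total'
ORDER BY cntr_value DESC;
```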

For someone who’s a DBA, there’s no excuse for having out-of-control transaction logs IMHO, but for involuntary DBAs and those who administer systems where SQL Server sits hidden for the most part (e.g. SharePoint), I can understand not knowing.

But now you do. Get reading these articles and get rid of those extra log files – and the need for them! Use the code in the original survey (see the link at the top) to see whether you’ve got any extra log files kicking around.


PS For a general overview of logging, recovery, and the transaction log, see the TechNet Magazine article I wrote back in 2009.
