Multiple log files and why they’re bad

About a month ago I kicked off a survey with some code to run to figure out how many log files your databases have (see here). There are all kinds of misconceptions about how transaction logs work and how to configure them, and I’m amazed at the misinformation I continue to see published. For instance, a few weeks back I skimmed through a video that stated in multiple places that the default transaction log growth rate is 10MB – it’s not, it’s 10% and has been since SQL Server 2005.
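
If you want to check your own settings, here’s a minimal sketch of a query against sys.master_files that shows each log file’s autogrowth configuration (the growth column holds a percentage when is_percent_growth is 1, and a number of 8KB pages otherwise):

    -- Show the autogrowth setting for every log file on the instance.
    SELECT
        DB_NAME ([database_id]) AS [DatabaseName],
        [name] AS [LogicalFileName],
        CASE WHEN [is_percent_growth] = 1
            THEN CONVERT (varchar (10), [growth]) + '%'
            ELSE CONVERT (varchar (10), [growth] * 8 / 1024) + ' MB'
        END AS [GrowthSetting]
    FROM sys.master_files
    WHERE [type_desc] = 'LOG'
    ORDER BY [DatabaseName];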

I got data back from 1300 SQL Server instances across the world from 75 people (thanks!), with an average of 18 databases per instance including system databases (which jumped to an average of 29 per instance if I included 4 anomalous instances with many thousands of databases each).

Out of all those instances, only 32 had databases with more than one log file:

  • 23 instances with one database with two log files
  • 3 instances with two databases with two log files
  • 1 instance each with 3, 4, 8, 9, and 27 databases with two log files
  • 2 instances with one database with four log files

So 2.5% of instances surveyed had at least one database with more than one log file.

I think I’m pleased about that, as I expected the number to be higher – but I also suspect I was equating poorly configured VLF totals with general log misconfiguration, and that a higher percentage of systems out there have transaction logs with too many VLFs. Let’s settle for pleased :-)

But why do I care? And why should you care? Although it seems like there’s nothing damaging about having multiple log files, I don’t think that’s true.

Firstly, having multiple log files implies that the first one ran out of space at some point, and because the second one still exists, the first one might still be pretty large (maybe the second one is large too!). I don’t really care about that for crash recovery (which is bounded by how much un-recovered log there is), or for the performance of HA technologies like database mirroring, availability groups, or replication, which are bounded by transaction log generation rate, not size.

What I care about is performing a restore during disaster recovery. If the log files don’t exist, they must be created and zero-initialized – and twice if you also restore a differential backup, as both the full and differential restores zero out the log. If the first log file is as big as it can be, and there’s still a second log file, that’s potentially a lot of log file to zero-initialize, which translates into more downtime during disaster recovery.

I would much rather that all the log files apart from the first one are removed once they’re no longer needed: wait until all the active VLFs (marked with status 2 in the output from DBCC LOGINFO – see here) are in the first log file, do an ALTER DATABASE to remove the extra file, and then reduce the remaining log file to a reasonable size.
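
Here’s a minimal sketch of those steps, assuming a database called MyDB whose log files have the hypothetical logical names MyDB_log and MyDB_log2, with 8GB picked as the ‘reasonable’ size:

    USE [MyDB];
    GO

    -- Step 1: check where the active VLFs are (Status = 2 in the output).
    -- Repeat, taking log backups to let the log clear, until the extra
    -- file contains no active VLFs.
    DBCC LOGINFO;
    GO

    -- Step 2: once the extra file has no active VLFs, remove it.
    -- (SQL Server will raise an error if the file isn't empty yet.)
    ALTER DATABASE [MyDB] REMOVE FILE [MyDB_log2];
    GO

    -- Step 3: shrink the remaining log file right down, then grow it
    -- back to a sensible size in one step to keep the VLF count low.
    DBCC SHRINKFILE ([MyDB_log], 1);
    ALTER DATABASE [MyDB] MODIFY FILE (NAME = N'MyDB_log', SIZE = 8GB);
    GO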

Which brings me to the second thing I care about: why was the extra log file needed in the first place? Why did the transaction log run out of space, necessitating creating another log file? That’s the only explanation I can think of for having more than one log file, as there is no performance gain from multiple log files – SQL Server writes to them sequentially, never in parallel (Jonathan demonstrates this neatly with Extended Events here.)

(I like to draw a parallel here with page splits. People fixate on the fact that they’ve got fragmentation, not the massive performance issue that created the fragmentation in the first place – page splits themselves!)

I blogged about the Importance of proper transaction log file size management more than three years ago (and here five years back), and many others have blogged about it too, but it’s still one of the most common problems I see. Log growth can easily be monitored using the Log Growths counter in the Databases performance object, and I’m sure someone’s written code to watch for the counter being incremented for a database and alert the DBA.
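
For example, here’s a minimal sketch of a check against sys.dm_os_performance_counters – the counter value is cumulative since the instance started, so a monitoring job would snapshot it periodically and alert when it increases:

    -- Cumulative log growths per database since the instance started.
    SELECT
        [instance_name] AS [DatabaseName],
        [cntr_value] AS [LogGrowths]
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE '%Databases%'
        AND [counter_name] = 'Log Growths'
        AND [instance_name] <> '_Total'
    ORDER BY [cntr_value] DESC;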

For someone who’s a DBA, there’s no excuse for having out-of-control transaction logs IMHO, but for involuntary DBAs and those who administer systems where SQL Server sits hidden for the most part (e.g. SharePoint), I can understand not knowing.

But now you do. Get reading these articles and get rid of those extra log files, and the need for them! Use the code in the original survey (see the link at the top) to see whether you’ve got any extra log files kicking around.

Enjoy!

PS For a general overview of logging, recovery, and the transaction log, see the TechNet Magazine article I wrote back in 2009.

55 thoughts on “Multiple log files and why they’re bad”

  1. Hello,
    I would like first to give you tons of thanks for your always interesting (and even more than that!) posts.
    Regarding the size of the t-log, I’m afraid we DBAs don’t always have any choice other than having a big t-log, because of index rebuild operations.
    I manage some big databases containing tables with several billion records that are not partitioned (so I can’t rebuild at the partition level). You can imagine the size of the t-log (the database is in the full recovery model as it supports highly critical business) when we have to rebuild a nonclustered index (and in fact, I never rebuild the clustered index on such tables…). That explains why I get some huge transaction logs in my case, and it doesn’t necessarily mean the log is out of control.
    Should I consider expanding the t-log before the rebuild operation and then shrinking it afterwards?

    O.

  2. Good post, Paul. You mentioned the "Importance of proper transaction log file size management" – how easily people ignore this, and how easy it is to greatly reduce your RTO by ensuring your log is only as big as it needs to be.

  3. Hi Paul

    I too have encountered the same issues with transaction log configuration on clients’ SQL Server systems. In a previous contract, the client had a 3rd party IT provider set up their SharePoint system. The databases all had 1 file per core for the data files and for the transaction logs. I quizzed the architect who had implemented the system; he argued and pointed me to a Microsoft "best practice" document. I then pointed out to him that the doc stated this applied only to data files, not the t-log, and also that it was an optimisation for tempdb!!

    In my current contract we have a large number of t-log files for each database – again, along the old 1-file-per-core configuration <yawn>.
    They even have 1 database that has 50 files in the primary filegroup, all of differing sizes, growth settings, and max sizes, lol.

    Regards

    Perry Whittle

  4. In our case the SAN infrastructure dates back over 10 years, and the drive volumes have been migrated to newer SANs, with others added along the way. 10 years ago most of our SAN volumes were in the 10GB range; our databases were a lot smaller too.

    So as the older volumes ran out of space, we added new transaction log files on different drives. Add SAN snapshots and replication to different volumes, and it’s very hard to fix old mistakes.

  5. Hi Paul – slightly off topic, but it is about the t-log ;) – why can’t the log file use Instant File Initialization, when the data files can?

    Apologies if you have blogged about this already.

  6. Sorry for the delay replying to comments!
    @arun Yes, I’ve blogged about it before – look in the Instant File Initialization category on my blog.
    @olivier No – you should use index reorganize instead of rebuilding

  7. Hi Paul,

    Looking over the results of the survey, it would seem that a large number of users are still using a single log file for tempdb, or have intentionally excluded this information.

    I know the common tempdb recommendation (as you pointed out in https://www.sqlskills.com/blogs/paul/post/A-SQL-Server-DBA-myth-a-day-(1230)-tempdb-should-always-have-one-data-file-per-processor-core.aspx) is about 1/2 to 1/4 of the number of cores, even up to 1:1 on some machines with PAGELATCH issues.

    For example, we run a server that has 16 cores and started at a 1/2:1 ratio, giving us 8 tempdb log files. We have since adjusted based on watching our perf counters.

    Our results are:

        LogFiles    Databases
        1           9
        8           1

    NOTE: The last result is our TempDB

    I’m curious what your take/guess is on why the results don’t show more databases with 2 or more log files?

  8. Hi Paul,
    Great article as always!
    We have a server with about 300 databases (of moderate size) and a dedicated log drive, configured at 100GB. Most of those databases have a single log file. However, there are a few databases with lots of activity and some big tables. To accommodate the small size of the drive, I have created secondary log files on other drives. This is purely done to be cautious: if the log drive fills up, the application does not stop. This only happens once a year, and therefore does not warrant a bigger log drive.

  9. Hi Paul

    Perhaps there is a misunderstanding about the purpose of multiple log files on a database? For data, I would use multiple files within a filegroup, with one file placed on each LUN, thereby increasing performance by splitting the I/O across LUNs.

    My understanding is that this is NOT the way it works with log files. Perhaps some people imagine otherwise…

    It would be a nice feature though, so feel free to tell me I’m wrong :)

    1. I think that’s what some people think – that SQL Server parallelizes log writes, which it doesn’t. I can think of many clients where that would be beneficial, but others where the log flushing mechanism itself is the bottleneck, and that’s unlikely to change.

  10. I have 2 log files on a database and it is mirrored. How can I remove one log file without breaking mirroring? The database size is almost 1 TB.

    1. Just wait until there are no active VLFs in the second log file (use DBCC LOGINFO to look – see my blog post on it) and then do an ALTER DATABASE to drop the file. Mirroring is orthogonal to what you’re trying to do, as is the database size.

  11. Hi there, I know this is an old post, but I’m wondering what you do if you find that your SQL Server database has multiple transaction log files. I assume it’s not as simple as just deleting one? (Assuming you have solved the space issues you had in the first place.)

  12. Wow, that simple! Safe to assume switching to the simple recovery model would ensure this? And gosh, I hadn’t realised you’d answer so quickly :-)

  13. Paul,

    When multiple databases are hosted on a single instance, what is the best practice regarding placement of the log file for each database?

    1. Host all log files on one LUN? Will this break sequential I/O?
    2. Place each log file on a separate physical device? RAID 1? RAID 5?

    Thanks
    Ben

  14. Hi Paul,

    Please advise: what if we want to create multiple log files to overcome heavy I/O issues in a database using the full recovery model?

    Regards,
    FK

  15. Hi Paul,

    Please advise me: I’ve 4 databases on a SQL Server instance and want to put the log files of all of them on a single drive. Does it hurt performance if I place log files of different databases on a single drive?

    Thanks,
    Prahlad

  16. I have a case where the developers are converting old data for use in SAP, and some ETL transactions are huge – one ran my 1TB log file disk out of space, and the database is now in recovery. Until I can get more space, the only option it seems I have is to give the t-log another file on another disk. I have maintenance scheduled for adding a 2TB volume. So in this case it seems to me that running out of space for the t-log is worse than having multiple files for the t-log?

    P.S. SQL 2014 12.0.4422

    P.P.S. Also now in ‘Recovery of database is 35% complete with 90478 seconds remaining’ – phase 3 of 3. :(

      1. Thanks Paul.

        So with billions of reads and millions of writes, no batching, multiple transactions, and a couple of them ‘stopped’ in SSMS, this log file is too interesting. Thanks for your articles.

  17. Hi Paul,

    Thank you for blogging :)
    I have a question about your statement on using reorganize instead of rebuild.
    We use Ola Hallengren’s index optimize solution, which I find very useful. However, I believe it uses rebuild for indexes with fragmentation over a given limit (30% I think). Does that mean I should not use that solution after a data file shrink? (I know shrinking is a last resort – we have just removed 1.5 TB of data from a 2 TB database.)

    Thank you for any suggestions.

    1. You can change the configuration if you’re using Ola’s scripts to use whichever method you want, at whatever threshold you want. If you rebuild a large index after shrinking, it’s going to grow the data file(s) again. After doing a shrink, you should use reorganize to remove the fragmentation caused by the shrink, to avoid growing the file again.
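
       For example, here’s a minimal sketch (dbo.BigTable is a hypothetical name) of removing shrink-induced fragmentation without re-growing the file – reorganize compacts and reorders pages in place, whereas a rebuild builds a complete new copy of the index before dropping the old one:

           -- Reorganize rather than rebuild after a shrink, so the
           -- data file isn't grown again by a new copy of the index.
           ALTER INDEX ALL ON [dbo].[BigTable] REORGANIZE;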

  18. Hi Paul,
    I’ve a requirement to move a log file for a 24/7 database. I’ve put in a proposal to do this with “very” minimal downtime, while always maintaining the full ability to recover to a point in time. This involves agreeing a very quiet processing window with the customer, then shrinking the log file as low as it will go, taking the DB offline, altering the database log file location, manually moving the log file, and bringing the DB online. The customer has asked if I can do this without taking the DB offline. This is related to a space issue on the log volume, so I’m moving one DB’s log to a new dedicated log volume.

    The only way I think I could potentially do this is to create a new log file on the new volume, then shrink the log file on the old volume as low as it will go, and maybe set a max size on the log file on the old volume.

    As per the post above, that would leave me with 2 log files, although the first will be very small. So the question is: when the active VLFs are only in the new file, can I delete/remove the first log file? (See the sketch after this thread.)

    Thanks, Paul

      1. Thanks Paul, that’s good to know. It would be good if it became a supported feature in some future version – it would make moving a DB log file a whole lot easier, as it’s one of the most common tasks we do when we inherit poorly set-up servers…
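
        For what it’s worth, here’s a minimal sketch of the approach described in the question above (MyDB, MyDB_log, MyDB_log_new, and the L: volume are all hypothetical; SQL Server will raise an error at the REMOVE FILE step if the old file still contains active VLFs):

            -- Step 1: add a new log file on the new volume, sized up front.
            ALTER DATABASE [MyDB] ADD LOG FILE
                (NAME = N'MyDB_log_new',
                 FILENAME = N'L:\SQLLogs\MyDB_log_new.ldf',
                 SIZE = 8GB);
            GO

            -- Step 2: shrink the old file so new log records land in the
            -- new file, taking log backups until DBCC LOGINFO shows no
            -- active VLFs (status 2) remaining in the old file.
            DBCC SHRINKFILE ([MyDB_log], 1);
            DBCC LOGINFO;
            GO

            -- Step 3: remove the old file once it holds no active VLFs.
            ALTER DATABASE [MyDB] REMOVE FILE [MyDB_log];
            GO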

  19. Dear Paul,
    Can you help? If we have 2 transaction log files for a database, can we have a script that shrinks one log file when it is full, at which point the database will have switched to the other log file available for it?

    While it writes to the other log file, we can shrink the old log and reset its allocated space.

    Is there any way to automate this with a stored procedure in SQL?

    Thanks.

    1. You should only have a second log file temporarily, in which case remove it when you don’t need it any more. If you need two because there’s not enough space where the first one is, leave them however large they need to be – don’t shrink them over and over. I’m sure someone has a script to do what you want, but I don’t ever recommend automating log shrinking.

      1. As per what I understand from the blog:
        2 logs – if 1 is full, SQL Server will automatically switch to the other log file allocated for the database. Meanwhile, the 1st log will have enough time to autogrow.
        1 log – set autogrowth and let it grow to the maximum size, or shrink the log once it reaches 5-6GB (depending on system performance).

        We have now set simple recovery; with the full option, the log sometimes grew to 8GB in a week, so we had to shrink it once a week, since that slows down system performance.

        I am looking for a script so that when the free space in the log is 10% of the current size, it will grow as per the autogrowth setting we have configured.

        1. If 2 logs it will use them sequentially, in a circular manner.

          No – don’t shrink your logs every time they grow – leave them at the size they need to be. Having an 8GB log does not affect your performance. You should not be shrinking the log every week. Set to full so you can take log backups. If the log is growing, you need more frequent log backups. If you don’t want log backups, use simple. You need to go read my TechNet Magazine article from February 2009 on Understanding Logging and Recovery.
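
          As a minimal sketch (MyDB and the backup path are hypothetical), the fix for a log growing in the full recovery model is a more frequent scheduled log backup, not a shrink:

              -- Run on a schedule (e.g. every 15-30 minutes via SQL Agent);
              -- a real job would generate a unique file name per backup.
              BACKUP LOG [MyDB]
              TO DISK = N'G:\SQLBackups\MyDB_log_201302141200.trn';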

  20. Will log file restores also zero-initialize log files, like full and differential restores do? This is a real eye-opener – I wasn’t aware of it and haven’t found it anywhere on the internet.

    Your site is unique.

  21. Hey, Paul.

    Great info as always. Could you clarify “If 2 logs it will use them sequentially, in a circular manner….” in your response to Nor Zaili.

    We are planning to create 2 tempdb log files – the 1st on its own drive and the 2nd on a drive shared with user database logs.

    Does SQL Server always use the 1st tempdb log file first, as long as it has enough space? In other words, is it true that the 2nd tempdb log file will NOT come into action until the 1st tempdb log file is full?

    Your insight is highly appreciated.

    1. No, the second log file will be used as soon as the first one has been filled up, and then round-and-round in sequence. I would not do what you’re planning to do.

      1. Paul,

        We have implemented a RamDisk dedicated for the 1st tempdb log file. The RamDisk is large enough under normal operations.

        We have added the 2nd tempdb log file and placed it on a regular drive to take care of the special “spill away” scenario.

        This setup should buy us reaction time in case the 1st tempdb log file does indeed fill up.

        Again thank you for your insight.

        1. You shouldn’t do that – that configuration is incorrect. SQL Server will use the second log file before going back to re-use the first log file, so you’re going to alternate between very fast and regular write speeds to whichever log file is holding the current log. It doesn’t only use the second log file when the first one is full – it writes the log records sequentially through the whole of the first file, then all of the second file, then back around to the first again.

          1. Without the 2nd log file, wouldn’t we be subject to an emergency situation if the dedicated tempdb log drive filled up?

            Are you suggesting we create that drive with enough padding to reduce the chance of a full-drive situation? Even so, it seems a good idea to me to have a 2nd file as a safety measure.

            It could be that I have missed something obvious in your logic/suggestion. Sorry, Paul.

  22. Yes, if a single log file fills up and cannot grow, you’re in a bad spot. If you’re concerned about that, make the drive large enough and put some monitoring in place to check for the log growing. It’s rare that people create a second file, except in cases of space constraints.

    My concern is that I think you’re under the impression that the 2nd file, on the slower I/O subsystem, will only be used when the first one fills up. That’s not the case. It’ll be used all the time, so you’re going to see variable log throughput because you have one very fast file and one slower one.

    1. Hey, Paul.

      I think I got it.

      After all, I am armed with DBCC LOGINFO. I am able to run with it, now that you have pointed me in the right direction.

      Again much appreciated.

  23. Maybe people think it is like Oracle, with its redo logs cycling?

    Or, if an instance has 2000-plus SPIDs/connections (which I do support), will they all write to the one VLF, then the next, or do they write to different VLFs, with different transactions all running at the same time?
    Then would 2 .LDF files on different disks help?
    Paul: don’t shoot me for asking – LOL

    1. It will all write to one VLF (well, log block within the VLF) and then when they fill up, move to the next, and so on. Writes to the log are never in parallel, so two log files won’t help from a performance perspective.

  24. Hi,

    There is one situation I have come across where I thought it could be a benefit to have multiple log files for tempdb.

    – My current tempdb log file is placed on a 350 GB local drive (a business constraint to have only a cost-effective local drive for tempdb).
    – Something happened on the system in the morning, and my tempdb log shot up to 350 GB, triggering a low disk space alert, of which I was notified.
    – I observed that the single tempdb log file was 95% utilized and could not be shrunk.
    – I cleared the problematic query and was then able to shrink the log file. :-)

    Now, the thought is: what if I had not been notified about the low disk space, or had not been able to shrink the log file in time? There would have been a hung SQL Server and hence business impact.

    So, wouldn’t it be a good idea to have multiple log files on several similar local drives, restricting log files 1 to N-1 to some 80% of their drive space and keeping the Nth log file free to grow indefinitely?

    Please advise!

    1. Well, all you’re really doing there is working around the size of the drive rather than having one big log file. As long as you’re fine with all that space, there’s no problem. However, if that’s just one physical drive, that’s RAID 0 and a nasty single point of failure that will cause SQL Server to shut down (tempdb unavailable = shutdown).
