Transaction Log VLFs – too many or too few?


Way back in June 2005, I blogged about ‘8 Steps to better transaction log throughput’. I wrote that post after seeing (again and again) overly fragmented transaction logs… Transaction logs can become *VERY* fragmented when they are not preallocated and instead grow excessively through unmanaged auto-growth (often left at the default settings).

First, here’s how the log is divided into VLFs. Each “chunk” that is added is divided into VLFs at the time the log grows (regardless of whether this is a manual or auto-grow addition), and the number of VLFs depends on the size that is ADDED, not the size of the log itself. So, take a 10MB log that is extended to 50MB: a 40MB chunk is being added, and that 40MB chunk will be divided into 4 VLFs. Here’s the breakdown by chunk size:

chunks up to 64MB = 4 VLFs

chunks larger than 64MB and up to 1GB = 8 VLFs

chunks larger than 1GB = 16 VLFs

And, what this translates into is that a transaction log created at 64GB in one go would have 16 VLFs of 4GB each. As a result, the transaction log can only clear in 4GB units, and a 4GB VLF can only clear when it’s completely inactive.
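If you want to see how a particular log has actually been carved up, you can count the VLFs directly with DBCC LOGINFO (the same command referenced in the comments below) – it returns one row per VLF:

    -- Run in the context of the database whose log you want to inspect;
    -- each row returned is one VLF, and FileSize is that VLF's size in bytes
    DBCC LOGINFO;

The number of rows returned is the number of VLFs in that log.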

[Edit Jan 2015: this algorithm has changed in SQL Server 2014 – see Paul’s post here for details.]

To get more ideally sized VLFs, consider creating the transaction log in 8GB chunks (8GB, then extend it to 16GB, then to 24GB, and so forth) so that the number (and size) of your VLFs is more reasonable (in this case, 512MB each).

Have fun and thanks for reading!!

kt

BEWARE – there’s a bug when growing the log in multiples of 4GB. Check out Paul’s post for more information: Bug: log file growth broken for multiples of 4GB. In the end, the safest approach is to allocate in chunks of 8000MB instead of 8GB.
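As a rough illustration of that advice, here’s a minimal T-SQL sketch using hypothetical names and paths (MyDB, MyDB_log): create the log at 8000MB and then extend it in 8000MB steps rather than creating it at its full size in one go.

    -- Create the database with an 8000MB log (hypothetical names/paths)
    CREATE DATABASE MyDB
        ON PRIMARY (NAME = MyDB_data, FILENAME = 'D:\Data\MyDB.mdf', SIZE = 10GB)
        LOG ON (NAME = MyDB_log, FILENAME = 'L:\Logs\MyDB_log.ldf', SIZE = 8000MB);
    GO

    -- Extend the log in 8000MB chunks up to the planned size
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 16000MB);
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 24000MB);
    GO

Each 8000MB chunk is divided into 16 VLFs of roughly 500MB, which keeps both the number and the size of the VLFs reasonable.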

47 thoughts on “Transaction Log VLFs – too many or too few?”

  1. Hi Kimberly,

    Thank you for the post! Just to verify I’m understanding this correctly, are you suggesting that, upon restart of SQL Server, the initial size of the transaction log be set to 8GB and manually grown in chunks of 8GB until reaching its target size? Or should I initialize my transaction log to 512MB and set it to auto-grow in chunks of 8GB?

  2. Hey there Michelle – Transaction log size should be determined as part of capacity planning (the primary factors are log maintenance and the types of transactions). Once the size has been determined – and planned – create the database (and therefore the transaction log) with an appropriately sized log, and make sure it’s not created in chunks larger than 8GB. So, if you need a 2GB log, just create that in one step. If you need a 20GB log, create it as 8GB, then extend it to 16GB and then to 20GB (but, if you think it might grow larger, you might just go 8, 16, 24 instead – that is, if you can spare the extra 4GB).

    Have fun!
    kt

  3. Kim,

    After I run the script and then run DBCC LOGINFO again, I still have over 150 results (before it was over 200).

    I was expecting that the script would bring the VLFs to a minimal number.

    Jason Love
    JasonLove@live.com

  4. Kimberly:

    I have been reading about VLF fragmentation and it’s a very interesting topic. Thanks for all of the research. We have a couple of VLDBs reaching approximately 2TB in size with log files in the hundreds of GB. I have gone through many of the databases and found what I believe to be a high number of VLFs – many in the hundreds and some in the thousands.
    I’m wondering about the point of diminishing returns when balancing the number of VLFs (internal log files) against fragmentation of the log file at the operating-system level.
    Is there some advice you can offer? Anything you have would be greatly appreciated. Thanks in advance,

    Drew

  5. Hi Kim,

    A colleague and I are at an impasse and numerous Googles have not found the answer. He thinks that SQL Server 2005 fixed the performance problem with too many VLFs (I won’t even say how many we have. Let’s just say it is more than 100). I didn’t think so, and I cannot find anywhere that says 2005 did indeed fix the problem. So, did 2005 fix the performance issue with too many VLFs?

  6. Hello Kimberly,

    If a database is replicated should its transaction log be set up differently due to the extra reading?

    Regards,

    SCM.

  7. Hi Kim.
    There is an ongoing debate in my DB group about too many VLFs and SQL 2005. One side says that 2005 fixes the performance issue with too many VLFs. The other side says that there is still an issue and that the count needs to be kept low. (We currently have about 1000 VLFs.)

    I was hoping you could clear this up.

  8. Excellent post. I have gone through the "8 Steps to better transaction log throughput" and got a much quicker database. Most of the steps I had done before, but the one about the VLFs did the trick in some of the DBs.

    Thanks
    STefan

  9. re: Stefan/Drew: Yeah, the VLF thing doesn’t impact everyone’s database but for some… it can DEFINITELY make a difference. And, it’s not as big of a problem as it used to be (there have been a few fixes). But, it’s still better to create this cleanly and pro-actively because some databases do still see a gain!

    re: SCM: no, a replicated database won’t have any different requirements BUT it should be optimized the same way. I suppose the ONE issue is that the publisher’s log cannot be cleared until replication has picked up the transactions, so that might make a publisher’s log grow larger. You should just make sure that it’s sized appropriately and that the servers to which you are replicating are stable.

    Cheers!
    kt

  10. Hey Kim,

    Excellent post! I’ve been preaching keeping the number of VLFs down for a while but never gave much consideration to having too few.

    I have a client that keeps their databases in simple recovery mode for their heavily used OLTP applications and relies on transactional replication for DR. Would you expect any benefit in this case from making the VLFs smaller (i.e., growing in 8GB chunks), or would you just size out to the max needed?

    Cheers,
    Zach

  11. Hi Kimberly,
    if I need a 24GB log file, as you mentioned, I will create 8GB, then increase by 8GB (total 16GB), and then increase by another 8GB (total 24GB). Below is my understanding – please correct me. Out of the two methods below, which one does SQL Server actually use?

    Method 1 :
    step1. 8 GB log creation creates 1 – 16 VLFs each of 0.5GB ( 0.5GB * 16 = 8GB) -> 8GB
    step2. Adding another 8 GB creates 17- 32 VLFs each of 0.5GB ( 0.5GB * 16 = 8GB) -> 16GB
    step3. Adding another 8 GB creates 33 – 48 VLFs each of 0.5GB ( 0.5GB * 16 = 8GB) -> 24 GB

    So now we have 16 VLFs * 3 => 48 VLFs

    Method 2:

    step1. 8 GB log creation creates 1 – 16 VLFs each of 0.5GB ( 0.5GB * 16 = 8GB) -> 8GB
    step2. Adding another 8 GB creates 1- 16 VLFs each of 1 GB ( 1 GB * 16 = 16 GB) -> 16GB
    step3. Adding another 8 GB creates 1 – 16 VLFs each of 1.5GB ( 1.5GB * 16 = 24GB) -> 24 GB

    So now we have 16 VLFs

    Thanks in advance.

    Regards
    Vijayasarathi

      1. Thank you Paul for your help here by answering so many of these posts. I have read the posting here and many similar ones elsewhere. I have a 40GB log that I have been diligent to watch. I set it to 40GB initially and it has not auto-grown. We have a heavily OLTP environment and take T-log backups every 15 min (known and set by the big bosses).
        I have 658 VLFs. From what I am reading I will either: 1. Shrink the file to 8GB and then manually grow it 5 times (5 * 8GB = 40GB) to 40GB, giving us about 80 VLFs (8GB = 16 VLFs) – though people elsewhere have said 50 VLFs is a good number, which bothers me because 80 VLFs would be higher than that; or 2. Leave it alone with 658 VLFs because no one is complaining and we have pretty good throughput (because you said “Hi Anthony – you should leave the log at whatever size it needs to grow to”). We are working with SQL Server 2008 R2. Also, good work on the DBCC stuff. I caught a server that was questionable and I had good backups :)

  12. Excellent article and a real eye opener. We have a 58GB database with an initial log file of 200MB set to the default 10% autogrowth. During the day the log file is <1GB (the trans log is backed up every 30 mins) but each night we run a reorganize index and update stats job which increases it to 70GB, which we then shrink back down to 200MB.

    Based on your post I won't be shrinking the log file anymore, but what should the initial log file size and the autogrowth value be? I was thinking 4GB with 4GB growth but I don't know if this would impact the trans log backups happening during the day.

    Regards
    Anthony Fletcher Rogers

    1. Hi Anthony – you should leave the log at whatever size it needs to grow to. I’d make sure that all the reorganizing is really necessary every night too – sounds like you’re reorganizing everything, which will produce 70GB of log backups every day. Whatever size the log needs to be, grow it to that size in 4GB chunks. 4GB autogrowth is way too much as the zero initialization will produce a noticeable pause. Set it to something like 256MB or 512MB. Thanks
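      For illustration, setting the autogrowth increment as suggested might look like this sketch (hypothetical database and log file names):

        -- Hypothetical names; sets the log's autogrow increment to 512MB
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);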

  13. Hello everyone

    Does the VLF advice and the “manual” 8GB growth approach still apply to SQL Server 2012 SP1?
    I have a large DB on SQL 2012 (SP1) which needs to have about 600-800 GB T-Log.
    This is required for large maintenance tasks.
    This database will also be Log Shipped to DR site so the roll-forward speed is critical.
    How should such a large T-Log be initially created for best results?
    thank you.

    1. Write a loop to create it at 8000 MB and then extend it by 8000 MB up to your desired size. In general, yes, we still recommend the 8GB chunking. Having said that, the bug has been fixed in 2012 SP1 so you could just go with 8GB and then increments of 8GB. If you plan to use it across versions then 8000MB might be safer.

      Cheers,
      kt
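      A minimal sketch of such a loop, assuming a hypothetical database MyDB with log file MyDB_log and a 600GB target (ALTER DATABASE doesn't accept a variable for SIZE, so dynamic SQL is used):

        DECLARE @currentMB int = 8000,          -- log already created at 8000MB
                @targetMB  int = 600 * 1024,    -- desired final size (~600GB)
                @sql       nvarchar(max);

        WHILE @currentMB < @targetMB
        BEGIN
            SET @currentMB = @currentMB + 8000; -- grow in 8000MB chunks
            SET @sql = N'ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = '
                     + CAST(@currentMB AS nvarchar(20)) + N'MB);';
            EXEC sys.sp_executesql @sql;
        END;

      The loop rounds the final size up to the next 8000MB multiple; adjust the target if that matters.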

  14. Hello,
    Based on this excellent post I started to check our VLFs, and came to find that we have some databases with > 1K!

    Now, based on what I read above and in other related posts, I gather that the VLFs should be around 512MB. We have a 3.6TB database with 4 t-logs (don’t ask why….) of 76GB each. We want to “fix” this, but to keep around 300GB of total log size, growing in increments of 8GB would lead to approx. 624 VLFs.
    Would the recommendation here be to go with bigger chunks? What could be an ideal size for the VLFs?

    Thanks!
    Richard

  15. What’s the best thing to do? My DB is 800GB and we are live 24 hours a day, 7 days a week. I have 20,292 VLFs – how can I reduce these without causing too many problems for my users? Thank you!

    1. Forgot to mention: my DB log usually grows to 70GB for maintenance, so I would grow it back to that size. Thanks.

  16. I was reading the below blog and was wondering if there was any truth to the portions about replication and mirroring being affected by large VLFs?

    http://blogs.msdn.com/b/sqladventurer/archive/2012/03/16/if-gt-50-vlfs-are-bad-then-16-is-good-right.aspx

    I understand that it could cause your log on the publisher to grow, but could large VLFs increase latency?

    You mention as well that backups are affected by large VLFs, but what about checkpoints for databases in the simple recovery model?

    Thanks for all you do!

  17. Hi Kim,

    Thanks for the article and especially thanks for the guidance on how large each VLF *should* be – 512MB. We have a DB that has a 200GB log file and currently ~1700 VLFs of about 128MB each. In our case, would that be too many, or should I still go with the method you propose and shrink the log file down to 8GB and then increase it in 8GB chunks – leaving me at the end with (200GB / 8GB) * 16 = 400 VLFs (better than 1700, I presume)?

    thx in advance

    1. Hey there Mark – 128MB is not too small, but there will be a bit more management overhead with these. So, you could do the cleanup to reduce your overall number and go to 512MB VLFs. I doubt you’ll see anything significant from this, though.

      Definitely let me know if you do but I don’t suspect you will!

      Hope that helps,
      Kimberly
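      For reference, the cleanup discussed in this thread (shrink the log right down, then regrow it in chunks) might look like the sketch below, with hypothetical names; it’s best run at a quiet time, right after a log backup, so the active portion of the log sits near the start of the file.

        USE MyDB;
        GO
        -- Shrink the log as far as possible (it may take a log backup and
        -- more than one pass before the active VLFs move to the front)
        DBCC SHRINKFILE (MyDB_log, 1);
        GO
        -- Regrow in 8000MB chunks back to the size the log actually needs
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8000MB);
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 16000MB);
        -- ...continue until the planned size is reached
        GO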

  18. I really enjoy reading your blog for all the insight into SQL Server you offer.

    Your articles about VLFs are my reference for the topic.

    I work with SQL2008R2 on several instances in different environments and have written a script to list some database characteristics. This script tells me the number of VLFs in a log file and also the filesizes per creationLSN.

    I am puzzled because I found several filesizes with the same createLSN (the first one). According to your blog notes I assumed an even size for each of the 4/8/16 VLFs created in one step.

    I created all chunks as 2000MB. I set the initial size to 2000MB and changed that through 4 ALTER DATABASE commands with new size = old size + 2000.

    The VLF with the highest startoffset from the 16 first VLFs (createLSN=0) has a different size than the other 15 VLFs (chunk=2000MB, filesize=15 x 131006464 + 1 x 132046848).

    None of the first 16 VLFs have the same size as the VLFs created afterwards (chunk=2000MB, filesize=131072000)

    Can you explain this?

    Regarding the advice to set up tempdb with multiple log files: will this behaviour affect the balancing mechanism when I use multiple tempdb log files? (I would create each tempdb log file with the same size using the method mentioned above.)

    Thanks
    Udo
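    A minimal sketch of that kind of per-CreateLSN breakdown, assuming the SQL Server 2008 R2 DBCC LOGINFO column layout (SQL Server 2012 and later add a RecoveryUnitId column, so the temp table would need adjusting):

      -- One row per VLF; group by CreateLSN to see each growth event
      CREATE TABLE #loginfo
      (
          FileId      int,
          FileSize    bigint,        -- VLF size in bytes
          StartOffset bigint,
          FSeqNo      int,
          Status      int,
          Parity      int,
          CreateLSN   numeric(25,0)
      );

      INSERT INTO #loginfo
      EXEC ('DBCC LOGINFO');

      SELECT CreateLSN,
             COUNT(*)      AS VLFCount,
             MIN(FileSize) AS MinVLFSizeBytes,
             MAX(FileSize) AS MaxVLFSizeBytes
      FROM #loginfo
      GROUP BY CreateLSN
      ORDER BY CreateLSN;

      DROP TABLE #loginfo;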

    1. Looks like a bit of a rounding error. I’m sure there’s a specific size that they try to maintain internally and since you used a value (2000) which is not on a specific 1024 multiple – they had to break it up. I wouldn’t be worried about that.

      And, yes, same for tempdb.

      Cheers,
      kt

  19. Hi, I understand how the DB engine uses the transaction log, but I still don’t understand why a large number of VLFs would affect backup and insert/update/delete performance… What kind of processing overhead does each VLF require that would affect performance that much?

    1. It’s not something that you would see with individual statements as much as more of an internal/management overhead for all that is log related. The more intense your log operations (replication, backup/restore, CDC, large rollbacks) the more you might feel the effects.

      Sorry there isn’t a much better answer but I do hope that helps!
      kt

  20. Hi Kimberly,

    You mentioned above that the 4GB-multiple bug is fixed in 2012 SP1. Just to clarify, is it now safe to use 8192MB (the full 8GB) instead of 8000MB?

    Thanks,
    Roger

    1. If you’re using 2012 SP1 or higher then yes. You can certainly use 8192 instead of 8000. There isn’t a huge benefit of one over the other. But, I would agree that it’s a lot more mathematically pleasing. ;-) ;-)

      Cheers,
      kt

  21. Hi Kimberly,

    In talking about this with my colleagues, we were thinking that a database in simple recovery that has a rather large log because of a nightly data load would not necessarily benefit from small VLFs; i.e., as long as it is grown to 98GB in one command, who cares that each VLF is large as long as it is not fragmented? Again, this is simple recovery, so I’m not worried about the size for backups… Right?

    Thanks in advance,

    Jared

    1. It’s really more than just backups that’s affected. But, if the database is in simple then I doubt it has some of these other things that might be affected (replication, CDC, etc.).

      So… there, it’s not nearly as big of a deal. But, I’d still be consistent on my creation methods (what if you later decide to change to FULL and/or implement some of these features). I always create logs in 8GB increments (starting at 8 and growing by 8 to the desired level). It’s just a good (and really straightforward) practice. Keep it simple IMO. ;-)

      hth,
      kt

  22. Re: “AND possibly cause you performance problems because it’s now clearing 8GB of log information.”

    Is this a fair statement? I thought that SQL just changed the Status for a VLF from 2 to 0, marking it as available for reuse. It doesn’t actually clear the log with physical writes.

    1. Sorry for the confusion. The problem really comes from having to follow the chain of VLFs and reset the bit. If there are thousands – this can be slow.

      Hope that helps!
      kt

  23. Hi Kimberly,

    Does the “8GB chunks” advice still hold for log sizes around 100GB? Because the number of VLFs is around 200 at that stage when using the 8GB chunk sizing. I’m referring to the 50-VLF criterion, which is sometimes mentioned (including in your “8 Steps to better Transaction Log throughput” blog).

    Cheers!
    Blaž

    1. As stated in the post, it depends on the size of the file (or amount added to the file). And, these also change in 2014. Please review the post for how it works prior to 2014 as well as for the link to Paul’s blog post for the changes in 2014.

      hth,
      k

  24. We have a kind of competition going on to see the highest number of VLFs we can find in our environment.

    The max so far is 270,000. Of course we fixed it. Can anyone beat us?

    1. Actually, I think the current winner is in the millions. But, their reply was on twitter. Not sure I remember the number now…

      But, well done for fixing these!

      Cheers,
      k
