Important change to VLF creation algorithm in SQL Server 2014

Since SQL Server 2014 was released back in April last year, there have been some rumblings about changes to how many VLFs are created when the log is grown or auto-grown (I’ll just say auto-grown from now on, as that’s the most common scenario). I experimented a bit and thought I’d figured out the algorithm change. Turns out I hadn’t. There was a question on the MVP distribution list last week that rekindled the discussion and we collectively figured out that the algorithm was behaving non-deterministically… in other words we didn’t know what it was doing. So I pinged my friends in CSS who investigated the code (thanks Bob Ward and Suresh Kandoth!) and explained the change.

The change is pretty profound, and is aimed at preventing lots of auto-growth from creating huge numbers of VLFs. This is cool because having too many (it depends on the log size, but many thousands is too many) VLFs can cause all manner of performance problems around backups, restores, log clearing, replication, crash recovery, rollbacks, and even regular DML operations.

Up to 2014, the algorithm for how many VLFs you get when you create, grow, or auto-grow the log is based on the size in question:

  • Less than 1 MB: complicated; ignore this case
  • Up to 64 MB: 4 new VLFs, each roughly 1/4 the size of the growth
  • 64 MB to 1 GB: 8 new VLFs, each roughly 1/8 the size of the growth
  • More than 1 GB: 16 new VLFs, each roughly 1/16 the size of the growth

So if you created your log at 1 GB and it auto-grew in chunks of 512 MB to 200 GB, you’d have 8 + ((200 – 1) x 2 x 8) = 3192 VLFs. (8 VLFs from the initial creation, then 200 – 1 = 199 GB of growth at 512 MB per auto-grow = 398 auto-growths, each producing 8 VLFs.)
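That arithmetic can be sanity-checked with a short sketch (Python purely for illustration; the function name and calculation are mine, not anything SQL Server exposes — the real numbers come from DBCC LOGINFO):

```python
def vlfs_added_pre_2014(growth_mb):
    """VLFs created by one log creation/growth under the pre-2014 formula."""
    if growth_mb <= 64:
        return 4
    elif growth_mb <= 1024:
        return 8
    else:
        return 16

# 1 GB initial log, then 512 MB auto-growths up to 200 GB
total = vlfs_added_pre_2014(1024)        # initial creation: 8 VLFs
num_growths = (200 - 1) * 1024 // 512    # 199 GB of growth = 398 auto-growths
total += num_growths * vlfs_added_pre_2014(512)
print(total)                             # 3192
```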

For SQL Server 2014, the algorithm is now:

  • Is the growth size less than 1/8 the size of the current log size?
  • Yes: create 1 new VLF equal to the growth size
  • No: use the formula above

So on SQL Server 2014, if you created your log at 1 GB and it auto-grew in chunks of 512 MB to 200 GB, you’d have:

  • 8 VLFs from the initial log creation
  • All growths until the log reaches 4.5 GB would use the formula, so the growths at 1, 1.5, 2, 2.5, 3, 3.5, and 4 GB would each add 8 VLFs = 7 x 8 = 56 VLFs
  • All growths from 4.5 GB onwards will only create 1 VLF per growth = (200 – 4.5) x 2 = 391 VLFs
  • Total = 391 + 56 + 8 = 455 VLFs

455 is a much more reasonable number of VLFs than 3192, and will be far less of a performance problem.
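Here’s the same sanity check for the 2014 algorithm (again an illustrative Python sketch of the rules described above, with names of my own invention):

```python
def vlfs_added_2014(growth_mb, current_log_mb):
    """VLFs created by one growth under the SQL Server 2014 algorithm."""
    if growth_mb < current_log_mb / 8:
        return 1                 # growth is small relative to the log: 1 VLF
    # otherwise fall back to the old size-banded formula
    if growth_mb <= 64:
        return 4
    elif growth_mb <= 1024:
        return 8
    else:
        return 16

total = 8                        # 1 GB initial creation: 8 VLFs
size_mb = 1024
while size_mb < 200 * 1024:      # 512 MB auto-growths up to 200 GB
    total += vlfs_added_2014(512, size_mb)
    size_mb += 512
print(total)                     # 455
```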

A commenter asked whether compatibility level affects this. No – compatibility level is ignored by the Storage Engine internals.

I think this is an excellent change and I can’t see any drawbacks from it (apart from that it wasn’t publicized when SQL Server 2014 was released). CSS will be doing a comprehensive blog post about this soon, but they were cool with me making people aware of the details of the change ASAP to prevent confusion.

You might think that it could lead to very large VLFs (e.g. you set a 4 GB auto-growth size with a 100 GB log), and it can. But so what? Having very large VLFs is only a problem if they’re created initially and then you try to shrink the log down. At a minimum you can only have two VLFs in the log, so you’d be stuck with two giant VLFs at the start of the log and then smaller ones after you’d grown the log again. That can be a problem that prevents the log being able to wrap around and avoid auto-growth, but that’s not anywhere near as common as having too many VLFs. And that’s NOT a scenario that the new algorithm creates. (As an aside, you can fix that problem by creating a database snapshot and then reverting to it, which deletes the log and creates a 0.5 MB log with two tiny VLFs… it’s a bugfeature that’s been there since 2005, but it breaks your log backup chain when you do it.)

There’s certainly more that can be done in future around VLF management (e.g. fixing the problem I describe immediately above), but this is a giant step in the right direction.


63 thoughts on “Important change to VLF creation algorithm in SQL Server 2014”

  1. Paul:
    Thanks for the research and the write-up.
    Does it work that way for all DBs running in SQL 2014, even if at a lower compatibility level?
    Thanks again.

  2. Well, the better fix would be for SQL Server to periodically sweep through the VLFs and merge ones that are too small (and adjacent) into bigger segments, or split ones that are too large. This would not require data movement if the VLFs in question are currently unused (which is a common situation).

    The fact that customers even have to know about VLFs is, in my mind, unnecessary. All of this seems to be architecturally fixable.

    1. That could work, but only for inactive VLFs. There’s no possibility of a log record having its LSN changed, as the algorithm would only touch inactive VLFs. One thing I haven’t thought through fully is whether those operations would have to be logged to make sure that logs on mirrors and secondary replicas remain byte-for-byte copies of those on principals/primaries.

      But yes, agreed. The current implementation dates from 1997/8 when logs weren’t large.

  3. Great post, Mr. Randal,

    I’ve published a blog post on my own personal blog about VLFs the same day, but I was not aware of this change, so I will surely edit my blog post.

    I confess a very big VLF makes me scared…

    For example: if you have a 200 GB transaction log file and you want its growth to create VLFs the old way, you will need to grow your file by about 25 GB at once (which seems too much IMHO; that’s what scares me).

    Thanks for this great contribution… Awesome information,

    Edvaldo Castro

  4. Thanks Paul! Quick question: are the log files still zeroed through like before (because of the parity bits, if I recall)? If so, I assume there’s no chance that will change anytime soon; otherwise it would be yet another reason to make sure Perform Volume Maintenance Tasks is enabled for the service account.

  5. I wrote the PowerShell script below to calculate the example for SQL 2014 in the article and it gives me 458 VLFs. Where am I wrong?

    $vlf = 0;

    $initial_size = 1GB;
    $max_size = 200GB;
    $increment = 500MB;

    if ($initial_size -le 64MB) {
        $vlf += 4;
    } elseif ($initial_size -le 1GB) {
        $vlf += 8;
    } else {
        $vlf += 16;
    }

    while ($initial_size -lt $max_size) {
        if ($increment -lt ($initial_size * 0.125)) {
            $vlf += 1;
        } elseif ($increment -le 64MB) {
            $vlf += 4;
        } elseif ($increment -le 1GB) {
            $vlf += 8;
        } else {
            $vlf += 16;
        }
        $initial_size += $increment;
    }

    Write-Host $vlf;

  6. Thanks Paul! Question about big VLF: Do you mean that it would better to create a database initially with a small log size (any reasonable value, minimum 512K which gives only 2 VLF), then ALTER DATABASE … MODIFY FILE to expand it?

    1. No. If I was creating, say a 64 GB log, I’d create it as 8GB then expand in 8GB chunks to 64GB to keep the number of VLFs small. That means each VLF will be 0.5 GB, which is a good size.

      1. Thanks for the post! Very helpful. Quick question. Above you said “…I’d create it as 8GB then expand in 8GB chunks to 64GB…”, do you feel creating the log at an initial 8GB and leaving at an autogrowth of 8GB is ok if I were aiming for a 400GB log as well? That would give us 128 0.5GB VLFs for up to 64GB and then 42x 8GB VLFs after that, correct? You said large VLFs are not a problem at the end of the log, but how close to the beginning of the log is too close (just not the first couple of VLFs?) and how large of a VLF is too large? Thanks.

        1. No – because then the 8GB autogrowths will happen during your workload and will likely lead to a pause while the log grows and the new portion is zero initialized. I’d grow it in chunks when creating the log initially.

          Larger VLFs after the first few are fine – just that you can’t easily shrink below 2-3 VLFs so if they’re enormous, that could be a problem if you want to make the log smaller permanently for some reason.
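To make the numbers in this thread concrete, here’s an illustrative Python sketch (my own encoding of the 2014 rules from the post, not anything SQL Server exposes) of the create-at-8GB-then-grow-in-8GB-chunks plan:

```python
def vlfs_added_2014(growth_mb, current_log_mb):
    # 2014 rule: a growth that is small relative to the current log size
    # creates a single VLF; otherwise the old size-banded formula applies
    if current_log_mb > 0 and growth_mb < current_log_mb / 8:
        return 1
    return 4 if growth_mb <= 64 else 8 if growth_mb <= 1024 else 16

# create at 8 GB, then grow in 8 GB chunks to 64 GB
total = vlfs_added_2014(8 * 1024, 0)    # initial creation: 16 VLFs of 0.5 GB
size_mb = 8 * 1024
while size_mb < 64 * 1024:
    # 8 GB is never less than 1/8 of a log of 56 GB or less,
    # so each growth also uses the formula: 16 VLFs of 0.5 GB
    total += vlfs_added_2014(8 * 1024, size_mb)
    size_mb += 8 * 1024
print(total)                            # 128 VLFs, each 0.5 GB
```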

  7. I rebuilt a log file:

    Dbase   GB   Num_Of_VLFs  Notes
    DbName  248  562          Original size
    DbName  4    2            After truncate and setting 4 GB
    DbName  8    34           16-VLF increments for each 4 GB modification
    DbName  12   50
    DbName  16   66
    DbName  20   82
    DbName  24   98
    DbName  28   114
    DbName  32   130
    DbName  36   146
    DbName  40   147          1-VLF increments for each 4 GB modification
    DbName  44   148
    DbName  48   149
    DbName  248  199

    1. Whoops – sorry for the early send.

      I did not expect this. At the very least, it’s not a fixed number, since others have a different count. But I think in general I would do 8 GB as Paul suggested above, since my target is 248 GB.

      We are seeing queuing at the log drive, which is dedicated to the log (it’s AWS ephemeral storage). My options are limited, so I’m hoping this helps.

  8. I’m confused. You report the algorithm as “when the growth size is less than 1/8 the current log size, create 1 VLF; otherwise use the formula”. Your example does the opposite: “All growths up to the log being 4.5 GB use the formula”.

    Is the algorithm stated incorrectly? Or am I just reading it wrong?

    1. You’re reading it wrong. In my example, initial size is 1GB and growth is 0.5 GB. So the growth size doesn’t become less than 1/8 the current log size until the log is more than 4GB, i.e. until the autogrowth that occurs when the log is 4.5GB in size.

  9. As far as VLFs on TEMPDB go, I am thinking that those should be smaller due to the constant I/O on this database. The plan for me is to have the VLFs to be 128MB each, as opposed to the OLTP databases where I want VLFs to be 512MB, as those logs tend to be much larger. Is there anything that I am missing in this assumption?

    Thank you,

  10. Happy to see model db data and log in 2016 are default/set to grow in 64 MB chunks instead of %. Nice round numbers, right at the upper limit that creates 4 VLFs, and happens to be double the Hitachi AMS DP growth increment (as if this was 2012 :) ) Curious which cloudy ☁ blobs or Storage Spaces align on similar numbers. Best thing imho is the ignored/unmanaged DB file sizes will line up nicely.

  11. We’ve recently run into some nasty issues with large VLFs (>4GB) in SQL 2016. Tran log backups were crashing the SQLService and full backups were completing but were corrupt. They’re targeting a fix in March 2017.

  12. Hi Paul,

    I have read that a large number of VLFs can slow down database start-up and also log backup and restore operations, but I haven’t found anywhere that explains WHY. I would really appreciate it if you could help me understand the mechanism by which a large number of VLFs causes all those problems. Also, what is the effect of having very few VLFs, like maybe 19 VLFs for a 42 GB log file?
    Any useful links would also be a help…

    Thanks in advance…

    1. I don’t know of anywhere online that explains it in detail. It’s because of the way SQL Server has to search for a log record. The more VLFs there are, the longer each search takes. The only problem with having so few VLFs is if the first two are very large and you want to shrink the log to be small, you can’t easily shrink it below the size of the first two VLFs.

  13. Hey Paul! Wondering if you could update the VLF info for SQL Server 2016. We heard that it was different. Thanks! Question from Chicago SQL User Group members. :)

      1. It seems to me from some testing that, in SQL Server 2016, it creates one VLF every time the DB grows, as opposed to creating, for example, 4 VLFs when it grows by under 64 MB. Let me know if you’re seeing a difference! Thanks.

  14. Hey Paul,

    For the TempDB log file, do you see an issue with an initial size of 325 GB? Would those VLFs be too large? Or would I be better off making this a 2-step process: (1) creating the tempdb log with an initial size of 8 GB, and (2) following up with many expansions to reach 325 GB?

    I don’t foresee a need to shrink tempdb log file and we are on SQL 2016.

    Thanks a lot.

  15. Paul!

    I ran DBCC SHRINKFILE (2, 0) on a database and then ran DBCC LOGINFO.
    There are two VLFs, each of which is one gigabyte.
    How can I minimize the size of each one (to 1024 KB)?
    And then set the size to 8 gigabytes (8000 MB).

    1. The only way is to create a database snapshot and then revert from it, which creates two 256KB VLFs, and breaks your log backup chain. Don’t do that – leave them alone at 1GB.

  16. Excellent post. Thanks!!

    One question: I have many VLFs and I must shrink the log down.
    What is the recommended time to do that? Before a full backup? Is there a chance the shrink will break my log backup chain?

  17. Paul,
    Thank you. I’ve learnt a lot from your blog. I have a couple of questions:
    – Do VLFs have the same effect when the database is in simple vs. full recovery?
    – Would big VLFs cause log backups to take longer?

  18. These posts, in conjunction with your Logging, Recovery, and Transaction Log Pluralsight course, are just amazing resources (slowly catching up…). Very eye-opening as I disaster-recover my team’s DBA position, though. I can’t seem to find it in the documentation: did the issue with adjusting the transaction log using snapshot backups get corrected in any later version, or is that still a safety net for blog posters who can’t get their VLFs the right size? :)

    I’m not a fan of having to remember that refactor (regrowing) and have been using snapshots to test for a while without being versed enough to have noticed (usually in simple recovery in tests, so no usual concern).

    Thank you very much for these lessons (as well as for writing checkdb/emergency mode repair) and general knowledge share. Excellent articles from your wife as well. You’re quickly becoming one of my professional heroes.
