Important change to VLF creation algorithm in SQL Server 2014

Since SQL Server 2014 was released back in April last year, there have been some rumblings about changes to how many VLFs are created when the log is grown or auto-grown (I’ll just say auto-grown from now on, as that’s the most common scenario). I experimented a bit and thought I’d figured out the algorithm change. Turns out I hadn’t. There was a question on the MVP distribution list last week that rekindled the discussion and we collectively figured out that the algorithm was behaving non-deterministically… in other words, we didn’t know what it was doing. So I pinged my friends in CSS who investigated the code (thanks Bob Ward and Suresh Kandoth!) and explained the change.

The change is pretty profound, and is aimed at preventing lots of auto-growth from creating huge numbers of VLFs. This is cool because having too many (it depends on the log size, but many thousands is too many) VLFs can cause all manner of performance problems around backups, restores, log clearing, replication, crash recovery, rollbacks, and even regular DML operations.

Up to 2014, the algorithm for how many VLFs you get when you create, grow, or auto-grow the log is based on the size in question:

  • Less than 1 MB, complicated, ignore this case.
  • Up to 64 MB: 4 new VLFs, each roughly 1/4 the size of the growth
  • 64 MB to 1 GB: 8 new VLFs, each roughly 1/8 the size of the growth
  • More than 1 GB: 16 new VLFs, each roughly 1/16 the size of the growth

So if you created your log at 1 GB and it auto-grew in chunks of 512 MB to 200 GB, you’d have 8 + ((200 – 1) x 2 x 8) = 3192 VLFs. (8 VLFs from the initial creation, then 200 – 1 = 199 GB of growth at 512 MB per auto-grow = 398 auto-growths, each producing 8 VLFs.)
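
If you want to sanity-check that arithmetic, here’s a minimal PowerShell sketch of the pre-2014 per-growth buckets – it just mirrors the formula above, it’s not the actual engine code:

    # Sketch of the pre-2014 rule: VLFs added per creation/growth, by size
    function Get-VlfsPerGrowth ([long]$growthBytes) {
        if     ($growthBytes -le 64MB) { 4 }
        elseif ($growthBytes -le 1GB)  { 8 }
        else                           { 16 }
    }

    # 1 GB initial log, then 512 MB auto-growths up to 200 GB
    $vlfs    = Get-VlfsPerGrowth 1GB          # 8 VLFs at creation
    $growths = (200GB - 1GB) / 512MB          # 398 auto-growths
    $vlfs   += $growths * (Get-VlfsPerGrowth 512MB)
    Write-Host $vlfs                          # 3192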

For SQL Server 2014, the algorithm is now:

  • Is the growth size less than 1/8 the size of the current log size?
  • Yes: create 1 new VLF equal to the growth size
  • No: use the formula above

So on SQL Server 2014, if you created your log at 1 GB and it auto-grew in chunks of 512 MB to 200 GB, you’d have:

  • 8 VLFs from the initial log creation
  • All growths up to the log reaching 4.5 GB would use the formula, so growths at 1, 1.5, 2, 2.5, 3, 3.5, and 4 GB would each add 8 VLFs = 56 VLFs
  • All growths from 4.5 GB onwards will only create 1 VLF per growth = (200 – 4.5) x 2 = 391 VLFs
  • Total = 391 + 56 + 8 = 455 VLFs

455 is a much more reasonable number of VLFs than 3192, and will be far less of a performance problem.
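
Here’s a similar rough sketch of the 2014 behavior for the same example, again just illustrating the logic above rather than the real code:

    # Sketch of the 2014 rule applied to the same example: 1 GB log, 512 MB growths, 200 GB target
    $logSize = 1GB
    $growth  = 512MB
    $vlfs    = 8                      # from the initial 1 GB log creation

    while ($logSize -lt 200GB) {
        if ($growth -lt $logSize / 8) {
            $vlfs += 1                # growth is less than 1/8 of the current log size: 1 VLF
        }
        else {
            $vlfs += 8                # otherwise the old formula: a 512 MB growth adds 8 VLFs
        }
        $logSize += $growth
    }
    Write-Host $vlfs                  # 455

(Plug in a different growth size to see how it changes the count.)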

A commenter asked whether the compatibility level affects this. No – compatibility level is ignored by the Storage Engine internals.

I think this is an excellent change and I can’t see any drawbacks from it (apart from the fact that it wasn’t publicized when SQL Server 2014 was released). CSS will be doing a comprehensive blog post about this soon, but they were cool with me making people aware of the details of the change ASAP to prevent confusion.

You might think that it could lead to very large VLFs (e.g. you set a 4 GB auto-growth size with a 100 GB log), and it can. But so what? Having very large VLFs is only a problem if they’re created initially and then you try to shrink the log down. At a minimum you can only have two VLFs in the log, so you’d be stuck with two giant VLFs at the start of the log and then smaller ones after you’d grown the log again. That can be a problem that prevents the log being able to wrap around and avoid auto-growth, but that’s not anywhere near as common as having too many VLFs. And that’s NOT a scenario that the new algorithm creates. (As an aside, you can fix that problem by creating a database snapshot and then reverting to it, which deletes the log and creates a 0.5 MB log with two tiny VLFs… it’s a bugfeature that’s been there since 2005, but it breaks your log backup chain when you do it.)

There’s certainly more that can be done in future around VLF management (e.g. fixing the problem I describe immediately above), but this is a giant step in the right direction.

Enjoy!

69 thoughts on “Important change to VLF creation algorithm in SQL Server 2014”

  1. Paul:
    Thanks for the research and the write-up.
    Does it work that way for all DBs running in SQL 2014, even if at a lower compatibility level?
    Thanks again.

  2. Well, the better fix would be for SQL Server to periodically sweep through the VLFs and merge adjacent too-small ones into bigger segments, or split too-large ones. This would not require data movement if the VLFs in question are currently unused (which is a common situation).

    The fact that customers even have to know about VLFs is, in my mind, unnecessary. All of this seems to be architecturally fixable.

    1. That could work, but only for inactive VLFs. There’s no possibility of a log record having its LSN changed, as the algorithm would only touch inactive VLFs. One thing I haven’t thought through fully is whether those operations would have to be logged to make sure that logs on mirrors and secondary replicas remain byte-for-byte copies of those on principals/primaries.

      But yes, agreed. The current implementation dates from 1997/8 when logs weren’t large.

  3. Great news!
    Does the initial VLF count with new DB creation still follow the old algorithm as above?

  4. Great post, Mr. Randal,

    I’ve published a blog post (http://edvaldocastro.com/vlf_control) on my own personal blog about VLFs on the same day, but I was not aware of this change, so I will surely edit my blog post.

    I confess a really, really big VLF makes me scared…

    For example: if you have a 200GB transaction log file and you want its growth to create VLFs the old way, you will need to grow your file by about 25GB at once (which seems like too much IMHO, and that’s what makes me scared).

    Again,

    Thanks for this great contribution… Awesome information,

    Regards,

    Edvaldo Castro

  5. Thanks Paul! Quick question: are the log files still zeroed out like before (the parity bits, if I recall)? If so, I assume there’s no chance that will change anytime soon; otherwise it would be yet another reason to make sure Perform Volume Maintenance Tasks is enabled for the service account.

  6. I wrote the PowerShell script below to calculate the example for SQL 2014 in the article and it gives me 458 VLFs. Where am I wrong?


    # Count VLFs for the article's SQL Server 2014 example
    # NB: this uses 500MB growths; the article's example uses 512MB, which is why this gives 458 rather than 455
    $vlf = 0

    $initial_size = 1GB
    $max_size     = 200GB
    $increment    = 500MB

    # VLFs from the initial log creation
    if ($initial_size -le 64MB) {
        $vlf += 4
    }
    elseif ($initial_size -le 1GB) {
        $vlf += 8
    }
    else {
        $vlf += 16
    }

    # Simulate each auto-growth
    while ($initial_size -lt $max_size) {
        if ($increment -lt ($initial_size * 0.125)) {
            # growth is less than 1/8 of the current log size: 1 VLF
            $vlf += 1
        }
        else {
            # otherwise the old formula applies
            if ($increment -le 64MB) {
                $vlf += 4
            }
            elseif ($increment -le 1GB) {
                $vlf += 8
            }
            else {
                $vlf += 16
            }
        }
        $initial_size += $increment
    }

    Write-Host $vlf

  7. Thanks Paul! Question about big VLFs: do you mean that it would be better to create a database initially with a small log size (any reasonable value, minimum 512KB, which gives only 2 VLFs), then ALTER DATABASE … MODIFY FILE to expand it?

    1. No. If I was creating, say, a 64 GB log, I’d create it at 8 GB then expand in 8 GB chunks to 64 GB to keep the number of VLFs small. That means each VLF will be 0.5 GB, which is a good size.
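
      Something like this rough PowerShell sketch would generate the grow statements (the database and logical log file names are hypothetical – adjust to suit):

      $db  = 'MyDB'        # hypothetical database name
      $log = 'MyDB_log'    # hypothetical logical log file name
      # assumes the log was created at 8 GB; grow it in 8 GB steps up to 64 GB
      for ($sizeMB = 16384; $sizeMB -le 65536; $sizeMB += 8192) {
          "ALTER DATABASE [$db] MODIFY FILE (NAME = N'$log', SIZE = ${sizeMB}MB);"
      }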

      1. Thanks for the post! Very helpful. Quick question. Above you said “…I’d create it as 8GB then expand in 8GB chunks to 64GB…”, do you feel creating the log at an initial 8GB and leaving at an autogrowth of 8GB is ok if I were aiming for a 400GB log as well? That would give us 128 0.5GB VLFs for up to 64GB and then 42x 8GB VLFs after that, correct? You said large VLFs are not a problem at the end of the log, but how close to the beginning of the log is too close (just not the first couple of VLFs?) and how large of a VLF is too large? Thanks.

        1. No – because then the 8GB autogrowths will happen during your workload and will likely lead to a pause while the log grows and the new portion is zero initialized. I’d grow it in chunks when creating the log initially.

          Larger VLFs after the first few are fine – just that you can’t easily shrink below 2-3 VLFs so if they’re enormous, that could be a problem if you want to make the log smaller permanently for some reason.

  8. Looking at your algorithm, I wanted to know what the 200 is indicating so I can calculate on my end as well.

  9. I rebuilt a log file:

    Dbase    GB    Num_Of_VLFs   Notes
    DbName   248   562           Original size
    DbName   4     2             After truncate and defining 4GB
    DbName   8     34            16-VLF increments for each 4GB modification
    DbName   12    50
    DbName   16    66
    DbName   20    82
    DbName   24    98
    DbName   28    114
    DbName   32    130
    DbName   36    146
    DbName   40    147           1-VLF increments for each 4GB modification
    DbName   44    148
    DbName   48    149
    DbName   248   199

    1. Whoops – sorry for early send.

      I did not expect this. At the very least, it’s not a fixed number, since others have a different count. But I think in general I would do 8GB growths as Paul suggested above, since my target is 248GB.

      We are seeing queuing at the log drive, which is dedicated to the log (it’s AWS ephemeral). My options are limited, so hoping this helps.

  10. I’m confused. You report the algorithm as “When growth size is less than 1/8 current log size, create 1 VLF, otherwise use formula”. Your example does the opposite: “All growths up to log being 4.5 GB use the formula”.

    Is the algorithm stated incorrectly, or am I just reading it wrong?

    1. You’re reading it wrong. In my example, initial size is 1GB and growth is 0.5 GB. So the growth size doesn’t become less than 1/8 the current log size until the log is more than 4GB, i.e. until the autogrowth that occurs when the log is 4.5GB in size.

  11. As far as VLFs on TEMPDB go, I am thinking that those should be smaller due to the constant I/O on this database. The plan for me is to have the VLFs to be 128MB each, as opposed to the OLTP databases where I want VLFs to be 512MB, as those logs tend to be much larger. Is there anything that I am missing in this assumption?

    Thank you,
    Marc

  12. Happy to see model db data and log in 2016 are default/set to grow in 64 MB chunks instead of %. Nice round numbers, right at the upper limit that creates 4 VLFs, and happens to be double the Hitachi AMS DP growth increment (as if this was 2012 :) ) Curious which cloudy ☁ blobs or Storage Spaces align on similar numbers. Best thing imho is the ignored/unmanaged DB file sizes will line up nicely.

  13. We’ve recently run into some nasty issues with large VLFs (>4GB) in SQL 2016. Tran log backups were crashing the SQLService and full backups were completing but were corrupt. They’re targeting a fix in March 2017.

  14. Hi Paul,

    I have read that a large number of VLFs can slow down database start-up and also log backup and restore operations, but I couldn’t find anywhere WHY. I would really appreciate it if you could help me understand the mechanism that causes all the said problems when there are a large number of VLFs. Also, how does it affect things to have very few VLFs, like maybe 19 VLFs for a 42GB log file?
    Any useful links would also be a help…

    Thanks in advance…

    1. I don’t know of anywhere online that explains it in detail. It’s because of the way SQL Server has to search for a log record. The more VLFs there are, the longer each search takes. The only problem with having so few VLFs is if the first two are very large and you want to shrink the log to be small, you can’t easily shrink it below the size of the first two VLFs.

  15. Hey Paul! Wondering if you could update the VLF info for SQL Server 2016. We heard that it was different. Thanks! Question from Chicago SQL User Group members. :)

      1. It seems to me from some testing that, in SQL Server 2016, it creates one VLF every time the DB size grows, as opposed to creating, for example, 4 VLFs when it grows at a size of under 64 MBs. Let me know if you’re seeing a difference! Thanks.

  16. Hey Paul,

    For the TempDB log file, do you see an issue with an initial size of 325GB? Would those VLFs be too large? Or would I be better off making this a 2-step process – (1) creating the tempdb log with an initial size of 8GB; and (2) following up with many expansions to reach 325GB?

    I don’t foresee a need to shrink tempdb log file and we are on SQL 2016.

    Thanks a lot.

  17. Paul!

    I ran DBCC SHRINKFILE (2, 0) on a database and then ran DBCC LOGINFO.
    There are two VLFs, each of which is one gigabyte.
    How can I minimize the size of each one (to 1024 KB),
    and then set the size to 8 gigabytes (8000 MB)?

    1. The only way is to create a database snapshot and then revert from it, which creates two 256KB VLFs, and breaks your log backup chain. Don’t do that – leave them alone at 1GB.

    1. I don’t know – you need to figure that out for yourself based on the speed of your disks – anywhere from 1/4 GB to 2 GB. *There is no right answer*

  18. Excellent post. Thanks!!

    One question: I have many VLFs and I must shrink the log down.
    What is the recommended time to do that? Before a full backup? Is there a chance the shrink breaks my log backup chain?

  19. Paul,
    Thank you. I’ve learnt a lot from your blog. I have a couple of questions:
    – Do VLFs have the same effect when the database is in Simple vs Full recovery?
    – Would big VLFs cause log backups to take longer?
    Thanks
    Eric

  20. These posts, in conjunction with your Logging, Recovery, and Transaction Log Pluralsight course, are just amazing resources (slowly catching up..). Very eye-opening as I disaster-recover my team’s DBA position, though. I can’t seem to find it in the documentation: did the issue with adjusting the transaction log using snapshot backups get corrected in any future version, or is that still a safety net for blog posters that can’t get their VLFs the right size? :)

    I’m not a fan of having to remember that refactor (regrowing) and have been using snapshots to test for a while without being versed enough to have noticed (usually in simple recovery in tests, so no usual concern).

    Thank you very much for these lessons (as well as for writing checkdb/emergency mode repair) and general knowledge share. Excellent articles from your wife as well. You’re quickly becoming one of my professional heroes.

  21. Hi Paul.

    A couple of years ago I read in a blog post (not sure if it was you or Kimberly who wrote it) that for big log files it was best to grow in increments of 8GB. Do you still recommend this for SQL Server 2019? I was reading the statement below in Microsoft’s documentation on managing the size of the transaction log file:
    “As a best practice, do not set the FILEGROWTH option value above 1,024 MB for transaction logs.”

    Thank you for your help.

    1. Hey Gert – that recommendation was for manual growths, and before they made the changes to the algorithm. Microsoft’s recommendation is for autogrowth, and it’s so that the zero-initialization that takes place doesn’t cause any noticeable workload pause. Thanks

  22. Hi Paul,
    first of all, thanks for all the knowledge sharing!

    I read your article “Top Tips for Effective SQL Server Database Maintenance” in Microsoft TechNet Magazine, where you wrote that it is possible to “enable instant file initialization, which skips zero-initializing the files and hence makes growth and auto-growth virtually instantaneous”.

    Do you recommend enabling instant file initialization?
    Can this have any bad side effects?

    regards,
    Alex

  23. Hello, thanks for the article. I wonder if there is a magic number of VLFs at which one should consider reducing the amount? Or is it that you should always compare the time after a restart to the time of the entries in the database log file, e.g. based on the wait
