BACKUP LOG WITH NO_LOG – use, abuse, and undocumented trace flags to stop it

This has come up several times over the last few days, and is something that Kimberly and I discuss (and deplore) when we lecture – the use of BACKUP LOG WITH NO_LOG (or TRUNCATE_ONLY – they're synonymous) to allow log truncation.

How is it used?

The common use is when the transaction log grows to be inordinately large on a database that's in FULL (or BULK_LOGGED) recovery mode. This happens after the first full database backup has been taken, which switches the log into a mode where it won't truncate until it's been backed up. In these circumstances, if you don't take transaction log backups, the log will continue to grow until it runs out of disk space.
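A quick way to see which recovery mode each database is in, and what (if anything) is currently preventing log truncation, is to query sys.databases (available from SQL Server 2005 onwards):

SELECT [name], [recovery_model_desc], [log_reuse_wait_desc]
FROM sys.databases;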

The alternative to taking a real log backup is to issue a BACKUP LOG dbname WITH NO_LOG command. Let’s see this in action – note that I’m not advocating its use but I want to show you what it does. First off I’ll create some transaction log after taking a full database backup:

USE [nologtest];

BACKUP DATABASE [nologtest] TO DISK = N'C:\sqlskills\nologtest.bck' WITH INIT;

CREATE TABLE [t1] ([c1] INT IDENTITY, [c2] CHAR (8000) DEFAULT 'a');
GO

INSERT INTO [t1] DEFAULT VALUES;
GO 10000

How large is the transaction log now?

SELECT [name], [size] FROM sys.database_files;
name            size
--------------- --------
nologtest       90264
nologtest_log   104128

A little bit larger than the data file (which is what I’d expect after the operation I just performed) and they’re both around 100MB. I’ll pretend that I haven’t been paying attention to the size of the database and log and now I don’t have any space to perform a backup. Can I just shrink the log?

DBCC SHRINKFILE (nologtest_log, 2);
Cannot shrink log file 2 (nologtest_log) because all logical log files are in use.
DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
11     2           102944      63          102944      56

No. What’s stopping me (well DBCC SHRINKFILE just told me, but let’s double-check)?

SELECT [log_reuse_wait_desc] FROM sys.databases WHERE [name] = N'nologtest';
log_reuse_wait_desc
-------------------
LOG_BACKUP

Ok – so I can’t perform a backup so I’ll use BACKUP LOG WITH NO_LOG:


BACKUP LOG [nologtest] WITH NO_LOG;

SELECT [log_reuse_wait_desc] FROM sys.databases WHERE [name] = N'nologtest';
log_reuse_wait_desc
-------------------
NOTHING

Now it looks like I can do the shrink:

DBCC SHRINKFILE (nologtest_log, 2);
DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
11     2           256         63          256         56


Hold on, are you sure?

Why is it bad?

Do you realize what just happened? We discarded the contents of the transaction log since the last full/differential/log backup. That means that until the next full or differential backup completes, all work since the last backup completed will be lost if the database is destroyed. So if your business relies on not losing data/work in the event of a disaster – then what I just did is sacrilege!

The whole point of FULL (and BULK_LOGGED) recovery modes is to preserve the transaction log so that the database can be recovered right up to the point of a failure, or to do point-in-time restores. Using BACKUP LOG WITH NO_LOG negates all of that. (For completeness, note that you can’t do a point-in-time restore to any time in a transaction log backup containing a bulk-logged transaction).
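For illustration, here's roughly what a point-in-time restore looks like – exactly the kind of operation that becomes impossible once the log contents have been thrown away. The backup file names and STOPAT time below are just examples:

RESTORE DATABASE [nologtest] FROM DISK = N'C:\sqlskills\nologtest.bck'
    WITH NORECOVERY;
RESTORE LOG [nologtest] FROM DISK = N'C:\sqlskills\nologtest_log.bck'
    WITH STOPAT = N'2009-02-05 14:30:00', RECOVERY;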

What are the alternatives?

If you don't want the FULL recovery mode behavior, then don't use FULL recovery mode – switch to SIMPLE mode. But be aware that you lose the ability to restore to any point in time between your full or differential backups.
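Switching recovery modes is a one-line operation (using my example database):

ALTER DATABASE [nologtest] SET RECOVERY SIMPLE;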

If you want the FULL recovery mode behavior, but don’t want to run out of log space – then the answer is again simple (pun intended :-), take log backups! If your log is growing too quickly, take more frequent log backups! If you don’t have enough disk space to store the log backups, talk to your management about the options – buying more disks and continuing to use the FULL recovery options, or being forced to move to SIMPLE recovery mode.
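A real log backup that allows truncation is just as easy as the NO_LOG version – the file path here is an example:

BACKUP LOG [nologtest] TO DISK = N'C:\sqlskills\nologtest_log.bck';

Schedule that in a SQL Agent job at whatever frequency keeps the log a sensible size.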

One thing to bear in mind – you actually have to monitor the size of your log to tell whether it's growing. That's what gets people into trouble in the first place – a combination of:

  • Not knowing that the database is in FULL recovery mode
  • Not tracking the size of the log
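An easy way to track log size and usage across all databases on an instance is:

DBCC SQLPERF (LOGSPACE);

This reports the log file size (in MB) and the percentage currently in use for every database.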

The problem comes for those people who just use an app and have no idea that the developer put the database (maybe for a web application log) into FULL recovery mode, or even that they have SQL Server on their system. For these people there is no good answer unfortunately apart from educating developers.

How to prevent it being used

If you're a sysadmin who wants to stop your database owners and backup operators from using the NO_LOG option, there is a way to do it. Trace flag 3231 in SQL Server 2000 and SQL Server 2005 will turn the NO_LOG and TRUNCATE_ONLY options into no-ops in FULL/BULK_LOGGED recovery mode, and will clear the log in SIMPLE recovery mode. Trace flag 3031 in SQL Server 2005 turns them into checkpoints in all recovery modes. These trace flags are undocumented but they're allowed to be publicized.
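You can enable a trace flag for the whole instance either dynamically or as a startup parameter – for example:

-- Enable trace flag 3231 instance-wide until the next restart
DBCC TRACEON (3231, -1);

To make it permanent across restarts, add -T3231 to the SQL Server startup parameters instead.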

This way no-one can truncate the log unless you allow them to, and when they come to you to complain, you can educate them!

35 thoughts on “BACKUP LOG WITH NO_LOG – use, abuse, and undocumented trace flags to stop it”

  1. Paul,
    When the log is full in FULL recovery mode, we should have the option to take at least a log backup without changing the recovery mode. Changing the recovery mode breaks DR and log shipping.
    Any comments?


  2. Of course – and you always have that option. My point was that if you never take log backups at all, there’s no point being in FULL recovery mode and risking your transaction log filling up – you may as well change to SIMPLE recovery mode.


  3. Hi,

    Somewhat bemused by this article – but not the article itself.

    Our DBAs (I’m a developer) have always been reluctant to allow us to use BULK LOGGED recovery, on the premise that it doesn’t allow point in time.

    Which is right? I’ll admit to never having played on a server which I can do such things with.


  4. Hi Graham,

    There’s no right answer – it all depends (my favorite answer :-) Using BULK_LOGGED doesn’t allow point in time to any time within a transaction log backup containing a bulk-logged transaction, but it does allow large bulk operations to be performed without causing the transaction log to grow inordinately large (although transaction log *backups* will still be roughly the same size as if FULL recovery mode had been used). It all comes down to whether the data recovery service-level agreements your DBAs must support include point in time recovery to *any* point in time.


  5. Hi,

    Wonderful article. I have used the TRUNCATE_ONLY and NO_LOG options a lot of times when there was no option left but to dump the log.

    Amazing to find out that there is trace flag that can be used. Thanks much!!!!

  6. backup log with no_log is a very important tool

    I have a lot of small to medium size companies that run software packages with SQL Server databases but do not have an on-site DBA, and most do not have a full-time server/network administrator. So who exactly is going to monitor these log files? That simply is not the real world.

    I have these companies do a nightly full database backup because it’s simple to implement. (I’ve found that training semi-technical people on incremental backups and recovery is just too complicated.) Once a week, the transaction logs get dumped immediately after the full database backups.

    Why not do simple? Because if there was an oops during the day (somebody deleted all the manufacturing bill of materials by accident), you need full recovery mode so that there is a way to restore to a point in time. That’s the level of protection that small companies need.

    Buying more servers, drives, etc. is easy to say in articles but again, that is not the real world. The money just isn’t there after they pay Microsoft for software assurance and all the other software vendors for annual maintenance, etc.

    I sincerely hope that Microsoft reconsiders because it is a valuable tool in certain situations.


  7. Hi Joe,

    I respectfully disagree with you – there’s no need for NO_LOG in the situation you describe. The alternative is to take a transaction log backup just before the daily full backup, then when the full backup completes, delete the transaction log backup. It achieves the same thing without sacrificing the ability to recover during the day. That’s no more difficult to implement than a full database backup, right?
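    In T-SQL that sequence might look something like this (the database name and paths are just examples):

    -- Capture the day's transactions just before the nightly full backup
    BACKUP LOG [MyDB] TO DISK = N'D:\backups\MyDB_log_final.bck' WITH INIT;

    -- Take the nightly full backup
    BACKUP DATABASE [MyDB] TO DISK = N'D:\backups\MyDB_full.bck' WITH INIT;

    -- Once the full backup completes, the log backup file can be deleted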


  8. backup log with no_log
    (I ran through your example; I was looking for the physical backup file of the log but can't find it.)
    Does this command not actually do a physical backup?
    Given the name, I assume the answer is no.

    So if the system was to crash i could only go back to the last full backup.

  9. Hi Tracey – that’s correct, and the whole point – all a NO_LOG backup does is truncate the log – nothing’s actually backed up.

  10. Hi Paul,

    Do you think the following DB maintenance plan can help the DB performance? (DB Size = 100GB)

    Step 1: Backup DB (Full Backup)
    Step 2: Maintain Index (rebuild index of the entire DB)
    Step 3: Backup Log (BACKUP LOG WITH NO_LOG)

    Remarks: We go through the above steps 2 times a week.

  11. Hi dMark,

    No – this seems a little confused. At the end of the sequence you’re throwing away the log file – which leads me to believe you don’t want point-in-time recovery capabilities. If not, you should switch to the SIMPLE recovery model so the log doesn’t grow. If you DO want point-in-time recovery capabilities, start taking periodic transaction log backups so the log doesn’t grow.

    Also, you shouldn’t rebuild all the indexes – only those that have fragmentation that’s affecting workload performance. See for details.

    Send me mail if you have any more questions.


    1. I disagree… There is one situation where even if you do not need point-in-time recovery you must keep the FULL recovery model. When? But of course when you have Replication…!!! In this situation, when the log file is growing and you cannot use the SIMPLE recovery model to stop it, periodic truncation of the log is NECESSARY…

  12. Hi Paul :

    Thanks for your article – it really cleared up a few things for me.
    I have a 20 GB database running on SQL 2005 in FULL mode and I do daily:
    step 1: backup database
    step 2: backup log files – DBCC SHRINKFILE (N'PROD_log', 0)
    step 3: shrink database

    but the log file (only one log file) keeps growing, and there are no connections to this database when I do the shrink (tried manually). The message I am getting is:
    "Cannot shrink log file 2 (Prod_log) because all logical log files are in use"

    Any ideas? Much appreciated.


  13. Hey Ji,

    Glad it was helpful. Ok – two things: one – don’t shrink the database (not sure whether you’re shrinking the database or the transaction log). You shouldn’t be doing either on a daily basis.

    To find out what's got your log tied up, look at the log_reuse_wait_desc column in sys.databases – that should describe what's holding up log truncation.


  14. Hi Paul,

    it might seem a little strange at first but I came here in search for answers that came to me during the setup of a Database Mirroring environment.

    Some days back it was decided here that DR was required for our production databases. I choose Database Mirroring because at first it seemed easy to setup and easy to manage.

    We had to switch from SIMPLE to FULL recovery model to be able to use Database Mirroring. And this is what brings me here. The transaction log files are growing faster than I can do backups of them (it’s not that bad – but they are just growing too fast).

    While taking backups on the principal machine is fairly simple, I have no answer on how to proceed on the mirror. Since the database is constantly "In Recovery" it is not possible to take backups. I'm close to giving up Database Mirroring because of this. My last hope is that I can suspend the mirroring session, take a NO_LOG backup, and resume it (hoping that enough information in the mirror's log survives to catch up with the principal again). What do you think of this?


  15. Hi Shrew,

    The log file on the mirror should be exactly the same as on the principal – there’s no way to have the mirror one be smaller.

    The solution that you suggest won’t work.


  16. Hi Paul,

    You are totally right. The transaction logs on mirror and principal are the same. The only way to alter them is through the principal. That’s actually great! When I find a way to shrink the logs on the principal the mirror will just fall in line!

    Obviously I can’t use NO_LOG. (It’s just not allowed for Database Mirroring). Aha! I didn’t know that until I tried.
    This leaves me with only one option: log backups.
    But they don’t shrink my log file.
    – There are no open transactions
    – There is lots of space available in the log file
    – log_reuse_wait_desc is waiting for NOTHING

    I made backups of the log repeatedly but the log will just grow.

    So I decided to call DBCC SHRINKFILE
    – log_reuse_wait_desc is still waiting for NOTHING (before and after)
    – SHRINKFILE will print: Cannot shrink log file 2 (*******_log) because all logical log files are in use.

    Lastly, I tried calling DBCC SHRINKFILE over and over again, and suddenly the log file shrank.

    It is obvious to me that something was blocking the shrink operation and that neither BACKUP nor SHRINKFILE can be used to handle this in the background automatically.

    So what is happening and how can it be handled with grace?

    Thanks in advance

  17. Sometimes you need to do a log backup and a shrinkfile a couple of times to make the log wrap around to the start of the file again – only when the portion of the log at the end of the log file is not in use can the file be shrunk.


    PS I’ve just written a feature article on logging and recovery that explains this for the February 2009 issue of TechNet Magazine – also see ‘Managing the Transaction Log’ in Books Online.
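    In T-SQL, that cycle might look something like this (the database name and path are just examples):

    -- Repeat this pair until the active log wraps to the start of the file
    BACKUP LOG [MyDB] TO DISK = N'D:\backups\MyDB_log.bck';
    DBCC SHRINKFILE (MyDB_log, 2);

    -- And check what, if anything, is still holding up truncation
    SELECT [log_reuse_wait_desc] FROM sys.databases WHERE [name] = N'MyDB';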

  18. Regarding "If you’re a sysadmin who wants to stop your database owners and backup operators from using the NO_LOG option" Did some testing and found that the trace flag makes NO_LOG a no-op even when run by a sysadmin.

    Knowing about this capability would have helped a few weeks ago when a junior DBA "fixed" a problem by running NO_LOG, which caused database replication to fail.

    Thanks for the information.

  19. Hi Paul,

    We have our DBs in Always On, and the log is growing very fast even though we are taking log backups every 15 mins. We actually don't need point-in-time restore, just HA for the users, and we are more concerned about the space it is occupying. Can we use BACKUP WITH NO_LOG, or should we delete the log backups just before the next log backup runs?

    Which option is better?


    1. BACKUP LOG WITH NO_LOG was removed in SQL Server 2008.

      Why is the log growing? Are you running in async with a huge send queue? If so, you’re looking at a lot of potential data loss.

      I don’t understand your strategy – what if both copies of your database are destroyed? You can only recover back to your last full backup. You need backups and Always On. There’s no way I’ll advise what you suggest doing.

  20. Hi Paul,

    I am new to SQL Server (I come from MySQL), so could you check the steps below for shrinking the log file on a production server? It is critical for us and we can't compromise point-in-time recovery. Please point out any issue in these steps, or any other valuable suggestion:

    Thanks in advance for your valuable suggestion.

    Shrink log file steps in mssql: This activity will be just before full backup of DB.

    Step1: lock MyDB by below command-

    Step2: Execute below commands to shrink the DB Logs.
    DBCC SHRINKFILE (MyDB_log); -- Shrink the file to its default size.

    Step3: Take full backup of MyDB.

    Step4: Release to MyDB by below command.
    USE master;
    ALTER DATABASE MyDB SET MULTI_USER;

  21. Could you help me with how I can reduce my log file size without compromising point-in-time recovery?

    Also, my understanding is that whenever we take a backup (full/transactional) the DB log file will grow, while only the logs taken after the last full backup are required to restore complete data in case of any mishap. So we should purge old log backups along with old backups. Please correct me if I am wrong, and could you share any script to do this task?


  22. Hi Paul,

    As per the above linked documents provided by you, the max. log file size should be 20 to 30% of the DB size; as transactions are either committed or rolled back, log records are truncated from the log file and the space is either freed or reused by new log records.

    Then why does the log file grow to more than the DB size, even 2 or 3 times bigger (you can see so many users with this problem on the internet)? Yes, as per this document it can grow in some situations, but these situations are rare, so it should not be so frequent for most users.

    Can you suggest any command by which I can find the reason for the extra size of my log file? Yes, I checked the SQL Server documentation and the state columns in sys.databases, but I am not getting any reason out of them.

    In my case my DB size is 59 GB and its log file size is 25 GB, with log_reuse_wait_desc = 'LOG_BACKUP' and recovery_model_desc = 'FULL', so it should not prevent log truncation.

    Please correct me if I am wrong any where.


  23. I am shrinking the log file of a SQL Server 2008 R2 database, but it is not shrinking. I was using this code.


    I also tried the following code but this also did not solve my problem:

    BACKUP LOG [database] TO DISK = 'D:\database.bak';
    GO

    USE [database];
    DECLARE @log_File_Name varchar(200);
    SELECT @log_File_Name = name FROM sysfiles WHERE filename LIKE '%LDF';
    DECLARE @i int = FILE_IDEX(@log_File_Name);
    DBCC SHRINKFILE (@i, 50);


    please help


  24. You said many times that "if someone does not need point-in-time recovery then they can use the SIMPLE recovery model to reduce the size of the transaction log file instead of NO_LOG". Yes, but what about a situation where the FULL recovery model is required but at the same time you have a problem with the log file's growth rate? For example, I have Replication on one table in my database (2,000 tables). So I have to switch the database's recovery model to FULL, otherwise Replication does not work. So, even though I do not need point-in-time recovery (I have a daily backup), I am forced to accept the growth of the log for the whole database just because I have Replication on one table…
    In this situation NO_LOG is very helpful… BUT… Microsoft removed it from SQL Server 2008… What a pity… I have to buy one disk per month… THIS IS NOT A SOLUTION…

    1. No – the solution is to make your replication process faster so the log doesn't grow. This means using a faster I/O subsystem for the log so the log scanner works faster, among other things. You do not need to be in FULL for replication to work, and log truncation does not occur in SIMPLE when replication still hasn't scanned the log.

  25. Hi Paul,

    Thanks for your explanations. Here is my question based on our scenario.

    We perform full database backups on a daily basis. At the end of the business day we would either 1) be happy with the day's transactions based on the auto-checks and go ahead with the full backup, OR 2) the auto-check fails and the database might require restoring to a point in time during the day.

    With the discontinuation of "WITH TRUNCATE_ONLY / NO_LOG", I wonder what our options are to serve our requirement better.

    Thanks in anticipation,

    1. You need to continue using log backups so that if you need to restore after a disaster, you can restore up to the point of the crash. There is no situation that requires using truncate_only/no_log.
