Why you should not shrink your data files

(New for 2020: we’ve published a range of SQL Server interview candidate screening assessments with our partner Kandio, so you can avoid hiring an ‘expert’ who ends up causing problems. Check them out here.)

(Check out our Pluralsight online training course: SQL Server: Understanding and Using DBCC Commands.)

One of my biggest hot-buttons is around shrinking data files. Although I used to own the shrink code while I was at Microsoft, I didn’t write it (so don’t blame me! :-) and it was never deemed a big enough customer pain-point to fix. But I really don’t like the data-file shrink process.

Now, don’t confuse shrinking the transaction log with shrinking data files. Shrinking the log may be necessary if your log has grown out of control, or as part of a process to remove excessive VLF fragmentation (see Kimberly’s excellent posts on this here and here). However, shrinking the log should be a rare operation and should not be part of any regular maintenance you perform.
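
When a one-off log shrink genuinely is warranted, it’s typically a short sequence like the following – a minimal sketch with hypothetical database, file, and path names; in the FULL recovery model a log backup is usually needed first so inactive VLFs can be cleared:

-- Back up the log so inactive VLFs can be marked reusable (FULL recovery model)
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_log.trn';
GO

-- One-off shrink of the log file back to a sensible working size (target in MB)
USE [MyDatabase];
DBCC SHRINKFILE (N'MyDatabase_log', 1024);
GO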

In either case, I’m not talking about using the TRUNCATEONLY option – all that does is lop off any unused space at the end of the files – that’s perfectly fine. I’m talking about actually running the shrink algorithm.
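
For example, a sketch with a hypothetical logical file name:

DBCC SHRINKFILE (N'MyDataFile', TRUNCATEONLY);
GO

This releases any unallocated space at the end of the file back to the OS, with no page movement and hence no fragmentation.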

Shrinking of data files should be performed even more rarely, if at all. Here’s why: data file shrink can cause *massive* index fragmentation (of the out-of-order pages kind, not the wasted-space kind) and it is very expensive (in terms of I/O, locking, and transaction log generation). Let me demonstrate with a simple script you can run. The script below creates a database, creates a 10MB ‘filler’ table at the start of the data file, creates a 10MB ‘production’ clustered index, and then analyzes the fragmentation of the new clustered index.

USE [master];
GO

IF DATABASEPROPERTYEX (N'DBMaint2008', N'Version') IS NOT NULL
    DROP DATABASE [DBMaint2008];
GO

CREATE DATABASE DBMaint2008;
GO
USE [DBMaint2008];
GO

SET NOCOUNT ON;
GO

-- Create the 10MB filler table at the 'front' of the data file
CREATE TABLE [FillerTable] (
    [c1] INT IDENTITY,
    [c2] CHAR (8000) DEFAULT 'filler');
GO

-- Fill up the filler table
INSERT INTO [FillerTable] DEFAULT VALUES;
GO 1280

-- Create the production table, which will be 'after' the filler table in the data file
CREATE TABLE [ProdTable] (
    [c1] INT IDENTITY,
    [c2] CHAR (8000) DEFAULT 'production');
CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable] ([c1]);
GO

INSERT INTO [ProdTable] DEFAULT VALUES;
GO 1280

-- Check the fragmentation of the production table
SELECT
    [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'DBMaint2008'), OBJECT_ID (N'ProdTable'), 1, NULL, 'LIMITED');
GO


avg_fragmentation_in_percent
-----------------------------
0.390625

The logical fragmentation of the clustered index before the shrink is a near-perfect 0.4%.

Now I’ll drop the ‘filler’ table, run a shrink to reclaim the space, and re-analyze the fragmentation of the clustered index:

-- Drop the filler table, creating 10MB of free space at the 'front' of the data file
DROP TABLE [FillerTable];
GO

-- Shrink the database
DBCC SHRINKDATABASE ([DBMaint2008]);
GO

-- Check the index fragmentation again
SELECT
    [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'DBMaint2008'), OBJECT_ID (N'ProdTable'), 1, NULL, 'LIMITED');
GO


DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
----- ------- ------------ ------------ ---------- ---------------
6     1       1456         152          1448       1440
6     2       63           63           56         56

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

avg_fragmentation_in_percent
-----------------------------
99.296875

Wow! After the shrink, the logical fragmentation (out-of-order pages) is almost 100%. The shrink operation *completely* fragmented the index. Now, will that have any effect on your workload performance? If the index is larger than a few thousand pages, is typically not all in the buffer pool, and is scanned a lot, then possibly. Otherwise, probably not.

Why does this happen? A data file shrink operation works on a single file at a time, and uses the GAM bitmaps (see Inside The Storage Engine: GAM, SGAM, PFS and other allocation maps) to find the highest page allocated in the file. It then moves that page as far toward the front of the file as it can, then repeats with the next-highest allocated page, and so on. In the case above, it completely reversed the order of the clustered index, taking it from perfectly defragmented to perfectly fragmented.

The same code is used for DBCC SHRINKFILE, DBCC SHRINKDATABASE, and auto-shrink – they’re all equally bad. As well as introducing index fragmentation, data file shrink generates a lot of I/O, uses a lot of CPU, and generates *loads* of transaction log – everything it does is fully logged.
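
If you want to see just how expensive a running shrink is, its progress and resource usage show up in sys.dm_exec_requests – a sketch; percent_complete is populated for shrink operations, and the command names below are the ones I believe shrink reports (verify on your build):

SELECT [session_id], [command], [percent_complete], [cpu_time], [reads], [writes]
FROM sys.dm_exec_requests
WHERE [command] IN (N'DbccFilesCompact', N'DbccSpaceReclaim');
GO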

Data file shrink should never be part of regular maintenance, and you should NEVER, NEVER have auto-shrink enabled. I tried to have it removed from the product for SQL 2005 and SQL 2008 when I was in a position to do so – the only reason it’s still there is for backwards compatibility. Don’t fall into the trap of having a maintenance plan that rebuilds all indexes and then tries to reclaim the space required to rebuild the indexes by running a shrink – that’s a zero-sum game where all you do is generate a lot of transaction log for no actual gain.
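
Finding and turning off auto-shrink across an instance is simple on SQL Server 2005 onwards – the database name below is a placeholder:

-- Which databases have auto-shrink enabled?
SELECT [name] FROM sys.databases WHERE [is_auto_shrink_on] = 1;
GO

-- Turn it off for each one found
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
GO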

So what if you *do* need to run a shrink? For instance, if you’ve deleted a large proportion of a very large database and the database isn’t likely to grow, or you need to empty a file before removing it?

The method I like to recommend is as follows:

  • Create a new filegroup
  • Move all affected tables and indexes into the new filegroup using the CREATE INDEX … WITH (DROP_EXISTING = ON) ON syntax, to move the tables and remove fragmentation from them at the same time (see the sketch below)
  • Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it’s the primary filegroup)

Basically you need to provision some more space before you can shrink the old files, but it’s a much cleaner mechanism. Beware that if a table has off-row LOB data, rebuilding will NOT move the LOB data to the new filegroup and there’s no good way to do that.
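
As a rough sketch of the mechanics, using the demo table from earlier – the new filegroup name, file name, path, and size are placeholders to tailor to your data:

-- Provision the new filegroup with enough space for the objects being moved
ALTER DATABASE [DBMaint2008] ADD FILEGROUP [NewFG];
ALTER DATABASE [DBMaint2008] ADD FILE (
    NAME = N'DBMaint2008_NewFG_1',
    FILENAME = N'C:\SQLskills\DBMaint2008_NewFG_1.ndf',
    SIZE = 50MB)
TO FILEGROUP [NewFG];
GO

-- Recreate the clustered index in the new filegroup, moving the table and
-- removing its fragmentation in one operation
USE [DBMaint2008];
GO
CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable] ([c1])
WITH (DROP_EXISTING = ON)
ON [NewFG];
GO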

If you absolutely have no choice and have to run a data file shrink operation, be aware that you’re going to cause index fragmentation and you might need to take steps to remove it afterwards if it’s going to cause performance problems. The only way to remove index fragmentation without causing data file growth again is to use DBCC INDEXDEFRAG or ALTER INDEX … REORGANIZE. These commands only require a single 8KB page of extra space, instead of needing to build a whole new index in the case of an index rebuild operation (which will likely cause the file to grow).
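
Using the demo database again, that damage-limitation sequence might look like this – the target size is illustrative:

-- Forced shrink of the data file down to a target size (in MB)
USE [DBMaint2008];
GO
DBCC SHRINKFILE (N'DBMaint2008', 20);
GO

-- Remove the fragmentation the shrink just caused, without growing the file
ALTER INDEX [prod_cl] ON [ProdTable] REORGANIZE;
GO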

Bottom line – try to avoid running data-file shrink at all costs! I’m often misquoted as saying ‘never run data-file shrink’ – that’s not true. I say ‘never run a regular shrink operation of any kind’, and if you absolutely must run a data-file shrink, be aware of the problems it can cause.

117 thoughts on “Why you should not shrink your data files”

  1. Wow, that’s a fairly radical suggestion–moving all tables into a new file group instead of running a shrink.

    Not quite sure, but I get the vague feeling you don’t like shrink ;)

    Is that CREATE INDEX … WITH (DROP_EXISTING) syntax supported on SQL 2000?

  2. I know you’re a busy man, but what about writing a 3rd party utility that does a "smart shrink" without fragmenting indexes?

    You can work on that right after you finish your "run DBCC CHECKDB on a database backup" utility :)

  3. Brad,
    >>Is that CREATE INDEX … WITH (DROP_EXISTING) syntax supported on SQL 2000?
    Yes, it is supported. Here is the BOL link. http://msdn.microsoft.com/en-us/library/aa258260(SQL.80).aspx

    >>Not quite sure, but I get the vague feeling you don’t like shrink ;)
    Easy one here. Paul doesn’t like SHRINK and a lot of DBAs don’t like it either. You may want to catch up on his previous post (https://www.sqlskills.com/blogs/paul/post/Auto-shrink-e28093-turn-it-OFF!.aspx)

    >>what about writing a 3rd party utility that does a "smart shrink" without fragmenting indexes?
    Never heard of one. It sounds like disk-defragmenter kind of stuff; it might remove external fragmentation of
    the files, but you shouldn’t run it as regular maintenance. Keep the files (data & log) as large as possible and don’t let auto-growth kick in – but keep auto-growth turned on just in case.

    Paul, hope you don’t mind this.

  4. I’m not sure how to apply this in a data-warehouse environment, where most of my slack space is caused by the process of creating the clustered index. Say I create a new database and bulk-insert a 250 gig table into it. I then build a clustered index and it swells to have 200 gig of free space when it’s done. SORT_IN_TEMPDB helps, but it’s not always an option. Where does the sort occur when the table is moved to a new filegroup?

  5. Hi Paul,

    Thanks for this post. I have many databases in my environment with the Auto_Shrink option turned on. This post made me write a script to fix that situation. It follows below (works well in SQL 2000; I didn’t test on 2005+):

    DECLARE @Base sysname;

    DECLARE database_cursor CURSOR
    FOR SELECT
        QUOTENAME(name) AS Base
    FROM
        master..sysdatabases
    WHERE
        DATABASEPROPERTYEX(name, 'Status') = 'ONLINE'
        AND DATABASEPROPERTYEX(name, 'IsAutoShrink') = 1;

    OPEN database_cursor;
    FETCH NEXT
    FROM database_cursor
    INTO @Base;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Turn off auto-shrink for this database
        EXEC ('ALTER DATABASE ' + @Base + ' SET AUTO_SHRINK OFF');

        FETCH NEXT
        FROM database_cursor
        INTO @Base;
    END
    CLOSE database_cursor;
    DEALLOCATE database_cursor;
    GO

  6. No – it needs to be a new filegroup – and you can’t get rid of the primary filegroup (although you could shrink it down after moving some of the larger objects to the new filegroup)

  7. Paul, First of all, thanks for the great blog.
    I get it that shrink ops cause massive index fragmentation. It appears that in prior versions shrinkfile/db were single-threaded operations. Is that still the case? I currently have a shrinkfile running on a mid-sized data file (100GB) that has been running for quite a few hours – I’m trying to find the bottleneck. The machine is dedicated to running this shrink. Via perfmon, I’m seeing a lot of activity in the SQLServer:Databases Shrink Data Movement Bytes/sec, Log Flushes/sec, and Log Bytes Flushed/sec counters. I see usage across all processors (4) – typically not more than 5% or so. There is no paging going on and disk queue lengths are consistently below 1.

  8. Yes, shrink is always single-threaded – which of course contributes to the long run-time. In your case, it could be that shrink is waiting for locks, it could be moving LOB data or heap data (which is very slow). Shrink’s just not a fast beast.

    Thanks

  9. I understand the problem with shrink, but what is the best way to move tables and their indexes out of the primary filegroup in one go? I mean no manual drop and create – a script which does it automatically for each table in the filegroup.

  10. Hello,
    I’m seeking to understand if the above discussion also applies to Sql Server Compact 3.5. I’m dealing with a mobile application whose privately deployed DB should be capped at about 200MB. The plan was to programmatically purge old data as it becomes stale, followed by the shrink operation described here: http://msdn.microsoft.com/en-us/library/ms171752(v=sql.100)

    Since Sql CE is a file-based DB that apparently doesn’t support clustered indexes, I’m in doubt as to whether the above discussion applies. At the same time, performance degradation is a top concern. Could someone share their knowledge?

  11. I’m new to all of this, so I’d like to know what I can do to fix the fragmentation that weekly shrinking (over many years, apparently) has caused in my SQL 2005 database. Coincidentally, this is the only database that was set up this way, and it is the only one where users report huge problems with performance.

  12. Super…

    I have one doubt here…

    If we enable the auto-shrink option on a database, at what time does the shrink happen? Is there a specific time it shrinks the database – for example, when a checkpoint occurs?
    How does SQL Server decide when to shrink?

  13. Paul,

    Thanks for the article,

    I have one question about –
    >> As well as introducing index fragmentation, data file shrink also generates a lot of I/O, uses a lot of CPU, and generates *loads* of transaction log – as everything it does is fully logged

    What is it logging in the transaction log? If you kill DBCC SHRINKFILE, does it undo all the changes (page movements) up to that point?
    I thought the process would work one page at a time – first copy the contents of the page from the back of the file to the new location, reset the page linkage (pointers/GAM), free up the space of the old page, and so on (checkpointing after every operation).

    Thanks in Advance,
    RT

  14. What if it’s a SharePoint database (SharePoint content db and SharePoint system db)? Should I create a new filegroup or do I have to shrink the db?

      1. Thanks for the answer, in the first place!

        The situation I had – a 350 GB content db (MOSS 2007 content db); approximately 200 GB of content was deleted. So I decided to shrink the db…

  15. Hi Paul, I once got a tip to set up a maintenance plan to run a Shrink Database task with the “Retain space in database files” option (NOTRUNCATE), followed by a Rebuild Index task that is then followed by an Update Stats task.
    Will this not then compact the database, eliminating hollow space (I know this will fragment the indexes), with the Rebuild Index job then correcting the pointers of the indexes?

    Ending up with a less fragmented database file and, at the same time, no fragmented indexes?

    1. No – in that case the shrink is entirely superfluous – you should just rebuild. It’s extremely unlikely that the free space in the database file will become so fragmented between maintenance runs that it requires a brute-force shrink to provide enough contiguous free space for the index maintenance to be optimal. Just don’t regularly shrink, ever.

  16. What about a database that has approx. 50% empty space, due to data having been moved from one filegroup to another and so on? There are 8 filegroups in the default install for this database (not chosen by myself). The DB is 150GB, of which 75GB is just empty. I am not sure if I should reclaim the space or just leave it half empty.

  17. Paul,

    I had a database for a third party mailing application that grew to an unsightly size due to an issue with the grooming job (external to SQL Server). The data file used to be in the neighborhood of 250GB and eventually we had to grow it to 700GB (just the data file (and much to the chagrin of my SAN Admin)) before the company provided a fix for the issue.

    Now I have 75% free space in a 700GB data file for this database (which also contains 125 LOB data fields!) and shrinking this is necessary to regain the disk space and painfully slow due to the LOB fields.

    https://www.sqlskills.com/blogs/paul/why-lob-data-makes-shrink-run-slooooowly-t-sql-tuesday-006/

    Any suggestions?

    Thanks,
    Gary

    1. You’ve really got no choice I’m afraid except to shrink. Even if you rebuilt the indexes to a new filegroup, you’d still have the LOB data to deal with, which would require shrink.

  18. Paul,

    First off thanks for writing this, it’s been a huge help! Quick question however, why would Msft put out the info regarding the shrink db if it is not a best-practice? Per this link, there is a small blip about fragmentation in the ‘Best Practices’ section but states it only fragments to a degree. This does not help when I’m telling my boss to not shrink a db because of the bad fragmentation, when at the same time, Msft is saying otherwise.

    Thanks,
    Chris

    1. There are all kinds of operations you can do that aren’t a best practice. It’s just not practical to list everything that’s *not* recommended to do regularly.

  19. When I do data compression, the file size grows unless I do a dbcc shrinkfile. I know that it is not recommended to use dbcc shrinkfile. However, I am doing REORGANIZE for indexes after shrinking files.

    Is there a better way?

    1. The file size grows because you need to rebuild to enable data compression. Best way to do that is to rebuild the indexes into a new filegroup, then you can drop the old one. Otherwise you’re stuck with what you’re doing now.

  20. Hi Paul,

    I’m new to SQL Server. We periodically delete lots of data in our warehouse env (we have SQL Server 2000-style partitioned views, with new daily partitioned tables created every day and an archival job to move old data to a different archive database).

    Even though our file sizes are big, more than 50% of each file is empty (checked using FILEPROPERTY() and sys.database_files). Our SQL Server is connected to a SAN. Based on sp_spaceused output, DBAs keep adding space to the existing LUNs/files or adding new files to accommodate the new data. I’m not sure this is necessary, as the existing files are about 50% empty when I check FILEPROPERTY(filename, 'SpaceUsed') output. I feel we are wasting multiple TB of space and bloating up the DBs.

    Can you please advise if my thinking is correct?

  22. Hi Paul,

    I did the shrink file on July 3rd and the size grew from 30GB to 140GB by July 4th. What is my best option now to bring the transaction log back to its normal state and maintain the small size for the database?

    Any professional help and suggestion will be appreciated.

    Thanks,

    Bisi.

  23. Hi Paul,

    Thanks for the advice on shrink… Can you recommend a stored proc or similar to move table(s) to a new filegroup? There seem to be many challenges involved in achieving it – when a table is a heap, when a table has LOB data, when different schemas exist within a database, when a table has FKs, constraints, etc. Ours is a multi-terabyte database and there are all sorts of tables…

    Thanks,
    Kumar

  24. Hi Paul, Great article! I was having a discussion with another DBA today… he had used DBCC SHRINKFILE to shrink the data file on a prod database (he was in a tight spot). My position was that using SHRINKFILE on a data file is bad; I’ve read articles by you and others explaining the fragmentation this causes.

    However he made the point that the actual command he ran was DBCC SHRINKFILE TRUNCATEONLY. in BOL for the TRUNCATEONLY option it states “Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. ”

    So my question is: does the use of TRUNCATEONLY stop DBCC SHRINKFILE from causing fragmentation because it’s working differently and not moving pages within the file? Or is it still a bad option?

  25. Paul,
    First off, love your work – it’s helped me out a lot in day-to-day activities. Now my question: in my case I have databases where records are deleted from time to time, rather than tables being dropped. I set up the following test script to delete records, then checked fragmentation (25%), then ran DBCC SHRINKDATABASE and checked the index fragmentation again. Any ideas why?

    declare @i int = 0

    while @i < 1280
    begin
    delete from ProdTable
    where c1 = @i
    set @i = @i + 5
    end

  26. If the goal is to remove unused data space (that will not grow again), and assuming you have a clustered index on the table, would it not always be better to just run an ALTER INDEX REBUILD? (If you have the time and space to run it that is).

    Won’t that both free up the same data space as a shrink AND avoid the fragmentation issue?

    “The only way to remove index fragmentation without causing data file growth again is to use DBCC INDEXDEFRAG or ALTER INDEX … REORGANIZE. These commands only require a single 8KB page of extra space, instead of needing to build a whole new index in the case of an index rebuild operation.”

    I was curious about this so just for fun I ran a test where I shrank the data files in a testing DB that had 100s of GB of data removed. The file size dropped and fragmentation was heavy as expected. I then ran an ALTER INDEX REBUILD on the fragmented tables. File sizes increased by a moderate amount. Looking at sys.database_files I noticed that the empty space in some of the files had increased. Am I correct in assuming this empty space is the result of the index rebuild operation? (ie: new index built, old index dropped, but the space is not reclaimed).

    1. No a rebuild won’t free up space – only compact the index being rebuilt. If there’s a lot of free space from deleting data, rebuild won’t affect that. It also won’t move all the free space to the end of the file where it can be lopped off, giving that space back to the OS.

      Yes, the empty space is from the old index after the rebuild completes.

      The advice you quoted is to avoid the data files growing again when removing any fragmentation caused by the shrink.

  27. Paul,

    I ran into a situation today where I needed some space. I did the normal investigation of what my options were and decided to try the shrinkfile and then just defrag. The shrinkfile didn’t fragment anything. (I was shocked) So I got my 50GB I was looking for relatively pain free. Is the dbcc shrinkfile fixed in SQL 2012? (It did of course generate a boatload of log activity).

    1. No, it’s the same algorithm in all versions. Maybe there wasn’t much to move, or you got lucky with the way the free space was laid out. It’s rare to see no fragmentation after a shrink.

  28. I can’t help but think Microsoft could fix this by:

    o Calculate how much space is used. HWM
    o Loop
    * Find the highest page allocated in the file.
    * Move it to the Highest Free Block below the HWM

    1. Yup – there are several much better algorithms that would work – I was never permitted to fix shrink as it wasn’t deemed important enough. It won’t ever get fixed now.

  29. Paul, is there anything wrong with using ALTER INDEX … REORGANIZE after a DBCC SHRINKFILE? The fragmentation is very high after the shrink but I see no alternative in my situation. We have a partitioned index with partitions in a different filegroup for each month. I’m not very familiar with partitioning, but it looks like I would have to use create index to migrate to a new filegroup, but I don’t know how to preserve the separation of monthly partitions into separate filegroups. If there is a way to move one filegroup at a time, that would be preferred, but I haven’t found a way.

  30. I’ve a customer who restores a 800 GB production database to the Reporting server every night before updating a data warehouse from that.

    It takes 6 hours, but that’s OK – it feeds an in-house application (I’ve had the conversation about differential backups and replication).

    I would like to remove all the space I added when it was in production (1x primary mdf, 11x ‘fg_main’ ndf’s, 1x log file) NOTE: most of the production indexes are not needed (in reporting or dw).

    Advise please?

  31. Thanks Paul, I suspected that. :( I like to re-size data files in Prod to 30% periodically, but this is making space tight when it’s restored on Reporting. Was hoping you’d say something about dropping all the unused NC indexes and doing something fascinating with a new filegroup. But no worries – I’ll re-think that 30%.

  32. Is there any math or general rule for how much free space we need to keep in a data file, based on data growth, database size, etc., to avoid performance hiccoughs?

  33. It is a great post, being referenced for years! Thank you in advance for responding, Paul. It is understood that data file shrink is bad for performance due to fragmentation, I/O and CPU cost, and log growth. Also, if the database will eventually use up the space, it seems logical to leave it alone. But every backup file is bigger than it needs to be. Also, if the database needs to be restored onto a new environment, it seems like unnecessarily taxing the system to restore a big data file. And most existing databases have a single filegroup, which cannot be dropped (if I try to follow your preferred method of recreating indexes in a different filegroup). Is there a T-SQL option to switch or move the data file or index to a new filegroup?

  34. I just tried every step on SQL 2008 R2 and I got the same result in both runs: the fragmentation in percent was 0.625. Am I doing anything wrong?

  35. Hi Paul,

    If I really do have to perform an auto-shrink on the data files, would running an index rebuild afterward help resolve the index fragmentation issue?

    Thanks,
    Kim

    1. There is no circumstance where you need to auto shrink. And no, rebuilding the indexes will cause the files to grow again. If you must shrink, reorganize the indexes afterwards.

  36. I tried reproducing the fragmentation in SQL Server 2016.
    No fragmentation observed.
    Tested with DBCC SHRINKDATABASE and DBCC SHRINKFILE.
    Absolutely zero fragmentation.

    SQL 2016 always allocates full extents; possibly this is the reason I do not see any fragmentation in SQL 2016 when a database shrink is performed.

    1. It reproduces perfectly for me on SQL Server 2016. There are no changes in any internals in SQL Server 2016 that would change this. Sounds like you didn’t drop the filler table before doing the shrink. Full vs. mixed extents have nothing to do with index fragmentation btw.

  37. Paul,

    I am trying to determine how much log space is generated by the shrink. Will the following script return an accurate count of log bytes used?

    USE [DBMaint2008];

    DECLARE @maxlsn NVARCHAR(46);
    SELECT @maxlsn = CONCAT(N'0x', MAX([Current LSN])) FROM fn_dblog(NULL, NULL);

    -- Shrink the database
    DBCC SHRINKDATABASE ([DBMaint2008]);

    SELECT SUM([Log Record Length])
    FROM fn_dblog(@maxlsn, NULL);

    As usual, thanks for sharing your knowledge.

  38. Hi Paul,

    We need to delete roughly 40% of the data from a few very large databases (> 2 TB). I am interested in your suggested option of moving everything to a new filegroup then shrinking what’s left in the old filegroup (in our case the PRIMARY filegroup). But… I’m now realizing this is not going to be a very easy process. I’m amazed ALTER … REBUILD has no option to push to a new filegroup. This means I am going to have to script out CREATE INDEX statements for the whole db. Yes, easy enough… you can script that using SSMS. But on a large database I can’t just run a hard-coded script. This will take days to run and I don’t have time to spare (if it stops in the middle, etc.). So I will need to customize for various things like restartability, etc. The more custom code I add to a hard-coded script, the more risk I add. Correct me if I’m wrong: this is not a simple case of marching through indexes/objects and changing filegroups.

    1. Right – I wouldn’t do that for all tables in a database. Do you really need to get the space back or will the space eventually be reused? If the former, you could do a shrink and then ALTER INDEX … REORGANIZE, but shrink is going to take an age, and then you’ll generate a ton of log doing the fragmentation removal. Basically there’s no really easy AND resource-efficient way to do this, which is a problem with SQL Server.

  39. Hi Paul,
    Good Day !!
    We recently moved tables into a different filegroup and the operation was successful, but when we try to shrink the free space in the primary filegroup it is not releasing space back to disk. I can see there is a lot of free space internally in the primary filegroup, but it is not being released back to disk.

    could you advise what is stopping and how to fix this issue?

    I can

      1. Hi Paul,
        Thanks for the reply,

        There are no error or info messages while shrinking the data file. Total disk size is 1 TB; of that, the database occupies nearly 950 GB, and I can see 500 GB of internal free space in the database.

        We are running the shrink operation via DBCC SHRINKFILE().

        The bad parts about the database are:

        1. It contains only the primary filegroup and 1 file.
        2. It is not enabled for rebuild/reorganize operations, considering the database is in 24/7 OLTP use and there’s no window for maintenance.

        Good part is, database is part of Database Mirroring..

        The current problem is we are unable to shrink the database, and each time it takes a long time to perform the shrink operation with zero result (not getting free space back to disk).

        Could you advise what the next course of action is, other than extending the drive space? Also, can you advise how to check which pages can’t be moved by the shrink?

  40. Hi Paul,
    Trying your advice to use CREATE INDEX….DROP_EXISTING.
    I have a SQL 2012 partitioned table, and when trying to use DROP_EXISTING on the clustered index it errors with “The new index definition does not match the constraint being enforced by the existing index”, because (I believe) the clustered index was created at table creation as a PRIMARY KEY CLUSTERED constraint. I can drop the index and recreate it (assigned to a new partition scheme that points to the new filegroup), but then I lose the PRIMARY KEY, although the clustered index is fine and partitioning still works.
    Researching recovering free space from a multi-terabyte 24/7 system with minimal downtime.

    1. You can do it if you use the primary key’s name as the index name in the CREATE INDEX statement, but you can’t change anything in the index definition.

  41. Does DBCC SHRINKFILE shrink down data within an individual page? Or does it only perform shrinking by moving pages from ‘later’ in the file to an earlier unused page in the file? From reading the documentation it only talks about pages, so I guess the answer is the latter.

    If so, is there any way to compact the data *within* a page? If not, how can a database be shrunk if a large number of rows are deleted from random places throughout a table?

  42. Hi Paul,

    Thanks for your interesting post and answers. We freed up half of the data in a 2TB database. It is still 2TB and now we want to shrink it, but it is a 24/7 operation and the shrink takes ages. We only want to shrink because we need to make backups of the database, and that would be much faster if the database were only 1TB. But is that really true? Is the backup really faster after a SHRINK?

    1. No – as a backup only backs up used space, and as shrinking doesn’t compact pages, just moves them, the used space will be basically the same. However, a restored copy will require less space if the database is smaller.

  43. I just stumbled across this page, since I’m deleting a lot of old, unnecessary data, millions of records, from a database and was wondering about any maintenance that should be performed afterwards.

    I don’t have any questions. I just wanted to commend Mr. Randal for continuing to answer questions on this page, for over 10 years! Wow! :D

  44. Hi Paul,

    Thanks for this very informative blog.
    I’ve run into this situation because I have a SQL database in Azure with 1TB, 820GB allocated space, and only 100GB of data!
    It looks like performing a shrink will take nearly 4 days.
    Do you think shrinking with TRUNCATEONLY would be the best option, or should I just leave it as is?

    Many thanks for all the posts and help to the dev/dba community.
    Vini

  45. Hi Paul,

    I know this post is really old, but I see it is still referenced a lot. My situation is a bit different: I have a database that uses about 1.5TB (of a 2TB drive) and a table inside it that uses 1TB of space. My idea is to give up that table (I will move it to another DB, on another drive, and let it rot there). What’s the best course of action – dropping the table after moving it, or using DELETE FROM OldHugeTable – so that I can get my 1TB of disk space back?
    The plan afterwards is to shrink the drive altogether by 1TB as disk space is extremely expensive and this will reduce my operating costs significantly.

    1. Once you’ve moved the data to a different database, simply doing a DROP TABLE is the most efficient way to deallocate all that space, and then you can shrink the data file.

  46. Hi Paul. Accidental DBA here. Recently a wayward programmer’s process wrote 800 million records into a table, and our index file grew and almost filled the drive. I have dropped the file and index, and re-created them, so we went from 800 million records in total to about 1000 a day.

    The table is a normal size now. Is it safe to shrink the index file with TRUNCATEONLY to reclaim some of the disk space? (It went from 80 gig to 190 gig.)

  47. Hi, this appears to be the go-to place for shrinking SQL dbs, so what would stop the mdf file from shrinking? Here’s the scenario: we have a 7TB db that is made up of 4 filegroups. The 1st filegroup had 5 files in it, 2 of which were 1.5+TB in size, 2 others at 800GB, and the last at 100GB, all sitting on 5 2TB drives. The new boss wants this cleaned up, so I’ve managed to empty 1 file of 1.6TB into 6 250GB files, and it was an NDF. The MDF can’t be emptied as it’s got system objects in it, but I’ve removed 1.6TB from it, leaving 16GB of used space, or 97% free. I did a shrinkfile on it and it shrank to 700GB and will go no further. I’m shooting for 100GB, so what’s stopping the other 600 from freeing up? What I’ve tried: a full backup and a log backup, giving it 48 hours to clear any ghost records, a shrink with NOTRUNCATE to reorg any remaining pages to the front of the file, and then a shrinkfile again. Nothing is giving me back that last 600GB. Any light or thoughts you have would be appreciated.

    1. Does it give any reason why it can’t progress or it stopped? Are you sure the ghost cleanup task is running? Try enabling trace flag 660 with DBCC TRACEON (660, -1), wait for 30 seconds, turn it off again and look in the error log. Nothing about ghost cleanup means it died and you’ll need to restart SQL Server. You didn’t say which version and build – there have been bugs in the past.

  48. Sorry for the tardy response, but I had a couple of fires to deal with. This is a SQL 2017 instance that is part of an AG. All the work is being done on the primary, and we’ve failed over and back and powered down and rebooted several times in the last few days. This morning the mdf is at 760GB with 27GB used, so I ran another empty-file on it and it now tells me 21MB used and that it can’t move everything out of the file because it’s got system objects in it. So I ran a SHRINKFILE (filename, NOTRUNCATE) to force any of the remaining 21MB to the front of the file. It runs in about 90 minutes with no issues. Next I ran a SHRINKFILE (filename, 100000) to make it a 100GB file. It runs for 11 seconds and comes back with min, max and current size. No error of any type, and the file remains 760GB. So I tried a recursive loop with tiny increments. The loop counts down but the file size never changes – and remember, it started out at 1.6TB, so why get to 760GB and stop dead?

  49. So, just to close out this issue: I’ve found that it’s caused by LOB fields that have been assigned to the primary filegroup even though the table was created on a secondary or tertiary filegroup. There appears to be a setting we’ve missed over time that puts the LOB data there. That is why the shrink no longer shrinks – the overhead of changing all the pointers associated with the LOB data is immense. The only way around this I’ve found is to rebuild the tables and indexes in new filegroups, but that comes with its own risks and performance hits as well.

  50. hi paul,

    I have a database with an initial size across its two files of 1TB, but the actual size is 20GB. Should I shrink the files?

    thanks

    1. I can’t answer that for you. If the data size is going to grow to use a lot of the space, then no. If it’s never going to grow, and you need the disk space for something else, then probably.

      1. Paul, when I empty the file group and run a shrinkfile on it, it still takes a LONG time with a large file. Is there anything I can do to make that go faster? At this point the file is empty (2TB on disk), as I have moved all the indexes off to a different FG.

        Would adding this line:
        DBCC SHRINKFILE($fileName, EMPTYFILE) WITH NO_INFOMSGS;

        Before this line help?:
        DBCC SHRINKFILE($fileName, 5) WITH NO_INFOMSGS

        1. Paul I ended up modifying the script to perform the shrink in 15% file size increments. Which worked awesomely. We were able to truncate off a near 2TB table, perform this shrink on a 2.2 TB db in 39 minutes with no ADDITIONAL fragmentation. Resulting file had 100mb unused space.

          Result file sizes
          – Normal Shrink + Rebuild = 195 GB
          – SafeShrink = 165 GB

          VERY VERY COOL.

  51. I wanted to reduce my db size using compression, so I came to the conclusion that building (or rebuilding) clustered indexes in a new FG would be the best solution. The new FG is now a fraction of the size of primary (where the old tables were stored). Awesome! Problem is I can’t for the life of me shrink the file the primary FG is on. It appears empty, but the shrink process runs forever and appears to be moving pages allocated to tables now on the other FG!

    I used clustered column store indexes to move the data since the compression is very high. Some of the tables contain LOB data – I wonder if those pages are still in primary but unreported?

    Anyway, just a heads up to anybody tempted to do the same thing! Create a new db ;-)

      1. Thanks for the reply Paul.

        I’m surprised that this is potentially not being reported (or is under-reported) in the DMVs though. Also disappointed that no user objects appear in the primary filegroup :-(

        Additionally, the DBCC SHRINKFILE command is periodically being blocked by transactions from another database that has snapshot isolation enabled, even though the db being shrunk does not! That doesn’t seem fair either! :-D

        I guess I’m forced to create a new database and move the data. I am reluctant though! Is there a way to rebuild the off-row LOB data so that it resides in the new FG, I wonder?
