120 responses

  1. BradC
    June 24, 2009

    Wow, that’s a fairly radical suggestion–moving all tables into a new file group instead of running a shrink.

    Not quite sure, but I get the vague feeling you don’t like shrink ;)

    Is that CREATE INDEX … WITH (DROP_EXISTING) syntax supported on SQL 2000?

  2. BradC
    June 24, 2009

    I know you’re a busy man, but what about writing a 3rd party utility that does a "smart shrink" without fragmenting indexes?

    You can work on that right after you finish your "run DBCC CHECKDB on a database backup" utility :)

  3. Sankar
    June 24, 2009

    Brad,
    >>Is that CREATE INDEX … WITH (DROP_EXISTING) syntax supported on SQL 2000?
    Yes, it is supported. Here is the BOL link. http://msdn.microsoft.com/en-us/library/aa258260(SQL.80).aspx

    >>Not quite sure, but I get the vague feeling you don’t like shrink ;)
    Easy one here. Paul doesn’t like SHRINK, and a lot of DBAs don’t like it either. You may want to catch up on his previous post (http://www.sqlskills.com/blogs/paul/post/Auto-shrink-e28093-turn-it-OFF!.aspx)

    >>what about writing a 3rd party utility that does a "smart shrink" without fragmenting indexes?
    Never heard of one. It sounds like disk-defragmenter kind of stuff; it might remove external fragmentation of
    the files, but you shouldn’t run it as regular maintenance. Keep the files (data & log) sized large enough that auto-growth doesn’t need to kick in, but keep auto-growth turned on just in case.

    Paul, hope you don’t mind this.

  4. SQLdenis
    June 24, 2009

    Brad, DROP_EXISTING exists on SQL 2000

  5. JC
    June 25, 2009

    I’m not sure how to apply this in a data-warehouse environment, where most of my slack space is caused by the process of creating the clustered index. Say I create a new database and bulk-insert a 250 gig table into it. I then build a clustered index and it swells to have 200 gig of free space when it’s done. SORT_IN_TEMPDB helps, but it’s not always an option. Where does the sort occur when the table is moved to a new filegroup?

  6. Felipe Antunes
    June 25, 2009

    Hi Paul,

    Thanks for this post. I have many databases in my environment with the AUTO_SHRINK option set on. This post made me write a script to revert that situation. See below (works well in SQL 2000; I haven’t tested it on 2005+).

    DECLARE @Base sysname

    DECLARE database_cursor CURSOR FOR
        SELECT QUOTENAME(name)
        FROM master..sysdatabases
        WHERE DATABASEPROPERTYEX(name, 'Status') = 'ONLINE'
        AND DATABASEPROPERTYEX(name, 'IsAutoShrink') = 1

    OPEN database_cursor

    FETCH NEXT FROM database_cursor INTO @Base

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Turn auto-shrink off for each database that currently has it enabled
        EXEC ('ALTER DATABASE ' + @Base + ' SET AUTO_SHRINK OFF')

        FETCH NEXT FROM database_cursor INTO @Base
    END

    CLOSE database_cursor
    DEALLOCATE database_cursor
    GO
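
    (For SQL 2005+, an untested, set-based sketch using sys.databases and its is_auto_shrink_on column would be something like this:)

    DECLARE @sql nvarchar(max)
    SET @sql = N''

    -- Build one ALTER DATABASE statement per auto-shrink-enabled database
    SELECT @sql = @sql + N'ALTER DATABASE ' + QUOTENAME(name)
                + N' SET AUTO_SHRINK OFF;' + CHAR(13) + CHAR(10)
    FROM sys.databases
    WHERE is_auto_shrink_on = 1
    AND state_desc = N'ONLINE'

    PRINT @sql      -- review the generated statements first
    -- EXEC (@sql)  -- then run them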

  7. hDt
    August 12, 2009

    Will this work for the msdb database in SQL 2005, by adding a new file?

  8. paul
    August 12, 2009

    No – it needs to be a new filegroup – and you can’t get rid of the primary filegroup (although you could shrink it down after moving some of the larger objects to the new filegroup)

  9. tim m
    August 12, 2009

    Paul, First of all, thanks for the great blog.
    I get it that shrink ops cause massive index fragmentation. It appears that in prior versions shrinkfile/shrinkdb were single-threaded operations. Is that still the case? I currently have a shrinkfile running on a mid-sized data file (100GB) that has been running for quite a few hours – I’m trying to find the bottleneck. The machine is dedicated to running this shrink. Via perfmon, I’m seeing a lot of activity in the SQLServer:Databases Shrink Data Movement Bytes/sec, Log Flushes/sec, and Log Bytes Flushed/sec counters. I see usage across all processors (4) – typically not more than 5% or so. There is no paging going on and disk queue lengths are consistently below 1.

  10. paul
    August 14, 2009

    Yes, shrink is always single-threaded – which of course contributes to the long run-time. In your case, it could be that shrink is waiting for locks, it could be moving LOB data or heap data (which is very slow). Shrink’s just not a fast beast.

    Thanks

  11. Ronak Patel
    January 18, 2013

    I understand the anomaly of shrink, but what is the best way to move tables with their indexes out of the primary filegroup in one go? I mean no manual drop and create – a script which does it automatically for each table in the filegroup.

    • Paul Randal
      January 18, 2013

      You’ll need to use CREATE INDEX .. WITH (DROP_EXISTING = ON) and specify the destination filegroup. You can come up with a script to do it easily.
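
      Something along these lines, for example (the table, column, index, and filegroup names here are placeholders; the key definition must match the existing index):

      -- Recreating the clustered index moves the table's data to [NewFG]
      CREATE UNIQUE CLUSTERED INDEX [CIX_MyTable]
      ON dbo.MyTable (MyKeyColumn)
      WITH (DROP_EXISTING = ON)
      ON [NewFG];

      Nonclustered indexes can then be moved the same way.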

  12. Vic
    January 22, 2013

    Hello,
    I’m seeking to understand whether the above discussion also applies to SQL Server Compact 3.5. I’m dealing with a mobile application whose privately deployed DB should be capped at about 200MB. The plan was to programmatically purge old data as it becomes stale, followed by the shrink operation described here: http://msdn.microsoft.com/en-us/library/ms171752(v=sql.100)

    Since SQL CE is a file-based DB that apparently doesn’t support clustered indexes, I’m in doubt as to whether the above discussion applies. At the same time, performance degradation is a top concern. Could someone share their knowledge?

    • Paul Randal
      January 22, 2013

      I know nothing about Compact Edition at all I’m afraid, so can’t answer your question.

  13. araniya bipni
    February 6, 2013

    I want to recover my deleted data in SQL Server 2008… did this happen with the shrink option? If yes, how can I solve it…

    Please help me on this topic – I have this problem in my current project.

    my email id: araniyabipin@gmail.com / lotus.websolution@gmail.com

    • Paul Randal
      February 6, 2013

      Are you asking how to get data back after it’s been deleted and a shrink occurs? If so, the answer is you restore from a backup.

  14. Linda Marie Wetzel
    March 18, 2013

    I’m new to all of this, so I’d like to know what I can do to fix the fragmentation that weekly shrinking (over many years, apparently) has caused in my SQL 2005 database. Coincidentally, this is the only database that was set up this way and it is the only one where users report huge problems with performance.

  15. Greg
    April 24, 2013

    Has this now been fixed in 2008 R2? I ran your above test and avg_fragmentation_in_percent comes out at 0.

    • Paul Randal
      April 24, 2013

      Nope – the repro works perfectly on R2 – just ran it to make doubly sure.

      • Greg
        April 24, 2013

        Interesting – is there any other reason I would be getting 0?

      • Paul Randal
        April 24, 2013

        Which result are you getting 0 for, the first one or the second one? Try a SELECT * from the DMV and see if it gives meaningful results.
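
        e.g. something like (substitute your own database and table names):

        SELECT *
        FROM sys.dm_db_index_physical_stats (
            DB_ID (N'MyTestDB'), OBJECT_ID (N'dbo.MyTestTable'), NULL, NULL, 'LIMITED');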

      • Greg
        April 24, 2013

        Both the pre- and post-shrink results are 0.

  16. Atchi
    July 9, 2013

    Super…

    I have one doubt here…

    If we enable the auto-shrink option on a database, at what time does the shrink happen? Is there a specific trigger for shrinking the database, like when a checkpoint occurs?
    How does SQL Server decide when the database will be shrunk?

    • Paul Randal
      August 13, 2013

      Every 25 minutes the next database ID is shrunk.

  17. Ramdas
    September 9, 2013

    Paul,

    Thanks for the article,

    I have one question about –
    >> As well as introducing index fragmentation, data file shrink also generates a lot of I/O, uses a lot of CPU, and generates *loads* of transaction log – as everything it does is fully logged

    What is it logging in the transaction log? If you kill DBCC SHRINKFILE, does it undo all the changes (page movements) up to that point?
    I thought the process would work one page at a time – first copy the contents of a page from the back of the file to a new location, reset the page linkage (pointers/GAM), free up the space of the old page, and so on (checkpointing after every operation).

    Thanks in Advance,
    RT

    • Paul Randal
      September 9, 2013

      It’s logging all the changes it’s making in the database. If you kill it, it may roll back a single page move. It doesn’t do checkpoints as it goes.

  18. sensiva
    October 28, 2013

    What if it’s a SharePoint database (SharePoint content db and SharePoint system db)? Should I create a new filegroup or do I have to shrink the db?

    • Paul Randal
      October 28, 2013

      You don’t have a choice there – you have to use shrink if you want to reclaim the space. But why do you want to shrink in the first place?

      • sensiva
        October 29, 2013

        Thanks for the answer, in the first place!

        The situation I had – a 350 GB content db (MOSS 2007 content db); approximately 200 GB of content was deleted. So I decided to shrink the db…

      • Paul Randal
        October 29, 2013

        Ok – that makes sense, if you’re not going to use that space again.

  19. Ariel Cancino
    January 30, 2014

    Hi Paul, I once got a tip to set up a maintenance plan to run a Shrink Database task with the “Retain space in database files” option (NOTRUNCATE), followed by a Rebuild Index task, which is then followed by an Update Statistics task.
    Won’t this compact the database, eliminating the empty space (I know this will fragment the indexes), with the Rebuild Index job then correcting the fragmentation of the indexes?

    Ending up with a less fragmented database file and, at the same time, not having fragmented indexes?

    • Paul Randal
      January 30, 2014

      No – in that case the shrink is entirely superfluous – you should just rebuild. It’s extremely unlikely that the free space in the database file will become so fragmented between maintenance runs that it requires the brute-force shrink to provide enough contiguous free space for the index maintenance to be optimal. Just don’t regularly shrink, ever.

      • Ariel Cancino
        January 31, 2014

        Thank you for the reply :)

  20. Charles
    February 6, 2014

    What about a database that has approx. 50% empty space, due to data possibly having been moved from one filegroup to another and so on? There are 8 filegroups in the default install for this database (not chosen by myself). The DB is 150 GB, of which 75 GB is just empty. I am not sure if I should reclaim the space or just leave it half empty.

    • Paul Randal
      February 6, 2014

      If the database isn’t going to use that free space in the foreseeable future, go ahead and remove the free space, otherwise just leave it there.

  21. Sean Perkins
    March 19, 2014

    Just wanted to drop a line of praise – thank you Paul! I’ve had some hellacious times figuring out why my maintenance plans were failing on shrinks. I got a link from http://dba.stackexchange.com/questions/32905/shrink-database-operation-in-maintenance-plan-failed and I believe it has solved my problem, and I learned a ton. Here is a beer for your efforts.

  22. Gary
    March 19, 2014

    Paul,

    I had a database for a third party mailing application that grew to an unsightly size due to an issue with the grooming job (external to SQL Server). The data file used to be in the neighborhood of 250GB and eventually we had to grow it to 700GB (just the data file (and much to the chagrin of my SAN Admin)) before the company provided a fix for the issue.

    Now I have 75% free space in a 700GB data file for this database (which also contains 125 LOB data fields!). Shrinking it is necessary to regain the disk space, but it is painfully slow due to the LOB fields.

    http://www.sqlskills.com/blogs/paul/why-lob-data-makes-shrink-run-slooooowly-t-sql-tuesday-006/

    Any suggestions?

    Thanks,
    Gary

    • Paul Randal
      March 19, 2014

      You’ve really got no choice I’m afraid except to shrink. Even if you rebuilt the indexes to a new filegroup, you’d still have the LOB data to deal with, which would require shrink.

  23. Chris L.
    April 4, 2014

    Paul,

    First off, thanks for writing this, it’s been a huge help! Quick question however: why would Msft put out the info regarding shrinking the db if it is not a best practice? Per this link, there is a small blip about fragmentation in the ‘Best Practices’ section, but it states that it only fragments to a degree. This does not help when I’m telling my boss not to shrink a db because of the bad fragmentation when, at the same time, Msft is saying otherwise.

    Thanks,
    Chris

    • Paul Randal
      April 6, 2014

      There are all kinds of operations you can do that aren’t a best practice. It’s just not practical to list everything that’s *not* recommended to do regularly.

  24. Mirza
    April 25, 2014

    When I do data compression, the file size grows unless I do a dbcc shrinkfile. I know that it is not recommended to use dbcc shrinkfile. However, I am doing REORGANIZE for indexes after shrinking files.

    Is there a better way?

    • Paul Randal
      April 25, 2014

      The file size grows because you need to rebuild to enable data compression. Best way to do that is to rebuild the indexes into a new filegroup, then you can drop the old one. Otherwise you’re stuck with what you’re doing now.
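
      As a rough sketch (placeholder names), the move and the compression can be combined in one rebuild:

      -- Moves the data to [NewFG] and applies page compression in one operation
      CREATE UNIQUE CLUSTERED INDEX [CIX_MyTable]
      ON dbo.MyTable (MyKeyColumn)
      WITH (DROP_EXISTING = ON, DATA_COMPRESSION = PAGE)
      ON [NewFG];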

  26. Kumar
    July 3, 2014

    Hi Paul,

    I’m new to SQL Server. We periodically delete lots of data in our warehouse environment (we have SQL Server 2000-style partitioned views, with new daily partitioned tables created every day and an archival job to move old data to a different archive database).

    Even though our file sizes are big, more than 50% of each file is empty (checked using FILEPROPERTY() and sys.database_files). Our SQL Server is connected to a SAN. Based on sp_spaceused output, the DBAs keep adding space to the existing LUNs/files, or adding new files, to accommodate the new data. I’m not sure this is necessary, as the existing files are about 50% empty when I check the FILEPROPERTY(filename, 'SpaceUsed') output. I feel we are wasting multiple TB of space and bloating up the DBs.
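
    (The per-file check is roughly this – size values are in 8KB pages, hence the division by 128.0 to get MB:)

    SELECT name,
           size / 128.0                                      AS FileSizeMB,
           FILEPROPERTY (name, 'SpaceUsed') / 128.0          AS SpaceUsedMB,
           (size - FILEPROPERTY (name, 'SpaceUsed')) / 128.0 AS FreeSpaceMB
    FROM sys.database_files;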

    Can you please advise if my thinking is correct?

    • Paul Randal
      July 4, 2014

      If there’s free space in the database, there’s no need to add more space/files.

  27. Bisi
    July 6, 2014

    Hi Paul,

    I did the shrink file on July 3rd and the size grew from 30GB to 140GB by July 4th. What is my best option now to bring the transaction log back to its normal state and maintain a small size for the database?

    Any professional help and suggestion will be appreciated.

    Thanks,

    Bisi.

    • Paul Randal
      July 7, 2014

      You’re talking about log files – this is a post about data files. Google ‘kimberly 8 steps’ and read her blog post.

  28. Kumar
    July 22, 2014

    Hi Paul,

    Thanks for the advice on shrink… Can you recommend a stored proc or similar to move table(s) to a new filegroup? It seems there are many challenges involved in achieving it – when a table is a heap, the table has LOB data, different schemas exist within the database, the table has FKs, constraints, etc. Ours is a multi-terabyte database and there are all sorts of tables…

    Thanks,
    Kumar

    • Paul Randal
      July 23, 2014

      No – you’ll need to search on Google or write your own.

  29. jaysnow
    August 8, 2014

    Hi Paul, great article! I was having a discussion with another DBA today… he had used DBCC SHRINKFILE to shrink the data file on a prod database (he was in a tight spot). My position was that using SHRINKFILE on a data file is bad – I’ve read articles by you and others explaining the fragmentation this causes.

    However, he made the point that the actual command he ran was DBCC SHRINKFILE with TRUNCATEONLY. In BOL, the TRUNCATEONLY option is described as: “Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent.”

    So my question is: does the use of TRUNCATEONLY stop DBCC SHRINKFILE from causing fragmentation, because it’s working differently and not moving pages within the file? Or is it still a bad option?

    • Paul Randal
      August 8, 2014

      Yes, TRUNCATEONLY doesn’t do any data movement so won’t cause fragmentation.
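
      That form of the command is simply (the logical file name here is a placeholder):

      -- Only releases free space from the end of the file back to the OS;
      -- no pages are moved, so no fragmentation is introduced
      DBCC SHRINKFILE (N'MyDataFile', TRUNCATEONLY);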

  30. eric81
    September 17, 2014

    Paul,
    First off, love your work – it’s helped me out a lot in day-to-day activities. Now my question: in my case I have databases where records are deleted from time to time, rather than tables being dropped. I set up the following test script to delete records, then checked fragmentation (25%), next ran DBCC SHRINKDATABASE and checked the index fragmentation. Any ideas why..

    declare @i int = 0

    while @i < 1280
    begin
        delete from ProdTable where c1 = @i
        set @i = @i + 5
    end

    • Paul Randal
      September 17, 2014

      Any ideas why what? You didn’t say what the fragmentation was after the shrink. And how are you measuring fragmentation?

  31. Bill
    January 28, 2015

    If the goal is to remove unused data space (that will not grow again), and assuming you have a clustered index on the table, would it not always be better to just run an ALTER INDEX REBUILD? (If you have the time and space to run it that is).

    Won’t that both free up the same data space as a shrink AND avoid the fragmentation issue?

    “The only way to remove index fragmentation without causing data file growth again is to use DBCC INDEXDEFRAG or ALTER INDEX … REORGANIZE. These commands only require a single 8KB page of extra space, instead of needing to build a whole new index in the case of an index rebuild operation.”

    I was curious about this so just for fun I ran a test where I shrank the data files in a testing DB that had 100s of GB of data removed. The file size dropped and fragmentation was heavy as expected. I then ran an ALTER INDEX REBUILD on the fragmented tables. File sizes increased by a moderate amount. Looking at sys.database_files I noticed that the empty space in some of the files had increased. Am I correct in assuming this empty space is the result of the index rebuild operation? (ie: new index built, old index dropped, but the space is not reclaimed).

    • Paul Randal
      January 28, 2015

      No, a rebuild won’t free up space – it will only compact the index being rebuilt. If there’s a lot of free space from deleting data, a rebuild won’t affect that. It also won’t move all the free space to the end of the file where it can be lopped off to give that space back to the OS.

      Yes, the empty space is from the old index after the rebuild completes.

      The advice you quoted is to avoid the data files growing again when removing any fragmentation caused by the shrink.

  32. Mike
    February 12, 2015

    Paul,

    I ran into a situation today where I needed some space. I did the normal investigation of what my options were and decided to try the shrinkfile and then just defrag. The shrinkfile didn’t fragment anything (I was shocked), so I got the 50GB I was looking for relatively pain-free. Is DBCC SHRINKFILE fixed in SQL 2012? (It did of course generate a boatload of log activity.)

    • Paul Randal
      February 15, 2015

      No, same algorithm in all versions. Maybe there wasn’t much to move, or you got lucky with the way the free space was laid out. It’s rare to see no fragmentation after a shrink.

  33. Brian Sullivan
    March 13, 2015

    I can’t help but think Microsoft could fix this by:

    o Calculate how much space is used – the high-water mark (HWM)
    o Loop:
      * Find the highest page allocated in the file.
      * Move it to the highest free block below the HWM.

    • Paul Randal
      March 15, 2015

      Yup – there are several much better algorithms that would work – I was never permitted to fix shrink as it wasn’t deemed important enough. It won’t ever get fixed now.

  34. Frank Robinson
    June 3, 2015

    Paul, is there anything wrong with using ALTER INDEX … REORGANIZE after a DBCC SHRINKFILE? The fragmentation is very high after the shrink, but I see no alternative in my situation. We have a partitioned index with partitions in a different filegroup for each month. I’m not very familiar with partitioning, but it looks like I would have to use CREATE INDEX to migrate to a new filegroup, and I don’t know how to preserve the separation of monthly partitions into separate filegroups. If there is a way to move one filegroup at a time, that would be preferred, but I haven’t found one.

    • Paul Randal
      June 3, 2015

      You could do that. No – you can switch out a partition (the sliding window scenario). Recommend you read the partitioning whitepapers.
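
      For the sliding-window part, the switch itself is a metadata-only operation, roughly like this (hypothetical names; the target must be an empty table with an identical structure on the same filegroup):

      -- Switch the oldest partition out to a staging table, then archive or drop it
      ALTER TABLE dbo.Sales
      SWITCH PARTITION 1 TO dbo.SalesStaging;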

  35. Richard
    September 1, 2015

    I’ve a customer who restores an 800 GB production database to the Reporting server every night before updating a data warehouse from it.

    It takes 6 hours, but that’s ok, it feeds an in-house application (I’ve had the conversation about differential backups and replication).

    I would like to remove all the space I added when it was in production (1x primary mdf, 11x ‘fg_main’ ndf’s, 1x log file) NOTE: most of the production indexes are not needed (in reporting or dw).

    Advice please?

    • Paul Randal
      September 1, 2015

      If you want to remove the extra space every night, don’t bother, it would take too long at the size you’ve specified.

  36. Richard
    September 2, 2015

    Thanks Paul, I suspected that. :( I like to re-size data files in Prod to 30% periodically, but this is making space tight when it’s restored on Reporting. I was hoping you’d say something about dropping all the unused NC indexes and doing something fascinating with a new filegroup. But no worries – I’ll re-think that 30%.

  37. Krishna
    April 29, 2016

    Is there any math or general rule for how much free space we need to keep in the data file, based on data growth, database size, etc., to avoid performance hiccups?

    • Paul Randal
      April 29, 2016

      No rule or formula that I know of.

  38. SQLDBEnthusiast
    May 7, 2016

    It is a great post that has been referenced for years! Thank you in advance for responding, Paul. It is understood that data file shrink is bad for performance due to fragmentation, I/O and CPU cost, and log growth. Also, if the database will eventually use up the space, it seems logical to leave it alone. But every backup file is bigger than it needs to be. Also, if the database needs to be restored onto a new environment, it seems like the system is unnecessarily taxed by restoring a big data file. And in most of the existing databases there is one filegroup, which cannot be dropped (if I try to follow your preferred method of recreating the index in a different filegroup). Is there a T-SQL option to switch the data file or index to a new filegroup?

    • Paul Randal
      May 8, 2016

      Unfortunately, no.

  39. sanjeet
    September 22, 2016

    I just tried every one of the steps on SQL 2008 R2 and I got the same result in both runs – the fragmentation in percent was 0.625. Am I doing anything wrong?

    • Paul Randal
      September 23, 2016

      Yes, you’re doing something wrong – most likely you didn’t run the shrink.

  40. Kim
    December 14, 2016

    Hi Paul,

    If I really do need to perform auto-shrink on the data files, would running an index rebuild afterward help resolve the index fragmentation issue?

    Thanks,
    Kim

    • Paul Randal
      December 19, 2016

      There is no circumstance where you need to auto shrink. And no, rebuilding the indexes will cause the files to grow again. If you must shrink, reorganize the indexes afterwards.
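
      In outline, for a one-off shrink (file name, table name, and target size are placeholders):

      DBCC SHRINKFILE (N'MyDataFile', 10240);    -- target size in MB

      ALTER INDEX ALL ON dbo.MyTable REORGANIZE; -- removes the fragmentation
                                                 -- without growing the file again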
