SHRINKFILE best practices and experience

by Csaba Toth   Last Updated January 11, 2018 18:06 PM

Preamble: In general it's a big no-no, but believe me, there are rare cases when space is really needed. For example, Express Edition is limited to 10GB. Imagine you discover that a data type conversion (on a blob column) can free up a significant amount of space. But after the conversion the DB file still has the same size, as we know, and the 10GB limit also doesn't change magically, so some kind of SHRINK is needed. That was just one example.

In my test environment I performed:


That did the trick (I know it minimizes the free space; in the real world I would leave some free space), but it took many, many hours to finish. As we know, it's a single-threaded process (see Strange behaviour DBCC Shrinkfile): "it works as a series of very small system transactions so there is nothing to rollback." - Paul Randal. We also know that it messes up index fragmentation big time, and I can confirm that. I didn't experience the log file growth that others have described, though.
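The exact command isn't shown above, but given the two-phase form referenced in question 3 below, it was presumably something along these lines (the logical file name DBFile is taken from that question; this is a sketch, not the exact command that was run):

```sql
-- Phase 1: move pages toward the front of the file, compacting used space
-- (does NOT release any space to the operating system)
DBCC SHRINKFILE (DBFile, NOTRUNCATE);

-- Phase 2: release the now-unused space at the end of the file to the OS
DBCC SHRINKFILE (DBFile, TRUNCATEONLY);
```

It is phase 1, the page-by-page compaction, that is single-threaded and slow, and that shuffles index pages out of order.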

I issued some INDEX REBUILD and REORGANIZE commands, and those finished within seconds.
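For reference, those maintenance statements have this shape (the table and index names here are placeholders, not the ones from my database):

```sql
-- REBUILD: creates a fresh copy of the index, removing fragmentation
-- and refreshing the index's statistics
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;

-- REORGANIZE: compacts and reorders leaf pages in place;
-- lighter-weight than REBUILD and always an online operation
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;
```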

My questions:

  1. What's the big deal about shrinking if I can just fix the index fragmentation within seconds afterwards? I don't understand. Here on DBA Stack Exchange the "shrink" topic says "Pretty much the worst thing you could do to a SQL Server database. In short: It sacrifices performance to gain space.", and refers to another popular article about it. But index fragmentation can be fixed.
  2. Why didn't I experience any log file growth?
  3. What if I REBUILD the indexes first, right after the space free-up operation? Can that substitute for the first DBCC SHRINKFILE (DBFile, NOTRUNCATE), so that I only need DBCC SHRINKFILE (DBFile, TRUNCATEONLY)? I have a feeling the two work at different logical levels, but I have to ask.

Bottom line: I promise I won't be shrinking regularly or anything like that. But this is a situation where a cap is hit and a shrink is needed.

Answer to my 2nd question: I didn't experience transaction log file growth because I was fiddling in a developer test environment. In a production environment (assuming the Full recovery model) the transaction log file would grow, and you should mimic that in your test environment too. The transaction log growth is actually bigger than the space you can free up inside the database file. In the end you want to shrink not only the database file but also the transaction log (how and when to do that depends on the recovery model). The production environments I've seen had such badly fragmented indexes already that a post-shrink rebuild can only improve things. My script rebuilds every index whose fragmentation level is above 30%.

To handle that growth, you may have to switch from Full recovery to Simple while you perform the once-in-a-lifetime conversion. But that breaks the backup chain.
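As a sketch of that recovery-model switch (the database name MyDb and the backup path are assumptions for illustration):

```sql
-- Switch to SIMPLE so the one-off conversion does not bloat the transaction log
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... perform the data type conversion, rebuilds, and shrink here ...

-- Switch back, then take a full backup to restart the log backup chain
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb_full.bak';
```

Until that full backup completes, log backups cannot resume, which is exactly why the backup chain is considered broken.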

Answer to my 3rd question: do the reindex/index rebuild after the DB file shrink, because the shrink messes up index fragmentation.

I haven't experimented with how all of this affects query statistics.

Script for reindexing: see Example D of the TechNet documentation for sys.dm_db_index_physical_stats (Transact-SQL). There is also an article on incremental shrink, which leads to other valuable content.
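A minimal version of such a script, assuming the 30% rebuild threshold mentioned above, might look like this (a sketch modeled on the documented usage of sys.dm_db_index_physical_stats, not the author's actual script):

```sql
-- Rebuild every index in the current database whose average
-- fragmentation exceeds 30%
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
             + N' REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND ps.index_id > 0;   -- skip heaps, which cannot be ALTER INDEX'ed by name

EXEC sys.sp_executesql @sql;
```

The LIMITED scan mode keeps the DMV call itself cheap; a common refinement is to REORGANIZE instead of REBUILD for fragmentation between roughly 5% and 30%.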

1 Answer

If you want to:

  1. ignore all of the warnings from very smart people with a lot of experience in SQL Server;
  2. assume that because in this case you didn't experience log growth and in this case fixing the fragmentation was quick and didn't cause any further growth, that will always be the case; and,
  3. not confirm that this grow/shrink/grow/shrink cycle has any impact on query times;

Go ahead. Since you say you will only be doing this in rare scenarios, I'm not sure what answer you're looking for here. Do you want a big thumbs-up that you can ignore everything you've read about it and roll the dice?

I will suggest at least one thing: hitting the 10GB limit in Express is going to happen if you have 10GB of data. You avoid that by removing data in the data file (or spreading your data across multiple databases). If you free up a bunch of space inside the data file, that space will be re-used (if you dropped a table, this will be automatic; if you deleted rows or changed column structure, you will need to rebuild - which will also take some space at least temporarily). So if you free up 5GB of data in the file, just because the file is 9GB does not mean that it will try to become 11GB if you add 2GB - it will reuse space inside the file until it can't find any more. It will be a much more expensive operation to shrink the file to 4GB, then force it to grow to accommodate the new data. This is like washing an already clean towel that you're about to use to wipe up a mess...
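To see how much reusable free space is already inside each file before deciding whether a shrink is worth anything, a query along these lines can help (file sizes are reported in 8KB pages, so dividing by 128 gives MB):

```sql
-- Free space inside each file of the current database
SELECT name,
       size / 128.0                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;
```

If free_mb is large, new data will simply reuse that space; the file will not grow again until the internal free space is exhausted.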

Aaron Bertrand
July 30, 2013 20:18 PM
