Preamble: In general it's a big no-no, but believe me, there are rare cases when the space is really needed. For example, Express Edition is limited to 10GB per database. Imagine you discover that a data type conversion (on a blob column) would free up a significant amount of space. But after the conversion the DB file still has the same size on disk, as we know, and the 10GB limit didn't magically change either. So some kind of SHRINK is needed. That was just one example.
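To see whether a conversion like that actually freed space inside the file (as opposed to at the file-system level), a query along these lines helps; a minimal sketch, to be run in the database in question:

-- How big is each file, and how much of it is actually used?
-- Sizes come back in 8 KB pages, hence the /128.0 to get MB.
SELECT name,
       size / 128.0 AS SizeMB,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS UsedMB,
       size / 128.0
         - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS FreeMB
FROM sys.database_files;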
In my test environment I performed:
DBCC SHRINKFILE (DBFile, NOTRUNCATE);
DBCC SHRINKFILE (DBFile, TRUNCATEONLY);
That did the trick (I know it minimizes the free space; in the real world I would leave some free space), but it took many, many hours to finish. As we know, it's a single-threaded process (Strange behaviour DBCC Shrinkfile: "it works as a series of very small system transactions so there is nothing to rollback." - Paul Randal, http://www.sqlservercentral.com/Forums/Topic241295-5-1.aspx). We also know that it messes up index fragmentation big time (http://www.mssqltips.com/sqlservertip/2055/issues-with-running-dbcc-shrinkfile-on-your-sql-server-data-files/), and I can confirm that. I didn't experience the log file growth described in http://www.karaszi.com/SQLServer/info_dont_shrink.asp though.
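For reference, this is roughly how the fragmentation the shrink leaves behind can be checked (a sketch; LIMITED is the cheapest scan mode, run in the affected database):

-- Quick fragmentation check, worst offenders first
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;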
I issued some INDEX REBUILD and REORGANIZE commands and those finished within seconds.
Does DBCC SHRINKFILE (DBFile, TRUNCATEONLY) cover what DBCC SHRINKFILE (DBFile, NOTRUNCATE) does, so that I just need to run the TRUNCATEONLY step? I have a feeling that the two work on different logical levels, but I have to ask.
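For what it's worth, my current understanding of the two options, based on the DBCC SHRINKFILE documentation:

-- NOTRUNCATE: moves allocated pages from the end of the file toward the
-- front, compacting the data, but does NOT release any space to the OS.
DBCC SHRINKFILE (DBFile, NOTRUNCATE);

-- TRUNCATEONLY: releases the unallocated space at the end of the file to
-- the OS, but does NOT move any pages. That's why the two are paired.
DBCC SHRINKFILE (DBFile, TRUNCATEONLY);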
Bottom line: I promise I won't shrink regularly or anything like that. But this is a situation where a cap has been hit and a shrink is needed.
Answer to my 2nd question: I didn't experience transaction log file growth because I was fiddling in a developer test environment. The transaction log file would grow in a production environment (assuming the Full recovery model); you should mimic that in your test environment too. The transaction log growth is actually bigger than the space you can free up inside the database file. In the end you not only want to shrink the database file, but also the transaction log (how and when to do that depends on the recovery model). The production environments I have seen had such badly degraded index fragmentation that a full reindex can only be an improvement. My script rebuilds every index whose fragmentation level is above 30%.
To handle the growth, you may have to switch from the Full recovery model to Simple while you are doing the once-in-a-lifetime conversion. But be aware that this breaks the backup chain.
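A sketch of that switch (MyDb and the backup path are placeholders; the full backup at the end restarts the log backup chain):

-- Switch to SIMPLE so the conversion and shrink don't bloat the log
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... do the data type conversion and the shrink here ...

-- Switch back and take a full backup to restart the backup chain
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb_full.bak';  -- placeholder path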
Answer to my 3rd question: do the reindex/index rebuild after the DB file shrink, because the shrink messes up index fragmentation.
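So for the once-off operation, the order I ended up with looks roughly like this (a sketch; note that step 3 needs working space, so the file may grow back a little):

-- 1. Free the space inside the file (the data type conversion on the blob column)
-- 2. Compact the file, then release the trailing space to the OS
DBCC SHRINKFILE (DBFile, NOTRUNCATE);
DBCC SHRINKFILE (DBFile, TRUNCATEONLY);
-- 3. Repair the fragmentation the shrink caused (see the script below)
-- 4. Deal with the transaction log according to your recovery model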
I haven't experimented with how all of this affects query statistics.
Script for reindexing: see Example D of the TechNet article sys.dm_db_index_physical_stats (Transact-SQL). For an article on incremental shrinking, which will lead to other valuable content, see http://www.mssqltips.com/sqlservertip/3178/incrementally-shrinking-a-large-sql-server-data-file-using-powershell
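For completeness, a simplified version of what my script does (modeled on Example D, but rebuild-only; Example D also REORGANIZEs the 5-30% range):

-- Rebuild every index whose fragmentation is above 30%
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
    + N' REBUILD;' + NCHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
JOIN sys.objects AS o
  ON o.object_id = ips.object_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;  -- skip heaps

-- PRINT @sql;  -- inspect the generated statements before running
EXEC sys.sp_executesql @sql;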
If you want to: go ahead. Since you say you will only be doing this in rare scenarios, I'm not sure what answer you're looking for here. Do you want a big thumbs-up that you can ignore everything you've read about it and roll the dice?
I will suggest at least one thing: hitting the 10GB limit in Express is going to happen if you have 10GB of data. You avoid that by removing data from the data file (or spreading your data across multiple databases).

If you free up a bunch of space inside the data file, that space will be re-used (if you dropped a table, this is automatic; if you deleted rows or changed column structure, you will need to rebuild, which will also take some space at least temporarily). So if you free up 5GB of space in the file, just because the file is 9GB does not mean it will try to become 11GB if you add 2GB of data; it will reuse space inside the file until it can't find any more.

It would be a much more expensive operation to shrink the file to 4GB and then force it to grow to accommodate the new data. That's like washing an already clean towel that you're about to use to wipe up a mess...