SHRINK – The Evil Doctor

Every DBA faces the day-to-day challenge of making the best use of the available resources, and storage is often the toughest one to handle. There are plenty of ways to reclaim free space from the storage designated to our servers, and we as DBAs hear this suggestion frequently from developers and other stakeholders: "Let's just shrink the database, and we can get free space."

There was a time when I'd readily say yes to it, but once you realize how evil that idea is, the first and foremost thing to say is: "That's going to hit our server's performance." The only time a shrink won't affect performance is when we use the WITH TRUNCATEONLY option and there is free space at the end of the file being shrunk. Shrink hurts performance while it's running: it moves huge amounts of data around, generating I/O, fully logging everything it does and burning CPU, and because the operation is single-threaded, it is really slow.

Shrink affects performance after it has run as well. All that log has to be backed up, and if the data file grows again, the new space has to be zeroed out, unless instant file initialization is enabled. Worst of all, shrink causes index fragmentation, which badly impacts the performance of range scans.
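If you're not sure whether instant file initialization is enabled, SQL Server 2016 SP1 and later expose it through a DMV (on older versions you'd instead check whether the service account holds the "Perform volume maintenance tasks" privilege):

```sql
-- Returns Y/N per service; only the database engine service row matters here.
SELECT servicename,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```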

The shrink option does allow us to regain some free space, but it has its consequences. So let's look at the workarounds we have to gain free space, with or without shrinking.

Option 1: Try a TRUNCATEONLY first

Try a DBCC SHRINKFILE with the TRUNCATEONLY option first. This just removes empty space from the end of the file, without reorganizing the used data pages first. It probably won't free as much space as we'd like, but it's worth a try, especially as it's much faster than a SHRINKFILE without the TRUNCATEONLY option.
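A minimal sketch of that first attempt (the database and logical file names here are placeholders for your environment):

```sql
USE SalesDB;  -- hypothetical database name
GO
-- Release only the unused space at the end of the file;
-- no pages are moved, so no fragmentation is introduced.
DBCC SHRINKFILE (N'SalesDB_Data', TRUNCATEONLY);
GO
-- Check how much space each file actually holds afterwards
-- (size is reported in 8 KB pages, hence the division by 128).
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
```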

Option 2: REORGANIZE/REBUILD indexes after the SHRINKFILE

If a SHRINKFILE must be done, allocate enough downtime for the process to complete, but be aware that we can stop the process at any time without incurring a rollback. The operation is logged, and any work done up to the point of cancelling is kept, so we can pick up again later and the time we just spent isn't totally wasted. The downside is that we have just fragmented a lot of our data.

Once the SHRINKFILE has completed, identify the indexes that need to be rebuilt or reorganized and run that process. This is where estimating the downtime gets tricky: we have to allow time not only for the shrink itself, but also for the REBUILD/REORGANIZE operations after the SHRINKFILE has completed. The rebuilds will also have the negative side effect of growing our file again, though hopefully not by as much as we just shrank it.
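One way to find the indexes worth touching after the shrink; the thresholds below are the commonly cited rules of thumb, not hard limits, so adjust them for your workload:

```sql
-- Fragmentation per index, skipping tiny indexes where it doesn't matter.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, per index: REORGANIZE for light fragmentation, REBUILD for heavy.
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;                  -- roughly 10-30%
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD WITH (ONLINE = ON);  -- above 30%; ONLINE needs Enterprise Edition
```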

Option 3: Move user data to a new file

Add a new file (or files) to the database in a new FileGroup, rebuild all the indexes/data into the new FileGroup, then shrink the original file(s). This method has several advantages:

1. Avoids the fragmentation problem on the SHRINKFILE operation for the original file, as there is no user data left to move around and fragment

2. Makes the SHRINKFILE operation on the original file go much faster, as there's no user data to move around

3. We don’t need to re-index tables in the original file, so we don’t end up re-growing it again

4. We can do it in smaller, more manageable chunks without impacting the data left in the original file

Of course, the new FileGroup should be set as the default; otherwise new tables/objects will keep being added to the old FileGroup. This may not be possible for a wide variety of reasons: not enough disk space for the new file(s), heap tables with no clustered index to rebuild, management reluctance, etc. However, this option does have the distinct advantage of implementing a best practice: separating the user data from the system data in the database.
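The whole sequence can be sketched as below. All the names, paths, and sizes are placeholders, ONLINE = ON needs Enterprise Edition, and step 3 has to be repeated for each table you want moved:

```sql
-- 1. Add a new FileGroup and a file in it.
ALTER DATABASE SalesDB ADD FILEGROUP UserData;
ALTER DATABASE SalesDB
ADD FILE (NAME = N'SalesDB_UserData1',
          FILENAME = N'D:\Data\SalesDB_UserData1.ndf',
          SIZE = 10GB, FILEGROWTH = 1GB)
TO FILEGROUP UserData;

-- 2. Make it the default so new objects land there, not in PRIMARY.
ALTER DATABASE SalesDB MODIFY FILEGROUP UserData DEFAULT;

-- 3. Move a table by rebuilding its clustered index onto the new FileGroup.
CREATE UNIQUE CLUSTERED INDEX PK_Orders
ON dbo.Orders (OrderID)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON UserData;

-- 4. Shrink the now mostly empty original file (target size in MB).
DBCC SHRINKFILE (N'SalesDB_Data', 1024);
```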

As an end note, just remember: data file shrink, SHRINKDATABASE, and auto-shrink are never going to be fruitful in the long run and should be avoided as much as possible. Let me know your thoughts on shrinking data files in SQL Server; I would love to hear them.
