Better… Faster… SQL Server Backups

Ever since I replaced the legacy backup system in one of my recent projects with a centralized SQL Server backup solution, many people have asked me about the results of the transition. To put it simply, the backup size shrank to roughly a fifth of what it was (from 3.4 TB to 700 GB) and the time spent on backup tasks dropped from 6 hours to 1.5 hours. Those are good numbers to put up in a case study, but how exactly did I achieve these results?

My answer usually begins by explaining the process using terms from the SQL Server BACKUP command, and people who do not use the BACKUP command to back up their databases may not know some of them. Today I want to talk about some of the essential backup options in SQL Server that can help any DBA make backups go faster and better.


COMPRESSION

Backup compression was introduced in SQL Server 2008 Enterprise Edition, and has been available in Standard Edition as well since SQL Server 2008 R2. It makes your backups smaller and can also speed up both backup and restore operations. The only trade-off is that compressing and decompressing uses more CPU, but this is usually not a problem.
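A minimal example of a compressed backup (the database name AdventureWorks and the file path are placeholders, not from my setup):

```sql
-- Compressed full backup; STATS = 10 prints progress every 10%
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_Full.bak'
WITH COMPRESSION, STATS = 10;
```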


COPY_ONLY

I have come across many situations where people have made a one-off full backup of a database that is already on a backup schedule. That one-off full backup becomes the basis for subsequent differential backups, breaking the backup chain, especially when the one-off backup is not recorded properly. During a disaster recovery operation, this single one-off backup can derail the whole restore process. To keep one-off backups from affecting the backup chain, use the COPY_ONLY option: a copy-only full backup does NOT become the basis for subsequent differential backups, and a copy-only log backup does NOT become part of the log backup chain.
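A sketch of such a one-off backup (database name and path are placeholders):

```sql
-- One-off full backup that does NOT reset the differential base
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_CopyOnly.bak'
WITH COPY_ONLY, COMPRESSION;
```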


CHECKSUM

This option does two things:

  1. As data file pages are read from the data files, their page checksums are validated. If an invalid checksum is found, by default the backup fails and reports a message indicating which page is corrupt.
  2. A checksum is calculated over the entire backup stream and stored in the backup header.

From the above we can conclude that this option gives SQL Server a way to detect that the I/O subsystem has corrupted a page. So if your backups use this option and start failing, that may indicate corruption in your database. This makes the option an absolute requirement in my view: it should be used on all backups, manual or automated.
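In practice that looks like this (paths are placeholders); the second statement re-verifies an existing backup file without restoring it:

```sql
-- Validate page checksums during the backup; fail on the first bad page
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_Full.bak'
WITH CHECKSUM;

-- Re-check the backup file itself, including its stored checksum
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\AdventureWorks_Full.bak'
WITH CHECKSUM;
```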

The next and last of the BACKUP options are ones that not many people are used to working with, but they can help speed up the backup process. There are no fixed values to follow for these options; they need a trial-and-measure approach instead. I suggest enabling the following trace flags before testing, so you can see which values work best in your particular server setup (trace flag 3213 prints the backup's internal configuration, such as buffer count and transfer size, and 3605 sends that output to the error log):

DBCC TRACEON (3605, 1)
DBCC TRACEON (3213, 1)


BLOCKSIZE

This specifies the physical block size, in bytes. The supported sizes are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536 (64 KB) bytes. The default is 65536 for tape devices and 512 otherwise. In my tests, specifying a large 64 KB block size instead of the default 512 bytes typically showed a 5-6% improvement in backup throughput.


BUFFERCOUNT

This specifies the total number of I/O buffers to be used for the backup operation. The total memory used by the buffers is BUFFERCOUNT * MAXTRANSFERSIZE.


MAXTRANSFERSIZE

This specifies the largest unit of transfer, in bytes, between SQL Server and the backup media. The possible values are multiples of 64 KB, up to 4194304 bytes (4 MB). The default is 1 MB.

Once you have enabled the trace flags, you can play around with these values and find a combination that gives you the maximum buffer space usage. Once that point is reached, increasing the parameter values further is not fruitful.
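A sketch of what a tuned command might look like. The values shown (64 KB blocks, 50 buffers, 4 MB transfers) are illustrative starting points for your own trial-and-measure runs, not recommendations; with these values the backup uses 50 * 4 MB = 200 MB of buffer memory:

```sql
-- Tuned backup: buffer memory = BUFFERCOUNT * MAXTRANSFERSIZE = 200 MB
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_Full.bak'
WITH COMPRESSION, CHECKSUM,
     BLOCKSIZE = 65536,          -- 64 KB physical block size
     BUFFERCOUNT = 50,           -- number of I/O buffers
     MAXTRANSFERSIZE = 4194304;  -- 4 MB per transfer
```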

These are the parameters that I tuned in the central backup system I deployed, using the BACKUP options mentioned above to get better, faster SQL Server backups. To summarize: backup compression is a great feature to use, as it saves disk capacity and reduces the time needed to back up your data. You can also write to multiple backup destination files, which I have not covered here but is another effective way to increase backup throughput. Then, by specifying the BUFFERCOUNT, BLOCKSIZE, and MAXTRANSFERSIZE parameters, you can typically double the backup throughput, halving the backup time. So the next time you need a better and faster backup, don't just ask for better hardware; look at these BACKUP options and see if you can achieve your task with them.
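Striping to multiple destination files, mentioned above only in passing, looks like this (paths are placeholders; ideally each stripe goes to a separate physical drive so the writes run in parallel):

```sql
-- Striped backup: SQL Server writes to all listed files in parallel
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_1.bak',
   DISK = N'E:\Backups\AdventureWorks_2.bak'
WITH COMPRESSION, CHECKSUM;
```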

That’s all for today, folks. Let me know if you have come across a backup performance improvement task like this, and what exactly you did to resolve it.
