There’s no magic solution for many of the problems we face in SQL Server. Whether we are trying to maintain concurrency under high write loads, restore databases more quickly, or ensure extremely high availability, we need to make trade-offs. Sometimes those trade-offs are frustrating, and people look to other solutions like NoSQL, not realizing that they are just making a different set of trade-offs.
I see the frustrations and comments constantly on the SQLServerCentral forums, and one recent discussion was no exception. In this particular debate, a user was frustrated by the requirement to copy backup files across the Internet to another location. The transfers were taking too long, and the poster was trying to find some magic way of decreasing the transfer time. Increasing the bandwidth, the simplest solution, wasn’t an option, and that usually isn’t easy to do in a corporate environment anyway. The poster seemed to think there had to be some magic way of reducing the time it takes to copy large backup files, but there isn’t one. The things that reduce transfer time are usually the common things we already know about.
Compression is an obvious solution. There are numerous free programs to compress files, and even software that will compress the backups as they occur. There are also techniques in Windows for speeding up copies, though many depend on the Windows version. Beware, however, since running copies from your server itself might cause memory pressure. Beyond that, there might be other, more creative solutions that people come up with.
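To get a feel for how much compression can matter before a transfer, here’s a minimal Python sketch. The file names and contents are invented for illustration; in practice you’d point this at a real .bak file, or better, let SQL Server or a backup tool compress the backup as it’s written.

```python
import gzip
import os
import shutil

def compress_backup(src_path, dest_path):
    """Gzip-compress src_path into dest_path; return (original, compressed) sizes in bytes."""
    with open(src_path, "rb") as src, gzip.open(dest_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return os.path.getsize(src_path), os.path.getsize(dest_path)

# Hypothetical stand-in for a backup file; real backups compress far less
# predictably than this highly repetitive placeholder data.
with open("demo.bak", "wb") as f:
    f.write(b"0" * 1_000_000)

original, compressed = compress_backup("demo.bak", "demo.bak.gz")
print(f"{original} bytes -> {compressed} bytes")
```

The ratio you actually get depends heavily on the data; databases full of already-compressed or encrypted pages may barely shrink at all.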
Imagine that you track the transfers, keeping metadata about the process. Perhaps you can find a way to send less data overall the next time. In some sense, this is the idea behind log shipping. If you only send the changes (the transaction log backups), you might save a tremendous amount of time and resources. That assumes, of course, that the changes in a given period contain less data than the overall database, which might not be the case for some workloads.
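That trade-off comes down to simple arithmetic. All the numbers below are invented for illustration; the point is only that log shipping wins when a period’s changes stay smaller than the database itself:

```python
# Hypothetical sizes, in GB, for comparing transfer strategies.
full_backup_gb = 500        # size of one full database backup
log_backups_per_day = 24    # hourly transaction log backups
avg_log_backup_gb = 2       # average size of each log backup

daily_full_transfer = full_backup_gb
daily_log_transfer = log_backups_per_day * avg_log_backup_gb

# Shipping only the changes beats re-sending the full backup each day
# whenever the day's log backups add up to less than the database.
print(daily_log_transfer, "GB of logs vs", daily_full_transfer, "GB full backup")
```

Flip the numbers around, say a heavily churned database where a day of log backups exceeds the full backup size, and sending the full backup becomes the cheaper transfer.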
There are other services that might help. Imagine that you back up to the cloud, whether through a tool, a feature, or because your application already lives there. Spinning up a new instance with a copy of that data in the cloud might eliminate the need for transfers completely, if the other location can just access the data in the cloud.
I’m sure some of you have other ideas, and I’m sure many of us would like to hear what creative ways you might have for moving data between locations.
The Voice of the DBA Podcast
No podcasts today due to some personal issues. The podcasts should return tomorrow.