Patching Challenges

In my career, I've had to manage many production database instances. In fact, there was a time when two of us were patching hundreds of instances (600?) whenever Microsoft released patches. That wasn't too often back then, unlike the every-other-month(ish) Cumulative Update schedule we have with SQL Server 2017/2019.

My process was often to test patches on QA/Test servers and then start to roll out the patches to production. We didn't quite follow what Brent Ozar recommends, in that we often patched development servers later. We didn't control those, and until production was patched, we couldn't get developers to patch their systems.

We didn't patch DR servers first, but that's a good idea and one that makes a lot of sense. We did patch secondaries first, ensuring that any issues wouldn't impact production. These days with Availability Groups, patching the secondary replicas first, especially read-only ones, is a good approach.

The big thing for me wasn't so much the type of servers as having a series of rings to roll out the patches in groups. We used automated processes (no one wants to click next-next-next), and while we might patch a lot of servers at once, we never wanted to patch them all. We typically had three rings. Ring 1 was test servers, and ring 2 was most of production. Ring 3 was any production servers that got an exemption from initial patching. There were times when some process was important and couldn't be interrupted, or a client needed a few more weeks to test. We'd let them delay a month, but not longer.
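The ring idea is simple enough to sketch. Here's a minimal dry-run version in shell; the server names, ring membership, and the `patch_ring` function are all hypothetical placeholders for whatever automation you actually use (the real command would be your remote installer, not an `echo`).

```shell
#!/bin/sh
# Hypothetical ring-based rollout. Server lists are illustrative.
RING1="qa-sql01 qa-sql02"          # ring 1: test servers
RING2="prod-sql01 prod-sql02"      # ring 2: most of production
RING3="prod-sql99"                 # ring 3: exempted servers, patched last

patch_ring() {
  for server in $1; do
    # In a real rollout this would call your patch automation;
    # echoed here as a dry run.
    echo "patching $server"
  done
}

patch_ring "$RING1"   # verify on test first
patch_ring "$RING2"   # then most of production
patch_ring "$RING3"   # exemptions after their delay (a month at most)
```

The point isn't the loop; it's that the ring membership lives in one place, so an exemption is a one-line move from ring 2 to ring 3 rather than a change to the process itself.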

I think it's important to have a strategy, and as Brent notes, also a protocol for how you handle things. I've often depended on our normal backup processes, especially in large environments. Since the patches tended to stop the SQL Server services anyway, I don't know how important it is to stop client apps first, but it's worth thinking about.

One note about backing out changes: containers make this a lot easier. If you move to production containers (Linux only, HA challenges, some features missing, etc.), you can swap out a container for one at a newer (or older) patch level as needed. There are caveats here, and I'd certainly start implementing this in a dev area first to understand the implications, but I expect that over time containers will make patch deployment and rollback much easier.
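The swap itself is just replacing the container while the databases stay on a volume. Below is a sketch as a dry run (the docker commands are echoed, not executed); the container name, volume name, and CU image tags are assumptions for illustration, though Microsoft does publish CU-tagged SQL Server images on mcr.microsoft.com.

```shell
#!/bin/sh
# Illustrative patch-level swap for a containerized SQL Server.
# Image tags, container name, and volume name are assumptions.
IMAGE_OLD="mcr.microsoft.com/mssql/server:2019-CU7-ubuntu-16.04"
IMAGE_NEW="mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-16.04"

swap_container() {
  # The data files live on the named volume 'sqldata', so the
  # replacement container sees the same databases at the new
  # (or old) patch level. Commands are echoed as a dry run.
  echo "docker stop sql1 && docker rm sql1"
  echo "docker run -d --name sql1 -v sqldata:/var/opt/mssql $1"
}

swap_container "$IMAGE_NEW"    # patch forward
# swap_container "$IMAGE_OLD"  # rollback is the same swap in reverse
```

One caveat worth naming: a CU can upgrade the database files on first startup, so rolling back to an older image isn't always as symmetric as the script suggests. Test the round trip in dev first.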

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.


Editor, SQLServerCentral