Multiple Deployment Processes

We recorded a Simple Talks podcast recently where we discussed rolling forward vs. rolling back. You can watch the episode and listen to our thoughts, but one interesting moment came when we talked about deployments. Grant mentioned that he deployed from version control/source control at a previous employer. I asked him whether he did that for every system.

His response: “Well, …”

He admitted that most, but not all, databases came from a controlled source. There were some systems that had a more ad hoc change process. I wonder how many of you have consistent processes throughout your organization. I suspect not many of you do, especially if an organization isn’t small. Often, different groups and applications are in a constant state of flux, with lots of different processes and protocols.

Some groups are more mature and have stable staff who expect to deploy changes in a certain way. This might be on a known cadence, with documents or processes already in place. Other applications might have been developed quickly; perhaps they are newer and use more automation to deploy changes. Some might even use packages from an ORM or a vendor that take control of database changes away from anyone managing the database. Does anyone deal with Spring Boot and very optimistic developers?

I wonder how many of you have a consistent process for promoting database code to production across all your teams. Maybe 80% is a better metric, as this accounts for those groups severely limited by legacy technology or those that might be experimenting with new ways of working.

Even those companies that have platform engineering groups in place to ease the flow for both developers and operations often aren’t consistent throughout the organization. Often, getting everyone to adopt a standard is hard and takes time.

That might be the biggest challenge with standardizing database deployments: time. Organizations grow and change, new technologies come, and by the time we think we’ve gotten everyone to agree to change, who everyone is has changed. We have someone or something new, and we’re forever chasing standardization. Even when we might have a great DevOps process or a platform engineering team for software, we don’t do this for databases.

I believe having a consistent, standardized process is a worthwhile goal, but one where 80% success is probably good enough in most organizations. If you can get most teams to follow the same process, you’ll increase efficiencies and ensure a better software development life cycle.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


Advice I Like: Knots

“Learn how to tie a bowline knot. Practice in the dark. With one hand. For the rest of your life, you’ll use this knot more times than you would ever believe.” – from Excellent Advice for Living

I like this advice, not because I use this knot a lot, but because there are things you should be able to do in the physical world, and some self-reliance and ability is handy.

For the record, I do use bowlines regularly around the ranch, though we use some horseman’s slipknots (the manger tie) and square knots more.

In my life, there are small skills like this that get used over and over. We still live in the physical world, and not only are physical skills useful, but there is a lot of satisfaction from being able to accomplish something yourself.

I’ve been posting New Words on Fridays from a book I was reading; however, a friend thought they were a little depressing. They should be, as they are obscure sorrows. I like them because they make me think.

To counterbalance those, I’m adding in thoughts on advice, mostly from Kevin Kelly’s book. You can read all these posts under the advice tag.


A Full Shutdown

I have the opportunity to work with a variety of customers on their database systems, often with a focus on how they build and deploy changes to their databases. Usually, they have a process around how and when they make changes. Some have maintenance windows, though these are often approved times for changes rather than a true window during which a system is shut down.

I ran into a customer recently who scheduled a system shutdown for their deployments. This was a surprise to me in 2026, as I thought most people would have learned to deploy changes to live systems. However, I know that many teams make changes that would render portions of the database inaccessible for a period of time, so maybe that’s not true. Maybe they just make changes and deal with the impact on clients.

I wanted to ask this question today: Do you shut down your system completely for a deployment? Not all systems, but the one you’re patching and possibly a few related ones, while preventing client access?

Or do you have to make changes while the system is still in use?

A lot of DevOps analogies revolve around the idea of cars and performing maintenance or improving them. One that I like is learning to replace all the parts of the car while it’s still running and in use. That can be hard in the real world, but we can often find ways to do this in software, including with databases. A little creativity can go a long way.

I love watching the evolution of Formula 1 pit stops as a way of visualizing the problem. This video is kind of amazing, and it shows the power of thinking about a problem and finding ways to improve it. In the 1950s, pit stops could take 45-60 seconds or longer. If you look at the video, by 1990 they had dropped this to less than 9 seconds. That seems amazing, but the power of continuous improvement shows this dropping to 7 seconds in 2000. In 2010, 4 seconds. Then in 2020, 1.82 seconds for four tires changed.

This is still a full shutdown, albeit a very short one.

I constantly deal with people who think that they cannot find a way to make deployments easier, faster, or less impactful. I know that car racing teams used to feel that way about their pit stops. Once they tried to creatively work on their challenges, they found solutions that are truly amazing. Using new ideas and tools, they reached speeds no one could have imagined a decade ago.

I bet many of you can do the same thing to your databases with an open mind and a little tooling.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


The DIY Cost of Masking Test Data For Smaller Organizations

One of the things I’ve tried hard to do in database development situations is ensure I could easily refresh dev and test environments on demand. In a small startup, we wanted to be sure our weekly releases worked well, so if we found bugs in QA, we immediately filed a report with the developers, and once they had viewed a repro, we refreshed QA with a fresh copy from production.

Not the best approach, but for a small database in the early 2000s when we were less concerned about data breaches, this worked well.

At the time, I remember discussing this challenge with Andy during one of our SQL Server Central catchups. He had a similar issue, though for him, they needed to clean the data. Their system included a bunch of email notifications, and they couldn’t take the chance of sending out test emails. They were also in a regulated industry, with clients who were concerned about developers getting names and addresses from production.

Cleaning up names and addresses seems like a simple task, but there were endless variations, and new edge cases appeared constantly. It was a regular task for Andy to maintain and adjust his scripts to ensure data was masked well. This also resulted in no shortage of calls from others when things didn’t run smoothly.
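To give a feel for what these DIY scripts tend to look like, here is a minimal sketch of that kind of masking logic. This is a hypothetical illustration, not Andy’s actual code: the names, functions, and the `example.invalid` test domain are all my own assumptions. It uses deterministic hashing so the same real value always maps to the same fake value, which keeps joins between tables intact after masking, and it rewrites every email onto a reserved, non-routable domain so no test notification can ever reach a real person.

```python
import hashlib

# Hypothetical DIY masking sketch. Deterministic: the same input always
# maps to the same fake value, so cross-table joins survive masking.
FIRST_NAMES = ["Alex", "Jordan", "Sam", "Taylor", "Casey"]

def _bucket(value: str, size: int) -> int:
    """Map a value to a stable bucket via a hash digest."""
    digest = hashlib.sha256(value.lower().encode("utf-8")).hexdigest()
    return int(digest, 16) % size

def mask_name(name: str) -> str:
    """Replace a real name with a stable, obviously fake placeholder."""
    return f"{FIRST_NAMES[_bucket(name, len(FIRST_NAMES))]} Test{_bucket(name, 1000):03d}"

def mask_email(email: str) -> str:
    """Rewrite any address onto a reserved, non-routable test domain."""
    local, _, _ = email.partition("@")
    return f"user{_bucket(local, 100000):05d}@example.invalid"

row = {"name": "Jane Smith", "email": "jane.smith@contoso.com"}
masked = {"name": mask_name(row["name"]), "email": mask_email(row["email"])}
```

Even this toy version hints at the maintenance burden: real data brings multi-part surnames, free-text address fields, emails embedded in comment columns, and so on, which is exactly the never-ending edge-case chase described above.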

A Time Sink

For smaller organizations (50-200 people), it wasn’t, and likely still isn’t, an option to purchase some of the more established tools in this area. They are too expensive and require a lot of resources (time and hardware) to get working.

At the same time, the DIY approach is essentially a commitment to a software development project, one that never ends and distracts people from their regular jobs. If you have staff on salary, this can seem like a good approach, but it’s often a waste of their efforts.

Even in the age of AI, I can see how this would be something that eats up sizable amounts of resources. While AI makes coding easier, directing that coding isn’t easy. And since models only keep limited context, I can see someone spending just as much time directing a model with prompts and correcting its mistakes as they might spend writing the code. I might be wrong, but since this isn’t always an easily defined task, I bet I’d spend a decent amount of time, even with Claude Code, constantly reshaping masking scripts.

Not to mention, I’d still be hoarding the knowledge in my head about how to direct the AI.

A Better Approach

I work for Redgate Software, and certainly I’m a bit biased here. We sell a solution in this area, but I’ve also helped shape (a little) how we approach this space based on the challenges I see from customers and their need to get a system working quickly, with an affordable solution that reduces the risk of accidental data loss or regulatory fines.

We’ve developed Test Data Manager to work within the constraints of small to medium-sized organizations, both in functionality and price. It has a lot of what I want in a solution, though not everything. I still push the product and engineering teams to add more features as well as reduce complexity wherever possible.

I want this to be ingeniously simple to use.

I’ve had the opportunity to work with a few customers that have become audit ready in hours by using the smart defaults and adding a bit of their own knowledge about the system. The time to value keeps getting lower, and I’m impressed by how the team responds to customer requests and demands. Like many of our products, we’re releasing regularly and adding features constantly.

The approach of having a tool that codifies what you want, is easily version controllable, and gets updated regularly is what most people want from software. We try to be good partners, and we’re working to ensure that customers not only get the value for the price they pay, but that value continues to increase throughout the year as we mature the software.

We’re releasing in a DevOps manner to help ensure you can do the same for your organization.

If you’d like to see how Test Data Manager can keep you in control of your databases, reduce your risk of data loss, and help ensure compliance, give us a try.

We also have a webinar (Compliance Without Compromise: Test Data Management That Finally Fits) coming up on Mar 18 that you might check out for a quick look at some of the benefits of TDM.
