A Full Shutdown

I have the opportunity to work with a variety of customers on their database systems, often with the focus on how they can build and deploy changes to their databases. Often, they have a process around how and when they make changes. Some have maintenance windows, though often these are approved times for changes rather than a true window during which a system is shut down.

I ran into a customer recently who scheduled a system shutdown for their deployments. This was a surprise to me in 2026, as I thought most people would have learned to deploy changes to live systems. However, I know that many teams make changes that would render portions of the database inaccessible for a period of time, so maybe that’s not true. Maybe they just make changes and deal with the impact on clients.

I wanted to ask this question today: Do you shut down your system completely for a deployment? Not all systems, but the one you’re patching and possibly a few related ones, while preventing client access?

Or do you have to make changes while the system is still in use?

A lot of DevOps analogies revolve around the idea of cars and performing maintenance or improving them. One that I like is learning to replace all the parts of the car while it’s still running and in use. That can be hard in the real world, but we can often find ways to do this in software, including with databases. A little creativity can go a long way.

I love watching the evolution of Formula 1 pit stops as a way of visualizing the problem. This video is kind of amazing, but it shows the power of thinking about a problem and finding ways to improve it. In the 50s, pit stops could take 45-60 seconds or longer. If you look at the video, in 1990, they dropped this to less than 9 seconds. That seems amazing, but the power of continuous improvement shows this dropping to 7s in 2000. By 2010, 4 seconds. Then in 2020, 1.82 seconds to change all four tires.

This is still a full shutdown, albeit a very short one.

I constantly deal with people who think that they cannot find a way to make deployments easier, faster, or less impactful. I know that car racing teams used to feel that way about their pit stops. Once they tried to creatively work on their challenges, they found solutions that are truly amazing. Using new ideas and tools, they reached speeds no one could have imagined a decade ago.

I bet many of you can do the same thing to your databases with an open mind and a little tooling.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

Posted in Editorial

The DIY Cost of Masking Test Data For Smaller Organizations

One of the things I’ve tried hard to do in database development situations is to ensure I could easily refresh dev and test environments on demand. In a small startup, we wanted to be sure our weekly releases worked well, so if we found bugs in QA, we immediately filed a report with developers, and once they viewed a repro, we refreshed QA with a fresh copy from production.

Not the best approach, but for a small database in the early 2000s when we were less concerned about data breaches, this worked well.

At the time I remember discussing this challenge with Andy during one of our SQL Server Central catchups. He had a similar issue, though for him, they needed to clean the data. Their system included a bunch of email notifications and they couldn’t take the chance of sending out test emails. They were also in a regulated industry, with clients who were concerned about developers getting names and addresses from production.

Cleaning up names and addresses seems like a simple task, but there were endless variations, and new edge cases appeared constantly. It was a regular task for Andy to maintain and adjust his scripts to ensure data was masked well. This also resulted in no shortage of calls from others when things didn’t run smoothly.
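To give a flavor of what these DIY scripts end up doing, here is a minimal sketch of deterministic masking: sensitive values are replaced with stable pseudonyms so joins and lookups still work, and email addresses are rewritten to a reserved test domain so no notification can ever reach a real customer. The function names, salt value, and sample row are all hypothetical illustrations, not anything from Andy’s actual scripts, and a real script would need to handle far more edge cases than this.

```python
import hashlib

def mask_value(value: str, salt: str = "demo-salt") -> str:
    """Replace a sensitive string with a stable pseudonym.

    Hashing (salt + value) means the same input always masks to the
    same output, so foreign keys and joins in test data still line up.
    """
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"user_{digest}"

def mask_email(email: str) -> str:
    """Mask the local part and force a reserved test domain
    (.invalid can never resolve), so test runs cannot email customers."""
    local, _, _domain = email.partition("@")
    return f"{mask_value(local)}@example.invalid"

# Hypothetical production row being cleaned before a QA refresh
row = {"name": "Jane Smith", "email": "jane.smith@contoso.com"}
masked = {"name": mask_value(row["name"]), "email": mask_email(row["email"])}
```

Even this toy version hints at the maintenance burden: every new column, data format, or downstream system (names embedded in free-text notes, emails in log tables) means revisiting the script, which is exactly the never-ending project described below.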

A Time Sink

For smaller organizations (50-200 people), it wasn’t, and likely still isn’t, an option to purchase some of the more established tools in this area. They are too expensive and require a lot of resources (time and hardware) to get working.

At the same time, the DIY approach is essentially a commitment to a software development project, one that never ends and distracts people from their regular jobs. If you have staff on salary, this can seem like a good approach, but it’s often a waste of their efforts.

Even in the age of AI, I can see how this would be something that eats up sizable amounts of resources. While AI makes coding easier, directing that coding isn’t easy. And since models only keep limited context, I can see someone spending just as much time directing a model with prompts and correcting its mistakes as they might spend writing the code. I might be wrong, but since this isn’t always an easily defined task, I bet I’d spend a decent amount of time, even with Claude Code, constantly reshaping masking scripts.

Not to mention, I’d still be hoarding the knowledge in my head about how to direct the AI.

A Better Approach

I work for Redgate Software, and certainly I’m a bit biased here. We sell a solution in this area, but I’ve also helped shape (a little) how we approach this space based on the challenges I see from customers, and the needs they have to get a system working quickly. Not to mention an affordable solution that reduces the risk of accidental data loss or regulatory fines.

We’ve developed Test Data Manager to work within the constraints of small to medium sized organizations, both with functionality and price. It has a lot of what I want in a solution, though not everything. I still push on the product and engineering teams to add more features as well as reduce complexity wherever possible.

I want this to be ingeniously simple to use.

I’ve had the opportunity to work with a few customers that have become audit ready in hours by using the smart defaults and adding a bit of their knowledge about the system. The time to value keeps getting lower and I’m impressed by how the team responds to customer requests and demands. Like many of our products, we’re releasing regularly and adding features constantly.

The approach of having a tool that codifies what you want, is easily version controllable, and gets updated regularly is what most people want from software. We try to be good partners, and we’re working to ensure that customers not only get the value for the price they pay, but that value continues to increase throughout the year as we mature the software.

We’re releasing in a DevOps manner, to ensure you can do so for your organization.

If you’d like to see how Test Data Manager can keep you in control of your databases, reduce your risk of data loss, and help ensure compliance, give us a try.

We also have a webinar (Compliance Without Compromise: Test Data Management That Finally Fits) coming up on Mar 18 that you might check out for a quick look at some of the benefits of TDM.

Posted in Blog

T-SQL Tuesday #196: Taking Risks

This month we have a new host, James Serra. I’ve been trying to find new hosts over the last few years to keep this party going and to expand the ways in which people look at database work.

There are definitely fewer bloggers, but in the age of AI, I think the more you can stand out and show that you add value, that you think about things, that you can spot AI mistakes, the better off you are. So keep blogging and encourage others to do so.

This is a great invite, and I’ll give you my response below.

A Big Leap

I had changed jobs a few times after I moved to Colorado. I worked at an established small company, which was still a bit of a startup, and after sleeping in my office 8-10 times in a year, I looked for another job. Another startup, which failed, left me looking again. I got a job at JD Edwards, which turned into PeopleSoft after an acquisition. I was promoted, and in a good place.

At the time, I’d been running SQL Server Central on the side with Andy and Brian for a couple of years. We were making it work, but it was stressful in that we spent time at night, on weekends, and during breaks keeping it growing. We were feeling the stress and decided someone needed to work at it fulltime. We’d made a big sale, so we had some cash to give us stability, but if things went poorly, this was about half a year’s salary for any of us.

We debated it a bit and I decided to make the leap to working for the company fulltime. I took a small pay cut to do so, but we expected monthly dividends to make up for most of the decrease in pay. Not all, but most. The main reason I decided to make the leap was that my wife was working fulltime and provided our health benefits. That was a cost Andy or Brian would have needed to shoulder themselves.

This was a risk. At the time, I worried about my skills. Would they atrophy? If this didn’t work, could I go back to work as a consultant or fulltime DBA? Would I miss out on the learnings and growth that come from being involved fulltime in projects for an organization?

Looking back, it doesn’t seem like a big risk, but at the time, in the 2003 timeframe, it felt like a major leap away from the security and stability of corporate work. Sure, I’d changed jobs in the past, but I always had a strong track record of delivering results in a position. I was giving some of that up and the longer I was away, the more I was worried about coming back.

Over time I realized that a lot of what I do here with testing scenarios, mocking situations, and working through the challenges people face is the same type of work that I would do in a corporate environment. I don’t have the pressure from a manager, but I often put that on myself, so it’s very similar.

This was a calculated risk, but still a risk. Fortunately, it was one that worked out well.

Posted in Blog

Not Just an Upgrade

Upgrading my database server and moving from version 6 to version 7 because of a support cycle has always felt a little funny to me. In many cases, I’ve had systems that were running smoothly and performing as needed. If people were complaining, often this was because of a lack of resources, where we needed more hardware. In other cases, this was a lack of quality code, often from other developers who were unwilling to change their approach. In neither case was an upgrade likely to change anything.

However, an upgrade can be more than just buying a new license and accessing new features. I was reminded of this by John Sterrett, with a post on how he talks to CEOs about upgrades. The upgrade isn’t just a new database server. It’s a chance to re-evaluate the system and consider something besides the application.

In the list, John looks at this as a cost, security, and compliance decision. These days, Standard might be a better fit than Enterprise and can save on licensing. Better security can lower risk and potentially prevent issues. Being out of support, which is going to happen 3 times in the next 3 years, can be an issue for some companies. New features might reduce the costs of maintaining existing systems.

I don’t know that this list would have made a lot of sense in the 2000-2005 timeframe, or even in the 2008-2014 range, but it might now. There are considerations beyond just the license cost. Certainly I’d be re-examining my Standard v Enterprise choice in many situations and perhaps using this argument as a reason to press developers to learn to better structure their data models and write better queries. Lowering the resource usage can lower costs. Even archival might be something I’d press on, as less data is less data to query, and honestly, are those old records in tables truly adding value?

Or are they muddying the waters of analysis?

Better security matters, and I do think modern auth systems are better, but often this might require a security change in other parts of the org, and it still might require application redesign to account for a directory authenticating users. That might entail its own costs and not be worth the effort.

I don’t think upgrades should be automatic, and I am a fan of running a database server for ten years, but I also think that running one for 20 years might be a bad idea. Upgrades ought to be approached with the rational, logical view that this is an opportunity for us, but one that we might choose to take advantage of or pass on.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

Posted in Editorial