Poor Name Choice

I wrote recently about some work with Redgate Clone, and one of the things I did was start up a blank container instance of SQL Server from the image named empty-sql-current. This image contains SQL Server 2019. Clearly, “current” was a poor choice.

I see this often in various places, where someone references “current”, “new”, “latest”, or some other term that denotes the most recent changes. If everyone reading the reference does so with knowledge of the past and at a time close to publication, this works fine. However, does it still make sense a year later? At the same time, I do like consistent names that might be used in scripts. If I always want developers pulling the latest item, I might use “latest”. However, if versions are important, then “latest” or “current” might not be the best choice. Much of the time, I try to get a version or some other specific indicator into a name.

It’s like seeing the words “the fastest SQL Server ever” (or pick your technology) in a release announcement. At that time, it might be the fastest SQL Server release, but when the next version is released (hopefully) that won’t be true.

As I’ve matured, I aim to build things that last, thinking beyond what the world looks like right now. This includes architecture decisions and more, but it also includes naming. I reference specific versions, times, etc., with the idea that I want the name to convey some information. I even name my containers with the port I use, because it makes it really easy to see which database container is running on 1433 and which is on 41433.

The other consideration for naming, for me, is to include data in the name that I will use for searching or sorting. Perhaps that means using good date practices, like 2025-05-01 and 2025-10-03, to ensure my files sort correctly. That might be very important for things like backup files. Maybe it’s using something like “Customer_Copy_Delete_After_Year_Close” for a copy of data that might be relevant through our current financial cycle.
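As a quick illustration (the file names here are made up), ISO-style dates sort chronologically even when treated as plain strings, which is why they work so well in backup file names:

```sql
-- yyyy-mm-dd dates sort correctly as text, unlike names
-- such as 'Backup_May-1-2025.bak'
SELECT f.name
  FROM (VALUES ('Backup_2025-10-03.bak'),
               ('Backup_2025-05-01.bak')) AS f(name)
 ORDER BY f.name;
-- 2025-05-01 sorts before 2025-10-03, as intended
```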

I often do like using names that come to mind first, as this can help me find things, but I also have learned to be more explicit when using names as a way to convey information. With modern computing and support for large names, it sometimes pays to be descriptive.

The only thing I try to avoid is spaces. For the most part, file explorers and web servers handle spaces, but sometimes things break, so I’ve learned to avoid spaces where possible.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


The Book of Redgate: What Our Customers Say

This is from 2010, but I loved that people felt this way about Redgate Software. A lot of these words are things that we aim to express to each other and to customers.

[Image: page from the Book of Redgate on what our customers say]

Ingeniously Simple was a tagline our founders aimed for with our first products. I still remember this and challenge developers to work towards it. I also love that these words stand out: calm, dependable, Gold-Standard, Excellent, Trustworthy, even Love.

I have been proud to be a part of Redgate, and I want to ensure that customers not only get value from us for the money they spend, but that they want to, and like to, do business with us.

I have a copy of the Book of Redgate from 2010. This was a book we produced internally about the company after 10 years in existence. At that time, I’d been there for about 3 years, and it was interesting to learn some things about the company. This series of posts looks back at the Book of Redgate 15 years later.


Finding and Killing Blockers with Redgate AI Tech

Redgate has a research arm, called the Foundry, that has been experimenting with AIs and DBA tasks. This post shows how GenAI tech can be helpful to DBAs in finding blocking and removing the offending client.

This is part of a series of experiments with AI systems.

Redgate Runbooks

One of the experiments the Foundry is running is with something we’ve called Runbooks. Here’s the main screen, where I have a welcome and a chat window. This is like what I see in Claude.ai.

[Screenshot: the Runbooks main screen, with a welcome and a chat window]

I have connected this to two instances in the settings and given the tool permission to run queries, but not execute commands. The first server is the Local 2022 Default and the second is the 2019-41433.

[Screenshot: connection settings for the two instances]

A Blocking Problem

I’m going to set up a blocking session with this code. Notice it opens a transaction and then performs an update.

[Screenshot: the blocking script, with an open transaction and an update]
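The script in the screenshot is roughly of this shape; the table and column names here are illustrative placeholders, not the actual demo code:

```sql
-- Session 1: start a transaction and leave it open.
-- The UPDATE takes an exclusive lock on the row and holds it
-- until the transaction commits or rolls back.
BEGIN TRANSACTION;

UPDATE dbo.Customer
   SET CustomerName = 'Temporary Value'
 WHERE CustomerID = 1;

-- Deliberately no COMMIT or ROLLBACK here.
```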

In a second session, I’ll run this code. Notice this select is blocked and I have no results. The bottom shows this as “executing”.

[Screenshot: the blocked SELECT, showing as “executing” with no results]
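The second session runs a read against the same row, something like this (again, names are illustrative):

```sql
-- Session 2: under the default READ COMMITTED isolation level,
-- this SELECT waits on session 1's exclusive lock and returns
-- nothing until that transaction ends.
SELECT CustomerID, CustomerName
  FROM dbo.Customer
 WHERE CustomerID = 1;
```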

Now I’ll go to the Runbooks and enter a question. In this case, I ask it what is wrong with my 2022 server, as if someone called me and said there was an issue. Imagine the “Select” query owner wondering why things aren’t returning right away.

The Redgate Runbook responds by saying it needs to run something. The first time, it asks me to approve this, which I did. Then it runs it and shows executed.

[Screenshot: the Runbook asking for approval, then showing the query as executed]

Below this I get some of the results that were returned. This isn’t different than sp_who2, but if you’ve used that tool, you often get a lot of system noise. I could use sp_whoisactive, but again, that returns more results than I want when I don’t yet know anything. Here the Runbook has limited the results to what I care about.

[Screenshot: results limited to the relevant sessions]
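I can’t see exactly what the Runbook executed, but a typical hand-written query for the same information uses the DMVs, something like:

```sql
-- Show only sessions that are blocked, who is blocking them,
-- and the statement each blocked request is running
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS running_sql
  FROM sys.dm_exec_requests AS r
 CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
 WHERE r.blocking_session_id <> 0;
```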

What’s more, the Runbook then tells me something about its analysis. This isn’t perfect, but it’s better than what a lot of help desk/first-line support people have told me.

[Screenshot: the Runbook’s analysis of the blocking]

If I know who session 56 is, I can certainly ask them to close their transaction. This isn’t perfect, but I can ask the Runbook to do this, as I do at the bottom of the image above.

It again asks me to approve running something, and when it does, I see the executed note.

[Screenshot: approval and the executed note for killing the session]
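Under the covers this comes down to the standard KILL command, which rolls back the blocking session’s open transaction:

```sql
-- Terminate session 56; SQL Server rolls back its open
-- transaction, releasing the lock and unblocking session 57
KILL 56;
```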

If I go back to SSMS, I see the query completed.

[Screenshot: the query completed in SSMS]

I typically wouldn’t kill a session without doing more research. I could have asked what a session was running first, which I’ll do now for session 57 (the blocked one): since 56 was killed, what was running here?

[Screenshot: asking the Runbook what session 57 was running]
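A manual equivalent for this question is the input-buffer DMV (or the older DBCC INPUTBUFFER), for example:

```sql
-- Show the last statement submitted by session 57
SELECT event_info
  FROM sys.dm_exec_input_buffer(57, NULL);
```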

Again, I approve this and get results.

[Screenshot: the results for session 57]

To me, that’s pretty cool. Using AI as a lever to help me get things done, rather than as a replacement, is useful. I could have set the AI checking on things while I finished another task, or used Slack/Teams to check with others while troubleshooting. I could certainly let the AI run on its own, but I want approvals. I could copy/paste the code to a tool and run it myself, but the AI lets this run separately while I multi-task with a phone call to the user, the help desk, or anyone else. More importantly, if I had this AI working from a mobile phone (or a jump box, etc.), I could be doing minimal typing and have the tech working for me.

This isn’t a product, and it’s unlikely to become one in its own right, but this is the type of thinking we do at Redgate: harness AI, in a safe way, that’s useful.

And ingeniously simple.

Tom Hodgson runs the group that worked on this, and he gave me a fun, unforgettable interview. You can watch it here: https://www.red-gate.com/simple-talk/podcasts/coffee-chat-with-tom-hodgson/

Video Walkthrough

Watch this live


Data > Hype

There is a ton of hype now about using GenAI for various tasks, especially for technical workers. There are lots of executives who would like to use AI to reduce their cost of labor, whether that’s getting more out of their existing staff or perhaps even reducing staff. Salesforce famously noted they weren’t hiring software engineers in 2025. I’m not sure they let engineers go, but it seems they did let support people go.

For many technical people, we know the hype of a GenAI agent writing code is just that: hype. The agents can’t do the same job that humans do, at least not as well as some humans. We still need humans to prompt the AIs, make decisions, and maybe most importantly, stop the agents when they’re off track. I’m not sure anyone other than a trained software engineer can do that well.

I was listening to a podcast recently on software developers using AI, and there was an interesting comment: “Data beats hype every time,” which is something I hope most data professionals understand. We should experiment with our hypotheses, measure outcomes, and then decide whether to continue in our direction or rethink our hypotheses.

Isn’t that how you tune queries? You have an idea of what might reduce query time, you make a change, and you check the results. Hopefully you don’t just rewrite certain queries using a pattern because it has helped performance in the past, without testing your choice. Maybe you default to adding a new index (or a new key column/include column) to make a query perform better? I hope you don’t do those last two.

AI technology can be helpful, but there needs to be some thought put into how to roll it out, how to set up and measure experiments, and how to get feedback on whether it actually produces better code and helps engineers, or whether it’s just hype that isn’t helping.

Ultimately, I think that this is especially true for data professionals, as the training of models on SQL code isn’t as simple or easy as it might be for Python, Java, C#, etc. For example, I find some models are biased more towards one platform (MySQL) than another (SQL Server). Your experiments should include using a few different models and finding out which ones work well and (more importantly) which ones don’t. We also need to learn where models actually produce better-performing code for our platforms.

If you’re skeptical of AI, then conduct some experiments. Try to learn to use the tool to help you, rather than replace you. Look for ways to speed up your development, or have an assistant handle tedious tasks. I have found that when I do that, I get benefits from AI that save a bit of typing.

From the Pragmatic Engineer podcast, the best way to deal with some of the hype around AI is with data: take a structured approach to rolling it out, throw in a lot of A/B testing with different groups or cohorts, evaluate, and see what works well. One thing the guest noted was that the most highly regulated and structured groups are having the most success with AI, because they’re careful about rollout and they measure everything: time spent, accuracy of tasks, and more. Then they decide where and when to use AI, which might be the best advice you get.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.
