This is from 2010, but I loved that people felt this way about Redgate Software. A lot of these words are things that we aim to express to each other and to customers.
Ingeniously Simple was a tagline our founders aimed for with our first products. I still remember it and challenge developers to work towards it. I also love that these are the words that stand out: calm, dependable, Gold-Standard, Excellent, Trustworthy, even Love.
I have been proud to be a part of Redgate, and I want to ensure that customers not only get value from us for the money they spend, but that they want to, and like to, do business with us.
I have a copy of the Book of Redgate from 2010. This was a book we produced internally about the company after 10 years in existence. At that time, I’d been there for about 3 years, and it was interesting to learn some things about the company. This series of posts looks back at the Book of Redgate 15 years later.
Redgate has a research arm, called the Foundry, that has been experimenting with AI for DBA tasks. This post shows how GenAI tech can be helpful to DBAs in finding blocking and removing the offending client.
One of the experiments the Foundry is running is with something we’ve called Runbooks. Here’s the main screen, where I have a welcome and a chat window. This is like what I see in Claude.ai.
I have connected this to two instances in the settings, and given the tool permissions to run queries, but not execute commands. The first server is the Local 2022 Default and the second is the 2910-41433.
A Blocking Problem
I’m going to set up a blocking session. The code for the first session opens a transaction and then performs an update, as sketched below.
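The original script isn’t reproduced here, but a minimal sketch of the pattern, using a hypothetical dbo.BlockingDemo table of my own rather than the original demo code, looks like this:

-- Session 1: open a transaction and update a row, but do not commit
-- (dbo.BlockingDemo is a placeholder table for this sketch)
BEGIN TRANSACTION;

UPDATE dbo.BlockingDemo
   SET Status = 'Pending'
 WHERE BlockingDemoID = 1;

-- no COMMIT or ROLLBACK yet, so the update keeps its locks and blocks other sessions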
In a second session, I’ll run a simple select, sketched below. Notice this select is blocked and I get no results; the bottom of the query window shows it as “executing”.
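Again as a sketch against the same placeholder table, with the same caveat that this isn’t the original demo code:

-- Session 2: this select needs a lock the open transaction holds, so it waits
SELECT BlockingDemoID, Status
  FROM dbo.BlockingDemo
 WHERE BlockingDemoID = 1;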
Now I’ll go to the Runbooks and enter a question. In this case, I ask it what is wrong with my 2022 server, as if someone called me and said there was an issue. Imagine the “Select” query owner wondering why things aren’t returning right away.
The Redgate Runbook responds by saying it needs to run something. The first time, it asks me to approve this, which I do. Then it runs the query and shows it as executed.
Below this I get some results of what’s returned. This isn’t much different from sp_who2, but if you’ve used that tool, you know you often get a lot of system sessions. I could use sp_whoisactive, but again, that returns more than I want when I don’t yet know anything. Here the Runbook has limited the results to what I care about.
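I don’t know the exact query the Runbook generated, but something along these lines against the standard DMVs returns a similarly focused result set:

-- List user requests that are currently blocked, who is blocking them, and what they are running
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
  FROM sys.dm_exec_requests AS r
 CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
 WHERE r.blocking_session_id <> 0;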
What’s more, the Runbook then tells me something about what it has analyzed. This isn’t perfect, but it’s been better than what a lot of help desk/first line support people have told me.
If I know who session 56 is, I can certainly ask them to close their transaction. That isn’t always practical, though, so I can also ask the Runbook to kill the session, as I do at the bottom of the image above.
It again asks me to run something, and when it does, I see the executed note.
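Under the covers, this is nothing more exotic than the command any DBA knows, with the session id the Runbook identified:

-- Terminate the blocking session (56 in this example)
KILL 56;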
If I go back to SSMS, I see the query completed.
I typically wouldn’t kill a session without more research. I could have asked what it was running first, which I’ll do now for session 57 (the blocked one), since 56 is already killed: what was running here?
Again, I approve this and get results.
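I don’t have the exact statement the Runbook ran here either, but the most recent SQL for a session is available from the connection DMV, roughly like this:

-- Show the most recent statement executed by session 57
SELECT c.session_id,
       t.text AS most_recent_sql
  FROM sys.dm_exec_connections AS c
 CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
 WHERE c.session_id = 57;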
To me, that’s pretty cool. Using AI as a lever to help me get things done, rather than as a replacement, is useful. I could have set the AI checking while I finished another task, or used Slack/Teams to check with others as I troubleshoot. I could certainly let the AI run on its own, but I want approvals. I could copy/paste the code into a tool and run it myself, but the AI lets this run separately while I multi-task with a phone call to the user, or to the help desk, or anything else. More importantly, if I had this AI working from a mobile phone (jump box, etc.), I could be doing minimal typing and have the tech working for me.
This isn’t a product, and it’s unlikely to become one in its own right, but this is the type of thinking we do at Redgate: harness AI, in a safe way, where it’s useful.
There is a ton of hype now about using GenAI for various tasks, especially for technical workers. There are lots of executives who would like to use AI to reduce their cost of labor, whether that’s getting more out of their existing staff or perhaps even reducing staff. Salesforce famously noted they weren’t hiring software engineers in 2025. I’m not sure they let engineers go, but it seems they did let support people go.
For many technical people, we know the hype of a GenAI agent writing code is just that: hype. The agents can’t do the same job that humans do, at least not as well as some humans do it. We still need humans to prompt the AIs, make decisions, and maybe most importantly, stop the agents when they’re off track. I’m not sure anyone other than a trained software engineer can do that well.
I was listening to a podcast recently on software developers using AI, and there was an interesting comment: “Data beats hype every time,” which is something I hope most data professionals understand. We should experiment with our hypotheses, measure outcomes, and then decide if we continue with our direction or if we need to rethink our hypotheses.
Isn’t that how you query tune? You have an idea of what might reduce query time, you make a change, and check the results. Hopefully you don’t just rewrite certain queries using a pattern, without testing your choice, because that pattern has helped improve performance in the past. Maybe you default to adding a new index (or a new key column/include column) to make a query perform better? I hope you don’t do those last two.
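As a simple sketch of that measure-first approach (the query here is just a placeholder; substitute the one you’re tuning):

-- Measure before and after a change rather than assuming a pattern or index helps
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT(*)
  FROM sys.objects;   -- placeholder query; run your candidate query here

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- compare elapsed time and logical reads on the Messages tab before and after
-- the change, and keep only the changes that actually measure better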
AI technology can be helpful, but there needs to be some thought put into how to roll it out, how to set up and measure experiments, and how to get feedback on whether it actually produces better code and helps engineers, or whether it’s just hype that isn’t helping.
Ultimately, I think that this is especially true for data professionals, as the training of models on SQL code isn’t as simple or easy as it might be for Python, Java, C#, etc. For example, I find some models are biased more towards one platform (MySQL) than another (SQL Server). Your experiments should include using a few different models and finding out which ones work well and (more importantly) which ones don’t. We also need to learn where models actually produce better-performing code for our platforms.
If you’re skeptical of AI, then conduct some experiments. Try to learn to use the tool to help you, rather than replace you. Look for ways to speed up your development, or have an assistant handle tedious tasks. I have found that when I do that, I get benefits from AI that save a bit of typing.
From the Pragmatic Engineer podcast, the best way to deal with some of the hype on AI is with data: take a structured approach to rolling it out, add plenty of A/B testing measures with different groups or cohorts, evaluate, and see what works well. One of the things the guest noted was that the most highly regulated and structured groups are having the most success with AI, because they’re careful about rollout and they measure everything: time spent, accuracy of tasks, and more. Then they decide where and when to use AI, which might be the best advice you get.
Today I’m in San Francisco at Small Data SF 2025. I went to the conference last year and thought it was a great event. Watching people talk about data, about how we might manage smaller systems and deal with the challenges of exploding volumes by querying, storing, and handling less data, was fascinating. The event had me really thinking about ways in which we can build better performing (and cheaper) systems.
To be clear, small data isn’t very little data. Often this is still 100s of GB, perhaps low TBs, but it’s getting away from the idea of thinking we’ll be working on PB-sized big data systems, or that we even need to.
Last year there were lots of talks on data analysis, querying, and even AI, but using smaller sets of data in practical ways that provide value to organizations and individuals by judiciously choosing data sets, either recent or representative data.
This helped me think of new ways for subsetting, which is something I’ve been pushing at Redgate for our TDM product.
I’m looking forward to the talks. This is a quick trip. I skipped the workshops yesterday since they weren’t that great last year (too many product/company pitches from Silicon Valley) and flew out last night after coaching kids in their first practice. I’m at the conference today listening to talks and attending a networking get-together afterwards, before flying back early tomorrow.
A quick trip, but I’m sure I’ll have lots to write about (and think about) in the future.