The Challenge of AI

In his book The Coming Wave, Mustafa Suleyman, the CEO of Microsoft AI, laid out the risks of AI technology bluntly. “These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor-replacing,” he wrote. Suleyman advocated regulatory oversight and other government interventions, such as new taxes on autonomous systems and a universal basic income, to prevent a socioeconomic collapse. The book was published before Suleyman joined Microsoft.

Satya Nadella is more optimistic than his new deputy. In an interview at Microsoft headquarters, sitting next to his human chief of staff, Nadella said that his Copilot assistants wouldn’t replace her. As his chief of staff sat typing notes of the conversation on her tablet, Nadella acknowledged that AI will cause “hard displacement and changes in labor pools,” including at Microsoft. Judson Althoff, Microsoft’s Chief Commercial Officer, said Nadella was pressuring his team to find ways to use AI to increase revenue without adding headcount.

In 2025, Microsoft has cut a sizable portion of its workforce, more than 9,000 jobs earlier this year, though there may be some hiring in the future, according to Nadella. He contends that AI could end up delivering more societal benefits than the Industrial Revolution did. “When you create abundance,” Nadella said, “then the question is what one does with that abundance to create more surplus.”

As I discuss AI with different people, I get wildly different opinions. The pace of GenAI model growth over the last two years has led quite a few people to believe the technology will approach the average human’s intelligence in just a few years. That’s a scary thought, and it could certainly lead a lot of executives to bet on fewer human employees and more digital ones.

However, many more people believe that the GenAI models still need a lot of guidance, and they are best suited for partnerships with humans. That’s good, in a sense. If a smart or talented human can use an AI partner and get a lot done, that means we still need some humans.

Some.

That use of AI by a few talented people might also lead to a reduction in labor at a lot of organizations. Maybe fewer humans get more done with AI, and it’s possible organizations will want to make that trade. It’s easy to think we’ll find things for more humans to do, but computers are incredible levers, and this worries me.

A little.

What I also think is that there is so much work we’d like to get done, but we can’t, at least in the technology space. We don’t have enough people to do the work, so GenAI agents or partners working with humans might let us catch up on the backlogs we have.

Of course, I don’t know whether all that backlogged software we want is something we need, whether it’s good for the world, or whether it will end up putting even more people in the real world out of work.

Lots of challenges ahead. Let me know what you think.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


Vibe Coding a Login Tracking System

A customer was asking about tracking logins and logouts in Redgate Monitor. We don’t do this natively, as it really needs an Extended Events (XEvent) session. I decided to see if an AI could help me get a solution set up that might let me build a custom metric to track this.

I could do this myself, but it means looking up syntax and capabilities, futzing with different code items, and thinking through options. The goal here is to see whether an AI can help and save time. Not do the work for me, but assist.

So maybe not vibe coding per se, but it felt like I did little.

The video walkthrough is at the bottom.

This is part of a series of experiments with AI systems.

Note: This isn’t something I necessarily worry about. The rate might tell me if I’m under attack, but I’d hope applications would detect this first (and be able to block things).

The Problem

The customer just asked if they could track logins and logouts. I mentioned the Server Properties login auditing option (shown below) to see what they’d done. They hadn’t used it, but it also isn’t very flexible or reportable, as it puts the info in the error log.

2025-12_0171

Tracking this info really requires an XE (Extended Events) session. That’s a lightweight way to capture this information. If I want to capture some info about the client logging in, or failing to log in, that’s the way.

A separate request was whether we could also get logouts. The only way to do this is with an XE session and the sqlserver.logout event.
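As a rough sketch of what such a session can look like (these are my own placeholder names, not the exact code any assistant generated), an event file session capturing both events might be:

```sql
-- Sketch only: a minimal login/logout XE session with a file target.
-- Session, action, and file names are illustrative placeholders.
CREATE EVENT SESSION TrackLogins ON SERVER
ADD EVENT sqlserver.login
    (ACTION (sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.username)),
ADD EVENT sqlserver.logout
    (ACTION (sqlserver.client_hostname, sqlserver.username))
ADD TARGET package0.event_file (SET filename = N'TrackLogins')
WITH (STARTUP_STATE = ON);  -- restart the session when the instance restarts

ALTER EVENT SESSION TrackLogins ON SERVER STATE = START;
```

With no path on the filename, the .xel files land in the instance’s default \Log folder.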

With that in mind, let’s see how my assistant can help.

Using Claude

I opened Claude and asked this: “in sql server can I track login counts and logout counts from t-sql?” and actually spelled everything correctly. No savings here.

The base answer I got started like this, giving me a few options.

2025-12_0172

The response ended by asking if I’d like the Extended Events option; it had provided only the logon trigger option. I need an assistant, so I said yes.

2025-12_0173

That first answer took maybe 20-30 seconds, but I started reading things, so this felt like a discussion with another DBA. Once the next step started, I let it go, and it began writing and working on code on the right, then filling in the results on the left. This took a few minutes, so I flipped over to answer a few emails while I watched on another monitor.

The results started with a table to store data and then an XE session. The whole page looks like this, which is a lot.

2025-12_0175

Here’s the full left side. Notice that it asks me for a next step. I’ve met a lot of junior DBAs, or even Senior-DBAs-with-1-year-of-experience-10-times that didn’t do this.

2025-12_0177

The actual code doesn’t matter yet, since I realized this wasn’t capturing failed logins. I asked another question and got a response. A polite Claude compliments me and then rewrites the code. This took another few minutes, and it was neat to watch it rewrite its code on the right, adding a new field to the table and adjusting the session.
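For the failed-login piece, one common approach (my sketch, not necessarily what Claude chose) is to add the sqlserver.error_reported event to the session definition, filtered to error 18456, which is the “Login failed for user” error:

```sql
-- Sketch: a fragment of a session definition that captures failed
-- logins by filtering error_reported to error number 18456.
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.client_app_name)
    WHERE error_number = 18456)
```

Storing the event name alongside each row then lets one table hold logins, logouts, and failures together.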

I watched a bit, but got distracted with a Slack message. One nice thing is I can move on to another task while my assistant keeps working.

2025-12_0178

At the bottom, I liked the summary of how it works.

2025-12_0179

My Slack message was from an AE, asking a question about the server property stuff (from the customer). I could have typed a bunch, but when I looked back, Claude was finished, so I asked it.

2025-12_0180

I copy/pasted this to the AE, as it’s a good summary for the customer. This assistant is making my job easier.

Testing the Solution

I didn’t just send this. Instead, I decided to test it on a few local systems. I keep a DBA database on each instance, so I ran the code there to create the table and session. As a precaution, since this isn’t my code, I ran each item separately.

2025-12_0181

The table worked here. I had another window for the session, which I looked over but didn’t extensively check. I’m not an XE expert, and I’d likely fumble this code worse than an AI would at first, so I checked the events and actions. Since it looked good, I decided to just run it.

When it came to the procedure, I got an error. I copied and pasted it into Claude, which recognized the issue and fixed it. This took a few minutes, but that’s still faster than I could have corrected my own amateurishly written query against XE.

2025-12_0182

Now I had a procedure. One thing I edited in both the session and the proc is that I removed the hard-coded c:\SQLData path. I wanted this captured with my other instance files in the \logs folder, so I left just the name of the XE session.
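For reference, a procedure like this typically reads the event_file target with sys.fn_xe_file_target_read_file and shreds the event XML into rows, roughly like this (the session and column names here are my placeholders, not the generated code):

```sql
-- Sketch: shred XE file-target data into relational rows.
-- The 'TrackLogins*.xel' pattern matches the session's rollover files.
SELECT
    x.event_xml.value('(event/@name)[1]', 'varchar(50)')      AS event_name,
    x.event_xml.value('(event/@timestamp)[1]', 'datetime2')   AS event_time_utc,
    x.event_xml.value('(event/action[@name="username"]/value)[1]',
                      'nvarchar(200)')                        AS login_name,
    x.event_xml.value('(event/action[@name="client_hostname"]/value)[1]',
                      'nvarchar(200)')                        AS client_host
FROM (
    SELECT CAST(event_data AS xml) AS event_xml
    FROM sys.fn_xe_file_target_read_file(N'TrackLogins*.xel', NULL, NULL, NULL)
) AS x;
```

An insert wrapped around a query like this, keyed on the timestamp of the last row loaded, is the usual way to append only new events to the table.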

Adding Archival

One of the things Redgate Monitor does really well is manage older data. I’ve seen so many people, including myself, set up something like this and then realize a year later that they’ve captured gigabytes of data.

I asked Claude to just fix this for me.

2025-12_0183

I grabbed the second command for my Agent Job and changed 180 to 90.
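The purge itself is a straightforward date-based delete; changing 180 to 90 just shortens the retention window. Roughly, with my placeholder names rather than the generated code:

```sql
-- Sketch: delete tracking rows older than 90 days.
-- Table and column names are illustrative placeholders.
DELETE FROM dbo.LoginEvents
WHERE event_time_utc < DATEADD(DAY, -90, GETUTCDATE());
```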

SQL Agent Job

Claude had again asked me about jobs above, but jobs are simple and easy, and I wanted to think about this one for a minute. I right-clicked and created a new job, thought about the name and description I wanted, and then made two steps, pasting in the two EXEC commands for the procs my assistant had written.
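If you prefer scripting the job to clicking through the GUI, the msdb procedures do the same thing. A hedged sketch, with placeholder job, step, and procedure names:

```sql
-- Sketch: a two-step Agent job for the load and purge procs.
-- All names here are illustrative placeholders.
USE msdb;
EXEC dbo.sp_add_job       @job_name = N'DBA - Login Tracking';
EXEC dbo.sp_add_jobstep   @job_name = N'DBA - Login Tracking',
                          @step_name = N'Load login events',
                          @subsystem = N'TSQL',
                          @database_name = N'DBA',
                          @command = N'EXEC dbo.LoadLoginEvents;',
                          @on_success_action = 3;  -- 3 = go to the next step
EXEC dbo.sp_add_jobstep   @job_name = N'DBA - Login Tracking',
                          @step_name = N'Purge old rows',
                          @subsystem = N'TSQL',
                          @database_name = N'DBA',
                          @command = N'EXEC dbo.PurgeLoginEvents;';
EXEC dbo.sp_add_jobserver @job_name = N'DBA - Login Tracking';
```

The @on_success_action on the first step matters: the default is to quit with success, which would skip the purge step.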

The last thing was some testing. I ran my job to be sure it was working. It completed, which was good.

I made a few logins and logouts, including a few failed logins, then queried my table. I didn’t remember the table name, but my assistant tends to pick plain, boring names, so I used SQL Prompt to find it with ssf <tab> l and got this:

2025-12_0184

When I ran a query and checked, I saw rows for my logins and logouts.

2025-12_0185

Redgate Monitor Custom Metric

My AE and customer wanted to see this in Redgate Monitor, so I decided to ask Claude. It was happy to help.

2025-12_0186

I could repeat that for the other items (failed logins and logouts). I didn’t, but this gets me ready to add this to Redgate Monitor.
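A Redgate Monitor custom metric is just a T-SQL query returning a single numeric value, collected on a schedule, so the login-count metric boils down to something like this (placeholder names again; the failed-login and logout versions would just change the filter):

```sql
-- Sketch: custom metric returning the login count for the most
-- recent 5-minute collection window. Names are placeholders.
SELECT COUNT(*)
FROM DBA.dbo.LoginEvents
WHERE event_name = 'login'
  AND event_time_utc >= DATEADD(MINUTE, -5, GETUTCDATE());
```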

Summary

This shows how an AI assistant can help me set up some auditing for security purposes. There is nothing complex here, and I’ve set up a bunch of this myself in the past. I even have a blog on this.

However, the code is cumbersome and slow for me to write, as it is for most humans, unless you work with XE every day. Even if you use SSMS and the GUI to generate the script, it can be slow, and I know I’d certainly have to look things up. In less than 15 minutes, I had a fairly well-working solution, with archiving (deleting) of old data, and I didn’t need to focus tightly the entire time or type a lot. I did some other work, and I could focus on just the testing.

Claude was a great AI assistant for this problem, which is similar to a lot of DBA-type work I’ve done in the past.

These were the tools I used:

And, of course, Management Studio 22.

Addendum

I also tried Google Gemini and ChatGPT. Here’s a quick summary of those; they didn’t work as well, at least not for me.

Note that I use the free versions for all of these tools right now.

Gemini

The first prompt got me just a table and trigger, with a follow-up asking if I wanted more. I asked about XE and got a basic session, not as easy to read as Claude’s and embedded inside the response. It also had fewer actions.

2025-12_0188

I got to the same place, but it took more prompts, and I had to keep guiding it along, like micromanaging, or at least mini-managing, another DBA. On the plus side, it was faster.

ChatGPT

ChatGPT suggested a trigger, but noted this wasn’t great. It did suggest Extended Events and a complete solution. I could have just entered “yes”, but I didn’t. I’m still building muscle memory at times.

2025-12_0189

I got each part of the solution separately, but this had me scrolling through explanation and code. Again, I prefer Claude’s side-by-side approach, but this works. And it’s fast.

2025-12_0190

This also kept leading me along the process, which I liked. It certainly likes the checkboxes and Xs in its results.

Both tools helped me with custom metrics.

Video Walkthrough

Here is a retry, and then the first solution. I think Claude learned a bit the second time.


Growing Artificial Intelligence

This editorial was originally published on Jul 4, 2016. With Steve on holiday, this is an interesting look back almost a decade into the past at AI technology.

There’s a fascinating piece over at O’Reilly that looks at what we might consider Artificial Intelligence (AI) to be. The discussion looks at Deep Blue, Watson, and AlphaGo, all of which have defeated humans in game competitions where we might expect some intelligence is needed. We could argue that, but certainly, these computing machines have done more to display knowledge than the best humans at certain endeavors.

What is interesting is that each of these machines, while very competent in its area, is specialized. AlphaGo can’t play chess, nor can Deep Blue play Go. Each has been tuned to a specialized area and trained to excel in it. This isn’t fundamentally different from humans who train and specialize, though certainly humans have more capabilities in a general sense (for now) than machines.

As we look to grow intelligence, however, there is one thing commonly needed in both artificial (or machine) intelligence and human intelligence: data. Whether humans are training themselves to solve a particular problem, compete in a game, or even excel in a sport, they need lots of data. We gather this with our senses, as well as by examining what others have done, contemplating actions, trying out different actions, ideas, or concepts, and then adjusting to improve.

This is what researchers are also trying to do with gaming machines, with self-driving cars, and even with bots. That last item is interesting to me, as I haven’t paid much attention to bots. A long conversation with another SQL professional got me interested in, and intrigued by, the idea of software robots that might handle various complex tasks better than the FAQ method that so many applications and websites use. I wasn’t sure these would be useful, but I have found the Slackbot to be more helpful than the help or searches for some tasks.

There’s work to be done, and I know the Slackbot (and other machine intelligence software) needs to be trained better. This requires data. Lots of data, and possibly lots of hand holding from a human. For many areas, such as relatively low-level customer support or problem solving, I wonder if a bot could be trained to work better than the simple decision tree algorithms like those found in the Windows Troubleshooter.

There are various ways we might grow this software to help us, and make no mistake, we will need to grow it. Plenty of businesses are becoming excited about machine learning, the R language or Python, software bots, and more. In all the cases of implementing these systems, the one demand that will impact many of us is the need for lots of data. Data that’s organized, that is relevant, that we can use to separate out successes from failures, and evaluate our particular problem better. We will need to group data into knowledge and then feed it into software.

I think this is a bit different from how most of us have used data over the years. We’ve often collected, manipulated, aggregated, summarized, and spit data back out to (ultimately) some human who can make a decision. Most of us haven’t worked on sending data to a machine intelligence and then somehow helping it understand how to respond or make a decision.

My suspicion is there will be lots of work for us in the next decade in helping machines to use data and understand it, maybe even to use them to help us gather, organize, clean, and manipulate data better ourselves. It’s an exciting time to be a data professional, and I’m sure some of you will work on a few very exciting projects in the future.

Steve Jones


Monday Monitor Tips: Native Replication Monitoring

Redgate Monitor has been able to monitor replication for a long time, but it required some work from customers. Now we’ve added native monitoring.

This is part of a series of posts on Redgate Monitor. Click to see the other posts.

New Native Monitoring

The replication monitoring capabilities in Redgate Monitor were originally limited to a few PerfMon counters. A few people had written custom metrics on sqlmonitormetrics.com that clients could use, but customers have been asking for more native integration.

We’ve done it. With version 14.2, we have added an estate view of your replication environment. In the Estate menu, there is a new entry for Replication Monitoring.

2025-11_line0125

If I click this, I get a list of the jobs running replication across various servers. You can see this below, with each instance and each job denoted by a REPL- prefix. These are the default names Microsoft sets up, and they should be left alone.

You can see below that the agent server name is listed, and I can click it to get to that server’s overview in Redgate Monitor. I also have the category and job name alongside. Beyond that, we have the last completed run if it was successful; if the job is still running, the Job Ended column is blank. To the right, we have the publisher, subscriber, and distributor names.

2025-12_0164

I can re-sort the columns, as below, where I’m sorting by category.

2025-12_0165

I can also sort by publisher:

2025-12_0166

Or subscriber (or any other column).

2025-12_0167

Clicking the column a second time reverses the sort order.

Alerting

There are two new replication-specific alerts available, for job failures and maintenance job failures. These work the same as any other alert in Redgate Monitor and can be configured for specific servers, groups, levels, etc., with notifications going out to all the notification targets.

Here are the alerts in the alert configuration.

2025-12_0168

These jobs run across the various replication categories: distribution, merge, snapshot, log reader, and queue reader.

If I look at the details, I can see the job failure alert works across multiple categories and is set to a high alert level. I can adjust this as I can with any other Redgate Monitor alert, and exclude jobs with a regular expression if I wish.

2025-12_0169

The replication capabilities are documented here: https://documentation.red-gate.com/monitor14/sql-server-replication-314869637.html

Summary

Replication isn’t something most people use, but for those that do implement it, monitoring is critical. Redgate Monitor has added some native capabilities to let you get a glimpse of your entire replication estate at once and get notified if there are issues.

Replication can be amazing, but I find it brittle. When it works, it’s great; when it breaks, it’s broken. Getting a jump on issues is important for many organizations, and Redgate Monitor can help you do that.

Redgate Monitor is a world class monitoring solution for your database estate. Download a trial today and see how it can help you manage your estate more efficiently.
