When SQL Server Central Went Down

This is one of a few memories from the founders of SQL Server Central, celebrating 25 years of operation this month.

“The site is down.”

I got a phone call from one of the other founders around 9 am in Denver one day. At the time, I was working at a small startup, in a semi-private office with one other person. Most of the company knew I had a site on the side, and alternately cheered me on or celebrated hiccups, depending on the person.

I checked, and things were down. I had a side channel to telnet into the server, but couldn’t access things. At this time, the site was hosted on a single server, running IIS and SQL Server, in a friend’s basement. This was in 2002 or 2003, and broadband was a lot different then. No cable internet, and most other solutions were in the kbps range. I had ISDN at my house, but a friend had gotten into a trial from Sprint, giving him 3Mbps over a microwave link.

My office-mate was listening and watching. I said I was taking an early, and long, lunch. She chuckled and went back to her own tasks. I let my boss know and started driving, stressed out. After all, the site was growing, popular, and downtime could be a killer.

We’ve had a few outages over the years, but not many. This was one of the more stressful ones as I fought traffic on I-25, trying to get from the Denver Tech Center up to Westminster. I can’t remember the weather, but I was sweating when I got to my friend’s house. This was the before-times, when remote work was a rarity. Luckily, my friend had left me a key to get into his house, where I ran into the basement. Unable to get the server to respond, I rebooted it and crossed my fingers.

I’m sure a few of you have had the stressed-out feeling of waiting for a database server to restart, hoping that it comes back up cleanly. It did this time, and I don’t remember what went wrong, but things were back up and running with no major issues. I was probably in the basement for less than 30 minutes, and I left, dreading the long drive back to the office. Almost two hours driving for a 30-minute fix.

I do remember thinking if this was going to be a regular occurrence, I might need a new job. Or a new place to host the server.

We weren’t in the basement for too long. Revenues were increasing enough that I started to look at co-location facilities. My current employer had investigated quite a few when we set up our systems, and I reached out to a few contacts. I negotiated a half-rack at some point, moving our servers to a real facility. By that time, my employer had failed, and I got an F5 firewall as part of my severance, which fronted SQL Server Central for quite a few years.

Those were the days, with lots of CLI access to remote systems and a power strip where I could cycle them off and on. Those were skills I had to learn to avoid more drives at inconvenient times.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


A New Word: Dolorblindness

dolorblindness – n. the frustration that you’ll never be able to understand another person’s pain, only ever searching their face for some faint evocation of it, then rifling through your own experiences for some slapdash comparison, wishing you could tell them truthfully, “I know exactly how you feel.”

I’ve been lucky and blessed in life, with many things going my way and relatively few big obstacles or painful moments. I’ve had some, but far fewer than a lot of people I know. I’ve also had plenty of success (by my definition) in life.

As a result, I think I run into dolorblindness regularly when I encounter someone who’s had a difficult event: losing a loved one, dealing with a health crisis, losing a job, or some other event that causes deep pain. I feel empathy and sadness for them, but I often struggle to understand the feelings because I haven’t felt them.

Most of the time.

I often won’t say that I understand because I don’t. Sometimes I do, but often I express support, offer help, or just give them a hug.

Whether you really know how someone feels or are experiencing dolorblindness, I hope you’ll be kind, have compassion, and support those struggling in life.

From the Dictionary of Obscure Sorrows


Expensive CPUs

There have been a lot of features added to the SQL Server platform over the years. Several of these features let us perform functions that are beyond what a database has traditionally been designed to handle. SQL Server has had the ability to send emails and execute Python/R/etc. code, and in SQL Server 2025, we can call REST endpoints.

Quite a few of these features (arguably) are more application-oriented than database-oriented. There’s nothing inherently wrong with having a server perform some of these functions, and there have been some very creative implementations using them. I recently ran into one of these examples from Amy Abel, where she shows how to use the new REST endpoint feature to call an LLM to generate and send emails from your database server. That’s creative, and it’s reminiscent of the numerous examples various experts have shown over the years demonstrating how these features can accomplish a task.

However, these are examples. They work amazingly well with one user running a limited workload. This reminds me of many of the examples I’ve seen using the AI vector enhancements in SQL Server 2025 to create embeddings from string data using an LLM. That is interesting, but most of the examples show a trigger being used to update the embeddings. Imagine users updating data and those triggers firing. Imagine a real workload and how often your users might update string data you want to use in an AI application, especially a RAG application. Think about how complex or long-running triggers in your applications can now overload your system.

CPUs in database servers are expensive. The hardware isn’t more expensive, but the software is pricey. Standard Edition is limited to 24 or 32 cores (depending on version), and while Enterprise isn’t limited to any number of cores, the cost of each core is $$$$. Is it worth having those $$$$ cores sending emails or calling external services? Or would you be better off offloading those calls to another server, like an app server, where the cost of the core is the hardware and a little .NET code running separately?
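Here’s a rough sketch of that trade-off: a small worker on commodity hardware drains a queue table, so the licensed database cores never wait on SMTP or REST calls. All the names are hypothetical (the dbo.EmailQueue table, the DSN, the mail relay), and I’ve sketched it in Python for brevity, though a little .NET would look much the same.

    # A minimal sketch of offloading email work from the database to an app server.
    # Hypothetical pieces: the dbo.EmailQueue table, the AppDb DSN, the mail relay.
    import pyodbc
    import smtplib
    from email.message import EmailMessage

    conn = pyodbc.connect("DSN=AppDb")

    def drain_queue():
        cursor = conn.cursor()
        rows = cursor.execute(
            "SELECT Id, Recipient, Subject, Body FROM dbo.EmailQueue WHERE SentAt IS NULL"
        ).fetchall()
        with smtplib.SMTP("mail.internal") as smtp:
            for row in rows:
                msg = EmailMessage()
                msg["From"] = "app@example.com"
                msg["To"] = row.Recipient
                msg["Subject"] = row.Subject
                msg.set_content(row.Body)
                smtp.send_message(msg)
                # Mark the row sent so the next pass skips it
                cursor.execute(
                    "UPDATE dbo.EmailQueue SET SentAt = SYSDATETIME() WHERE Id = ?", row.Id
                )
        conn.commit()

The database does what it’s good at, a cheap insert into the queue, and the waiting happens on a core that only costs its hardware.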

Many of us already struggle with the database server as a bottleneck for our application and workload. Scaling up our database systems is expensive and cumbersome. We struggle to get approval for larger VMs, and if we scale up in the cloud, it gets very expensive very fast. I’m not surprised that database vendors are happy to add these features as it increases the licensing cost for applications using them.

I know the majority of the cost of building applications is labor: software developers’ time. However, that’s changing with LLMs that can produce code cheaply. I keep seeing claims that the cost of writing code is approaching zero.

That’s not going to be true if you use LLMs trained on the example code that increases your database licensing cost. Then the cost might be higher than you expect.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


Claude Code Helps Analyze Test Data Manager Log Files

I had a customer ask about analyzing their Test Data Manager (TDM) usage to determine how many people were protecting data in dev databases and how often. TDM creates log files each time it runs, but analyzing those isn’t easy.

Or is it? Claude Code thinks it is.

This is part of a series of experiments with AI systems.

Consolidating Log Files

I didn’t want to grant extra rights or permissions, and I like the loosely coupled idea of tools, so I manually copied log files from a couple of locations into a new repo spot. This was a test to see if Claude Code could help me with something I’ve done in the past with other types of files.

TDM puts log files in ProgramData, which can be locked down on many machines, so I copied things over to a location on another drive. In this case, I created a repo with a folder called TDM. In there, I copied the Anonymization and Subsetter folders from ProgramData. You can see the Anonymization folder contents below.

[Screenshot: the Anonymization folder contents]
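The consolidation itself is simple enough to script if you do this regularly. Here’s a rough sketch; both paths are placeholders to adjust for your own install and repo, not the exact TDM locations.

    import shutil
    from pathlib import Path

    # Placeholder paths: adjust to where TDM writes logs and where your repo lives
    src = Path(r"C:\ProgramData\TDM")
    dest = Path(r"D:\repos\AIExperiments\TDM")

    for folder in ("Anonymization", "Subsetter"):
        # dirs_exist_ok lets you re-run this as new log files appear
        shutil.copytree(src / folder, dest / folder, dirs_exist_ok=True)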

I opened a few random files to see what was inside and got an idea of what I’d look for if I were doing the analysis myself. I used this to help me craft a prompt.

With that in mind, I have a good place to start, so let’s go see what my assistant, Claude, can do.

A Quick Prompt

I ran Claude from the TDM folder with this prompt: I have a number of log files in the subfolders here. What I am looking for is a summary of how much activity is in those log files, by month and year. The anonymization log files have “started” records of an “inf” type for the classifying, the mapping, and the anonymizing. The subsetter has “subsetter started”. Build a script that can analyze this data and produce summaries for the activities.

Claude agreed to help me with this response. It agreed to summarize the data and then started to read the files in the subfolders, looking at a sample of each.

[Screenshot: Claude reading a sample of the log files]

Claude then noted that it understood the structure and started to write code.

[Screenshot: Claude starting to write the analysis script]

I have no idea how long this took, as I left it alone and went to work on other things. I was finishing another piece, so a few times I approved actions to re-read files and execute the script. I know it was less than 10 minutes, but it could have been much quicker.

When I saw the script had completed, I also had a summary. I’ll paste in the text below to make it easier to read.

Claude’s Summary

Perfect! I’ve created a Python script that analyzes your log files and produces a summary of activities by month and year. Here’s what the script does:

  Script Features

  The script (analyze_logs.py) analyzes:

  Anonymization logs:
   – Classify – Tracks “Classifying database” activities (task=”classify”)
   – Map – Tracks “Creating masking file” activities (task=”map”)
   – Mask – Tracks “Masking database” activities (task=”mask”)

  Subsetter logs:
   – Subsetter – Tracks “Subsetter started” activities

  Summary Results

  Your current logs show:
   – 163 total activities across both tools
   – Activity from February 2024 to February 2025
   – Peak activity in January 2025 with 69 activities (10 classify, 4 map, 8 mask, 47 subsetter runs)
   – 2025 is on track to exceed 2024 activity (88 vs 75 in the first 2 months)

  Usage

  Simply run:
   python analyze_logs.py

  The script automatically scans the Anonymization/ and Subsetter/ subdirectories and provides:
   1. Monthly breakdown of all activity types
   2. Yearly summary totals
   3. Grand totals across all time periods

  The script handles both structured JSON logs and readable text logs, making it robust against different log formats.
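I haven’t pasted the generated script here, but the core idea is compact. Here’s a minimal sketch of the same approach; the marker strings come from the summary above, while the timestamp regex and the .log extension are my guesses, not Claude’s exact code.

    import re
    from collections import Counter
    from pathlib import Path

    # Marker strings from the summary above; the date format is an assumption
    MARKERS = {
        "classify": "Classifying database",
        "map": "Creating masking file",
        "mask": "Masking database",
        "subsetter": "Subsetter started",
    }
    DATE_RE = re.compile(r"(\d{4})-(\d{2})-\d{2}")

    def summarize(root: Path) -> Counter:
        counts = Counter()  # keyed by (YYYY-MM, activity)
        for log in root.rglob("*.log"):
            for line in log.read_text(errors="ignore").splitlines():
                m = DATE_RE.search(line)
                if not m:
                    continue
                month = f"{m.group(1)}-{m.group(2)}"
                for activity, marker in MARKERS.items():
                    if marker in line:
                        counts[(month, activity)] += 1
        return counts

    if __name__ == "__main__":
        for (month, activity), n in sorted(summarize(Path(".")).items()):
            print(f"{month}  {activity:10} {n}")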

Leading Me On

At this point, I was ready to commit and save this code, but I noticed that there was a grayed-out prompt already added for me at the CLI. It said: Export the summary to CSV file

I hit Enter, and Claude proceeded from there. I’ll show the prompt and then how Claude started to edit its own work.

[Screenshot: the CSV export prompt and Claude editing its own script]

Once this completed, I approved the action to run the script again and saw the results: a summary of the changes and then two CSV files with the results. I hadn’t committed the code yet, so you can see the files for yourself in my AIExperiments repo on GitHub: TDM folder in GitHub
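The export step itself is only a few lines with the standard csv module. Here’s a sketch that reuses the counts from the summarize() sketch above; the file name is a guess, not necessarily what Claude chose.

    import csv

    def export_csv(counts, path="monthly_summary.csv"):
        # One row per (month, activity) pair, sorted chronologically
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["month", "activity", "count"])
            for (month, activity), n in sorted(counts.items()):
                writer.writerow([month, activity, n])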

I looked at the results, and they looked pretty good to me. I don’t run TDM a lot on this machine, but the results seemed to match up with the dates of the log files. For example, there are log files for the anonymization (classify, map, mask) for Nov 2024, but they’re empty, so these are excluded from the CSV summary.

[Screenshot: the CSV summary results]

What’s also neat is that the next pre-populated prompt asked if I wanted charts. I hadn’t thought about this, but I decided to just hit Enter and see. Claude wrote a new file and then ran it (I approved this), and I had charts in a subfolder.
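A charting step like this is a natural add-on with matplotlib. Here’s a sketch that rolls the counts up into a single bar chart, one bar per month; Claude’s actual charts and output folder may well differ.

    from pathlib import Path
    import matplotlib.pyplot as plt

    def chart(counts, out_dir="charts"):
        # Roll the (month, activity) counts up into a total per month
        Path(out_dir).mkdir(exist_ok=True)
        totals = {}
        for (month, _activity), n in counts.items():
            totals[month] = totals.get(month, 0) + n
        months = sorted(totals)
        plt.bar(months, [totals[m] for m in months])
        plt.xticks(rotation=45, ha="right")
        plt.title("TDM activity by month")
        plt.tight_layout()
        plt.savefig(Path(out_dir) / "activity_by_month.png")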

I re-ran the TDM process today (Jan 14) to see if things worked. I got a new log file in the Subsetter folder, and when I ran the Python script manually, I saw the new executions.

[Screenshot: the new executions in the summary]

Very cool.

Summary

This is a quick app that is useful, and it helps not only me but also the customer. In fact, lots of TDM customers and our SEs might find this useful. This is the type of data analysis I’ve done in the past, looking through log files and parsing them, which can be a tedious task. Claude Code (or another AI) makes this easy to generate and get working.

Of course, this needs a bit more testing to ensure it’s accurate, and I’m guessing there might be some edge cases or rough edges that need smoothing out, but is that any different from an app I or someone else wrote? The app is only as good as my prompting and testing, but this was quick enough that I could either alter it myself or let Claude do it.

After my first Claude Code session, I had US$1.09 left of my $5. After this, I have $0.17. I’ll be buying more credits or subscribing, as I am finding Claude to be very useful.
