Better Trigger Design: #SQLNewBlogger

I had someone ask me about using triggers to detect changes in their tables. This is the third post in the series; the first one covered the initial table setup.

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

The Setup

We’re using the same table from the first post: the dbo.Customer table, with a PK and five other fields. Here is the data in the table:

2025-05_0231

I had this trigger in the last post and showed how it captured updates to the ContactEmail field.

CREATE OR ALTER TRIGGER Customer_tru ON dbo.Customer FOR UPDATE
AS
BEGIN
     IF UPDATE(CustomerName)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.CustomerName changed')
     IF UPDATE(AddressKey)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.AddressKey changed')
     IF UPDATE(CustomerStatus)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.CustomerStatus changed')
     IF UPDATE(CustomerContact)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.CustomerContact changed')
     IF UPDATE(ContactEmail)
     BEGIN
         INSERT dbo.logger (logdate, logmsg)
         SELECT GETDATE(), 'ContactEmail updated from ' + d.ContactEmail + ' to ' + i.ContactEmail
          FROM inserted i
          INNER JOIN Deleted d ON i.CustomerID = d.CustomerID
     END
END

This seemed to work, but did it really?

The Problem

Let’s illustrate the big problem with this change. I’ll run this code:

UPDATE dbo.Customer
  SET ContactEmail = 'andy@sqlservercentral.com'
  WHERE CustomerID = 2;

If I do this, here are the results:

2025-05_0232

I get a NULL? Why? The original value is NULL, and when I concatenate NULL with other values, the result is NULL. Not ideal.
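The behavior is easy to reproduce on its own; with SQL Server's default CONCAT_NULL_YIELDS_NULL setting, any NULL operand makes the whole + expression NULL:

```sql
-- Any NULL operand makes the entire + concatenation NULL
-- (default CONCAT_NULL_YIELDS_NULL behavior).
SELECT 'ContactEmail updated from ' + NULL + ' to andy@sqlservercentral.com';
-- returns NULL
```

The CONCAT() function, by contrast, treats NULL operands as empty strings, which is another way out of this besides the COALESCE approach used below.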

Let’s fix this problem. I’ll use a function to handle NULL values. Note that I need to do this for both the inserted and deleted tables. Here’s the new trigger.

CREATE OR ALTER TRIGGER Customer_tru ON dbo.Customer FOR UPDATE
AS
BEGIN
     IF UPDATE(CustomerName)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.CustomerName changed')
     IF UPDATE(AddressKey)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.AddressKey changed')
     IF UPDATE(CustomerStatus)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.CustomerStatus changed')
     IF UPDATE(CustomerContact)
         INSERT dbo.logger (logdate, logmsg) VALUES (DEFAULT, 'dbo.Customer.CustomerContact changed')
     IF UPDATE(ContactEmail)
     BEGIN
         INSERT dbo.logger (logdate, logmsg)
         SELECT GETDATE(), 'ContactEmail updated from ' + COALESCE(d.ContactEmail, 'null') + ' to ' + COALESCE(i.ContactEmail, 'null')
          FROM inserted i
          INNER JOIN Deleted d ON i.CustomerID = d.CustomerID
     END
END

We can see this handles the null appropriately.

2025-05_0233

In this case, I’ve chosen to replace a NULL value with the word ‘null’. This means something to me, but I could just as well have replaced it with ‘blank’ or any other word. In many applications a developer might display a null value as a blank, so choose what works for you.

I’m also including an example to show this works for multiple rows. Here I’ll update multiple rows and we can see each is inserted into my log.
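The multi-row test can be as simple as an UPDATE whose WHERE clause matches several rows; the specific IDs below are just an illustration, not the exact statement from the post:

```sql
-- Hypothetical multi-row update: every affected row produces a log entry,
-- because the trigger's SELECT reads all rows from inserted/deleted.
UPDATE dbo.Customer
  SET ContactEmail = 'info@sqlservercentral.com'
  WHERE CustomerID IN (1, 2, 3);
```

This is exactly why the set-based INSERT…SELECT in the trigger matters: a row-at-a-time assumption would miss rows in a multi-row update.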

2025-05_0234

This is a good reason to audit certain activities, as people will sometimes make these mistakes and update lots of data.

This trigger is slightly more useful, and handles the NULL cases, but it still isn’t perfect. Imagine I need to parse out changes, generate the reverse transactions, or even search for certain changes. Stuffing a lot of data into a single field is overloading it and making it less useful over time. What we’d really want to do is separate pertinent data into different fields. In a NoSQL world, we might do this by using a JSON schema to track the before and after.

We could do that here: just stuff JSON into the log message, then read it back out and deserialize it.
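As a sketch of that idea (assuming SQL Server 2016+ for FOR JSON; the before/after message shape is my own invention, not from the post), the ContactEmail branch of the trigger could log structured values instead of a sentence:

```sql
-- Sketch: log the change as JSON instead of a free-text sentence.
-- Assumes SQL Server 2016+ (FOR JSON); the col/before/after shape
-- is an invented convention for illustration.
INSERT dbo.logger (logdate, logmsg)
SELECT GETDATE(),
       (SELECT 'ContactEmail' AS col,
               d.ContactEmail AS [before],
               i.ContactEmail AS [after]
        FOR JSON PATH, INCLUDE_NULL_VALUES, WITHOUT_ARRAY_WRAPPER)
  FROM inserted i
  INNER JOIN deleted d ON i.CustomerID = d.CustomerID;
```

Reading it back is then a query, e.g. JSON_VALUE(logmsg, '$.before'), rather than string parsing, and INCLUDE_NULL_VALUES handles the NULL case without COALESCE.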

SQL New Blogger

This post modified our trigger to address a previous design problem: not handling nulls. We also showed how to test the trigger with multiple rows. This shows I’ve added knowledge to my skillset and can test what I’m trying to do.

Write your own blogs that might examine what you’ve done poorly in the past and how to fix the problems you’ve identified, even in simple code. Many of us do this a lot.

This was a 20-30 minute post for me. You could likely do it in a similar amount of time.

Posted in Blog | Tagged , , | 1 Comment

Patching the Patch

I had to make a few changes to a SQL Saturday event recently. The repo is public, and some of the organizers submit PRs for their changes, and others send me an email/message/text/etc. for a change. In this case, an organizer just asked for a couple of image updates to their site. I opened VS Code, created a branch, added a URL for the images, and submitted my own PR. After the build, I deployed it.

And it didn’t work.

I had a broken image. I checked the URL in code and realized I had “events” copied before the URL, which wasn’t valid. Ok, edit the URL to be correct and repeat: new PR, build, merge, deploy.

And it didn’t work.

I was looking at the code live on the site, the code in the repo, and I was trying to reconcile paths and file names and keys and values and a few other things.

I realized the world for a developer hadn’t changed a lot; in fact, I was in the age-old loop: deploy, patch, patch the patch, fix the patch for the patch, and so on. I’m not sure even testing could have helped here, as these were one-off data changes that affected the site for users. If I enter the wrong data, it’s wrong. I can’t easily test for this.

I have written code that was wrong, and a few simple tests would have caught my issues. I’ve also written code that isn’t easy to test. If I am adding or changing data, it’s hard to test that. Often, I might do some copy/pasting between the code and the test to generate the test. If I’ve typoed something, the typo continues through the test (in some cases). Even using a code generator or an AI to produce the INSERT or UPDATE code might not solve the problem. They might read my typos in a prompt.

One of the best things to help code quality in the last few decades is continuous integration (CI), where we have automated systems that compile code, test it, and run it. It’s not perfect, but it does help reduce the silly mistakes many of us likely make every day when writing code. These can’t prevent typos and issues, but if we are testing intermediate systems, hopefully somewhere along the way, a human or AI agent tries to verify that the things we were typing exist and can catch a typo.

In this case, I had to find where I’d mistyped the line and realized I had the path wrong. The image was in a subfolder, and I needed to add that to the img URL.

Working with data is hard, and it’s a constant source of simple mistakes. I don’t know that we’ll ever get away from patching the patch when data manipulation is involved.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


Moving to Rancher Desktop

I’ve been very happy with Docker Desktop for years, running it on both laptop and desktop. However, a corporate decision was made to move to Rancher Desktop, so I now have an unexpected “opportunity” to learn something new.

Here’s a short post on how things went on the desktop and laptop.

Getting Rancher Desktop

I had never heard of Rancher. I’ve met a number of Linux/Oracle people using Podman, but not Rancher. You can download Rancher from rancherdesktop.io. This is a project from SUSE, of Linux distribution fame, and one of their many projects.

The install is a standard next/next/next Windows MSI, though once installed, I found I needed WSL2 on my desktop. On my laptop, I ran wsl -v and saw this.

2025-05_line0035

On my desktop, I installed this first and tried to get things going. It installed, downloaded some Kubernetes components, asked me to choose containerd or dockerd (I chose the latter), and then was running. Once the engine was up, my docker commands worked and I could start containers.

There was a problem, though: a few of my containers were using an unnamed volume, and after uninstalling Docker Desktop, I couldn’t find the volumes. I’m lightly concerned I’ve lost some data, which isn’t the end of the world, but it is annoying.

Learning From My Mistakes

On my laptop, I decided to try and make this smoother. First, I uninstalled Docker.

2025-05_line0068

Next I ran the Rancher install.

2025-05_line0069

Once this installed, I started the desktop program. It again downloaded some Kubernetes components and then seemed to be up and running.

2025-05_line0070

I tried some docker commands after starting Rancher Desktop (RD), but before I realized it was still downloading components. That produced the first docker error shown below; the second was taken once RD showed an online status.

First Tests

I have a few containers set up to run SQL Server with docker compose files. I have a batch file I double-click for “docker compose up” (and another for down). I clicked one and saw this: images downloading.

2025-05_line0072

I assumed an image store would be an image store, but apparently not; Rancher must use a different location. That’s an interesting thing I need to check: do I have extra images lying around? I think my Docker Desktop was using containerd, so maybe that’s part of this.

I could connect from SSMS fine and I could see my container running in the desktop (I had to switch away from the Containers item and back).

2025-05_line0081

I also checked some docker commands and they seemed to work. I could get a list of containers, and apparently Rancher runs a bunch itself.

2025-05_line0082

I could also see logs from my container, which is handy when I need to debug an issue.

2025-05_line0083

Summary

I have to admit that after a week, I’m still nervous. Losing a volume, whether it’s really gone or I just can’t find it, is disturbing. I hate losing data.

Rancher seems to work fine for the basic things I do with containers, though the interface feels incomplete and simple. I can’t set hardware limits, the interface doesn’t update live, and I feel like I’ve lost a lot of options. I didn’t really use more than the basics, but I still feel some loss.

I haven’t heard complaints from anyone internally, so I’m assuming that most container-based things still work. We’ll see how I like this over the next month.


The Third Sabbatical

I can’t believe I’ve been at Redgate long enough to get a third sabbatical. I’m very lucky to have this job, still enjoy it, and get this benefit. I’ve scheduled it for Jun 30 – Aug 8, and I’ll be away from work during that time.

The idea of the sabbatical is to get away from work and recharge. It’s an extended break, and while some people use this to further their career in some way, others just try to get away from their daily life.

I wrote about a bunch of projects I had during my first one. The second one was less planned, but I also had a quick review of the time away.

For this one, I have no great plans right now. In fact, life has been so busy this year, I haven’t even had time to think about what to do, but I didn’t want to delay things, so this was the best time to take it.

I started this post to get me to at least lightly focus on the time and what the possibilities could be. For now, I have a few things to do:

  • Work with a contractor to replace the covering on our riding arena
  • Rebuild a better generator house and re-wire a circuit
  • Replace some damaged fencing
  • Rebuild 2 horse feeders
  • Organize the garage a bit more