The Book of Redgate–Being Reasonable

As a part of the Book of Redgate, we have a series of (red, of course) pages with the title “What we believe”. These are our values, as set up by the founders. The first one of these is:

You will be reasonable with us

We will be reasonable with you

Two simple sentences, but they really encapsulate how we try to work together. We know there are stressful times and hard times, and while we all want to follow the golden rule (treat others as you would want to be treated), we sometimes fail. Still, we try to be reasonable with each other.

If I ask for something, others should try to accommodate me. If I ask for too much and they tell me so, I should try to understand that. Being reasonable means having sound judgment, being fair and sensible, and not being extreme.

We try to get along with each other. A good example is setting normal working hours while being willing to flex when others need it. If someone goes above and beyond, we recognize that and perhaps go out of our way to make it up to them.

One example of this stands out in my mind. At our annual company meeting, our CEO told a story of a deal they were trying to close during the year. A crucial part of this deal was one employee, who had previously scheduled a holiday. As the deal was getting close, and in danger of falling through, this employee came off holiday to help finish something. Our CEO not only recognized this, but personally thanked them in front of the company, gave them extra holiday to make up for it, and sent a gift.

We are reasonable with each other, all of us being willing to bend and flex, but not abusing that willingness.

Just like a family does with each other. At least, my family does.

I have a copy of the Book of Redgate from 2010. This was a book we produced internally about the company after 10 years in existence. At that time, I’d been there for about 3 years, and it was interesting to learn some things about the company. This series of posts looks back at the Book of Redgate 15 years later.

Posted in Blog | Tagged , , | 4 Comments

A Domain for Data

A domain is a set of possible values (among other definitions). I use this word a lot in my work, often for a problem domain (the thing you’re trying to solve) or the domain of possible values (like the list of US States and Territories). That last one is interesting, as it is often the set of data we put in a reference or lookup table to drive a form on a screen.

There is also a domain in the SQL specification, which I never knew about. I was reading an article from Joe Celko on the CREATE DOMAIN statement. This doesn’t exist in SQL Server and T-SQL, but it has been part of PostgreSQL for quite a few versions. The article discusses the definition of a domain from a few very experienced database designers.

What’s interesting is that a domain can be a list of values, but it can also be an expression against which values are checked. For example, we might choose the domain of non-negative integers, which requires that each value x satisfies x >= 0. That seems fine, though it looks like a CHECK constraint to me, which we already have.
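To make the comparison concrete, here is a minimal PostgreSQL sketch of the non-negative integer example written both ways. The table, column, and domain names are my own invention, not from Celko’s article:

```sql
-- PostgreSQL sketch: a domain vs. an equivalent CHECK constraint.
-- All object names here are illustrative.

-- Define the rule once as a domain, then reuse it across tables.
CREATE DOMAIN positive_int AS integer
    CHECK (VALUE >= 0);

CREATE TABLE orders (
    order_id positive_int PRIMARY KEY,
    quantity positive_int NOT NULL
);

-- The same rule as per-column CHECK constraints, which must be
-- repeated on every table and column that needs it.
CREATE TABLE invoices (
    invoice_id integer PRIMARY KEY CHECK (invoice_id >= 0),
    line_count integer NOT NULL CHECK (line_count >= 0)
);
```

One practical difference worth noting: an ALTER DOMAIN changes the rule everywhere the domain is used, while CHECK constraints have to be altered table by table, which cuts both ways if your business rules have exceptions.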

The article is a little light and doesn’t give many examples of how or why a domain might be needed, or why it might be better than a constraint. Perhaps there are good reasons, but I struggle to think of any. Certainly I am cautious about tightly constraining which data is allowed, especially since many businesses have rules with frequent exceptions.

Would you use a domain for any data in your system? Do you think your business rules are firm and settled enough to add domains? Perhaps they are, and if so, that’s great. I suspect many of you are like me in being careful about where and how many CHECK constraints you use. Those can be very hard to change or remove when an exception occurs, so the fewer the better.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


Database DevOps Metrics

The DORA organization is dedicated to helping others build software better and faster, at higher quality, and more efficiently. They continue to compile and publish the Accelerate State of DevOps report every year, which is a fascinating read.

As part of the report, they have identified four key metrics that distinguish high-performing software organizations. These are divided into two areas: throughput and stability. The throughput measures are change lead time and deployment frequency. The stability measures are change failure rate and failed deployment recovery time.
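As a rough illustration, and not anything from the report itself, the four measures can be computed from a simple deployment log. The record shape and dates below are invented for the example:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log. Each record notes when the oldest change in
# the release was committed, when it deployed, and whether it failed.
deployments = [
    {"deployed": datetime(2025, 3, 3, 10),  "committed": datetime(2025, 3, 1, 9),
     "failed": False, "recovered": None},
    {"deployed": datetime(2025, 3, 5, 15),  "committed": datetime(2025, 3, 4, 11),
     "failed": True,  "recovered": datetime(2025, 3, 5, 17)},
    {"deployed": datetime(2025, 3, 10, 9),  "committed": datetime(2025, 3, 8, 9),
     "failed": False, "recovered": None},
    {"deployed": datetime(2025, 3, 12, 14), "committed": datetime(2025, 3, 11, 10),
     "failed": False, "recovered": None},
]

# Throughput: median change lead time and deployments per week.
lead_times = sorted(d["deployed"] - d["committed"] for d in deployments)
median_lead = lead_times[len(lead_times) // 2]
span_weeks = (deployments[-1]["deployed"] - deployments[0]["deployed"]).days / 7
deploys_per_week = len(deployments) / span_weeks

# Stability: change failure rate and failed-deployment recovery time.
failures = [d for d in deployments if d["failed"]]
change_fail_pct = 100 * len(failures) / len(deployments)
recovery = [d["recovered"] - d["deployed"] for d in failures]
```

The point of sketching it this way is that every one of these numbers depends on what you count as a "deployment" and a "failure," which is exactly where databases get awkward.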

For a long time, as I chatted with various people doing database work, it seemed that most deployed relatively infrequently. They might deploy a couple of times a week for application changes, but database changes often went out less than once a week. There have always been people moving faster or slower, but that felt like the pace for the majority. These days, in the 2024-2025 timeframe, many people seem able to deploy database changes every week, often multiple times a week.

Lots of people have moved toward higher throughput, with more frequent deployments and shorter change lead times. Most of us can’t get more work out of people, so if we deploy more often, completed work gets released more quickly. Those two metrics make sense, and I think they are good measures, but not goals. What I find is that people often need to make changes quickly either to respond to the changing needs of their organization or to fix bugs they’ve introduced. I wonder what the ratio of the former to the latter is? I suspect it might be less than one if most of your deployments are fixing bugs. I don’t mind deploying software more quickly, but the design, modeling, and testing can’t be shortened.

The stability metrics look good for most people I speak with about deployments. I don’t see a lot of failures at deployment time, as code usually compiles and deploys. It’s often a day (or a week) later that someone notices the code doesn’t do what they expect. Is that a deployment failure? I think not. And what’s the MTTR if the problem is fixed an hour after being reported, a day after the deployment? Is it an hour, or a day plus an hour? I don’t know how these metrics apply to databases, especially if data gets mangled and has to be corrected manually over hours, days, or weeks. Is that in the MTTR? Can you even track it?
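The ambiguity in that question can be made concrete with a toy timeline (the dates are invented): the same incident yields two very different MTTR numbers depending on where you start the clock.

```python
from datetime import datetime, timedelta

# Hypothetical incident timeline.
deployed = datetime(2025, 3, 3, 9, 0)   # change shipped
reported = datetime(2025, 3, 4, 9, 0)   # bug noticed a day later
fixed    = datetime(2025, 3, 4, 10, 0)  # fix deployed an hour after the report

# Clock starts at the report: looks like a one-hour recovery.
mttr_from_report = fixed - reported

# Clock starts at the deployment: the same incident is 25 hours.
mttr_from_deploy = fixed - deployed
```

Neither number is wrong; they just answer different questions, which is why a team needs to agree on the definition before comparing its MTTR to anyone else’s.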

Metrics are good ways to measure your progress or health, as long as the metric doesn’t become the goal. I’ve run into a lot of customers using these metrics to measure their development, and it does help most of them for a period of time. Whether it continues to help them improve often depends on whether they keep focusing on their goal of delivering better-quality software faster, or start focusing on the metrics.

Steve Jones



Comparing My Current Schema with a Backup with SQL Compare

A customer asked if they needed to restore a database from backup in order to compare its schema with a live database. They don’t, and this post shows how.

This is part of a series of posts on SQL Compare.

Setting Up a Comparison

When I open SQL Compare, I see a screen that looks like what I’ve shown below, with a database to database comparison.

2025-03_0096

At the top, to the left of “Source”, there is a drop-down arrow. If I pick that, I see these choices: database, backup, snapshot, scripts folder, SQL Source Control, SQL Change Automation, and Flyway. Those last three are project types for Redgate tools.

2025-03_0097

If I select backup, I get a dialog where I can add my backup set files. I can add full or differential backup files, but not transaction log files. If I click “+ Add backup set files”, I get a file picker, and I can find a backup file.

2025-03_0099

Once I pick one, I see it in my list. I can now clear the list or add more files. The details of how this works are documented at: https://documentation.red-gate.com/sc/working-with-other-data-sources/working-with-backups

2025-03_0100

Once I have my backup, I’ll set the target, in this case a copy of Northwind that I’ve altered and named Westwind. This is on my local instance.

2025-03_0101

When the comparison completes, I see the differences. This was without any sort of restore on my instance. Note that Northwind_FullRestore has a different icon at the top left. I have this database on this instance, but it’s different from the backup.

2025-03_0102

If I expand the results, they look like any comparison. I see objects that are identical, objects that exist only on one side, and objects that differ. In this case, since we are trying to make the target look like the source, objects in my database but not in my backup would be dropped if I deployed all changes.

2025-03_0103

Summary

This is a short demo of using a backup as a comparison source against a live database. I haven’t shown a full workflow or scenario, but I’ll do that in another post. This is just a short proof that it works.

SQL Compare is an amazing tool that millions of users have enjoyed for 25 years. If you’ve never tried it, give it an eval today and see what you think.
