Finding Identity Columns – #SQLNewBlogger

I had to find a set of identity columns recently and thought this would make a good blog post.

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

Getting a List of Tables and Columns with Identity Properties

Finding out which tables have an identity isn’t very easy, especially in SSMS’s Object Explorer. This property is somewhat hidden, which is annoying to me as I use identities often. However, I get that this is just a property.

This is, however, stored in sys.columns.is_identity. This is set to 1 if the property is set, so filtering on this is a good thing. If we join to sys.objects on object_id, we can get a list of table names.

Here’s a short script to do this.

SELECT
   o.name AS TableName
 , c.name AS ColumnName
 , c.is_identity
FROM
   sys.columns AS c
   INNER JOIN sys.objects AS o
     ON o.object_id = c.object_id
WHERE c.is_identity = 1;

However, there’s an easier way. There is a sys.identity_columns view, which inherits from sys.columns and does the filtering for you. You can use this code instead.

SELECT
  o.name AS TableName
, ic.name AS ColumnName
, ic.is_identity
FROM
  sys.identity_columns AS ic
  INNER JOIN sys.objects AS o
    ON o.object_id = ic.object_id;

SQL New Blogger

I had to do this as a quick test for a client that wanted to do some checking of identity seeds. They asked for a list of tables to check, and I showed them how to get this quickly.
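For the seed check itself, sys.identity_columns also exposes the seed, increment, and last value used for each identity, so a quick sketch for that part might look like the query below. Note that last_value is NULL for a table that has never had a row inserted.

SELECT
  OBJECT_NAME(ic.object_id) AS TableName
, ic.name AS ColumnName
, ic.seed_value
, ic.increment_value
, ic.last_value
FROM
  sys.identity_columns AS ic
ORDER BY TableName;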

This was literally about 2 minutes to set up and about 5 minutes to write this post. Something you could easily do.

Posted in Blog | 5 Comments

Constitutional AI

I will admit that I don’t know a lot about AI (Artificial Intelligence) systems and how they are built. I’ve been playing with them a bit and haven’t been overly impressed with the results. I think some of this is that my work is creative; I’m used to being creative, and I find the AIs less creative. And less accurate. And they require a lot of editing. I don’t mind editing, but not if it takes longer than just writing things myself.

From my understanding, a lot of the models behind AI systems (chatbots, recommenders, etc.) are built with humans giving them feedback on their responses, in what’s known as RLHF (Reinforcement Learning from Human Feedback). Essentially, paid (often low-paid) people help to “guide” the AI into responses that are useful.

I don’t quite know how that looks, and I certainly don’t want a job doing that. Definitely not if it’s looking at a lot of UIs like the one in this article. Can you imagine being paid to read things like this and then try to rank them? I can’t imagine they keep getting great input from evaluators across the day. Maybe 9am-10am, but I’d bet the 4pm-5pm responses are quick clicks.

There was an article about a company trying something different: constitutional AI training. There’s a better description on the Anthropic website. It seems in this case they are creating some principles and limited human feedback, but then relying on an AI to give feedback to another AI? Or itself? I have to admit that I’m not completely sure of what happens here.

Ultimately, I like the idea here, but I think the idea of a single LLM/AI model that suits every situation, or one that works in every geography doesn’t make sense. We have different thoughts among people and different cultures all over the world. I’d expect that we might have different types of AIs in different situations or environments. The one that helps decide how to deal with nuclear safety likely needs to be different from the one governing traffic signals. I certainly don’t want one TruthGPT to be the one true voice on all things.

The idea of AI systems, assistants, and more seems more complex and stranger than anything I’d have imagined from reading science fiction. As with many things, the reality is far different from the speculation I’ve had about how I would respond or want a system to behave. I think that’s the nature of science fiction; it picks specific situations and tailors the story to fit. The real world is much more messy.

I don’t know where we go, but I’m curious as many of you have had more exposure to AI. Is it helping? Hurting? Useful? Are you excited or worried for the future? I’m curious what you think, mostly because I’m not sure what I think.

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.

Posted in Editorial | Comments Off on Constitutional AI

Getting a Summary of Changes in SQL Compare 15

A client asked for a summary of changes, so I wrote this quick post to show where to find this in SQL Compare 15.

This is part of a series I have on SQL Compare from Redgate Software. It’s an amazing piece of software that you should try if you haven’t. Download an eval today.

If you use SQL Compare, you might find yourself in situations like this, where there are a bunch of changes.

[Screenshot: SQL Compare results for the (local).SQL2017 SimpleTalk project, showing a number of changed objects]

This isn’t a lot, but when you build the script, you’re then confronted with a lot of information. If I choose all those objects, here’s my script in the deploy wizard.

[Screenshot: the generated deployment script]

This is a lot of code to look through. Most people can do this, but it’s also easy to miss something in this long listing. Humans make mistakes, and we want to minimize mistakes.

The UX isn’t great, but if you look at the top of the code window, below the 14 objects part, there are three items: Deployment Script, Summary, and Warnings. If there are significant warnings, that will actually be the default screen, but the summary is something people miss. You can see this below:

[Screenshot: the Deployment Script, Summary, and Warnings tabs above the code window]

If I click Summary, I see this:

[Screenshot: the deployment Summary list]

This is a much more manageable list of things that are being touched, and a good list to show in a standup or code review for others to watch.

Grabbing a screenshot is something you could easily send to people as well. If you use the Flyway solution, we’ll also generate a report of this in your CI or deployment processes.

Give this a try the next time you use SQL Compare. If you don’t use it, give it a try. It’s amazing.

Posted in Blog | 2 Comments

We’re Not Faster with AI

At Redgate Software, we’ve been trialing Copilot from GitHub with our developers. I managed to get access for this experiment and have tried a few things, though I’m not sure I’ve found it very useful. I’ll continue to work with Copilot, but for now, I just don’t find it helping me with the types of tasks I do.

However, our chief of technology, Jeff Foster, was interviewed about how he sees AI and Copilot at Redgate. It’s a good read, and I found a few fascinating things in the piece. First, I was worried about this a bit, as we have explicit guidelines for developers that say we can’t take the output from an AI and put it in our codebase. We worry about legalities since we sell software. How does that work with Copilot?

Well, I learned that there are a couple of things in here. First, with a business subscription, we can have the AI not show us copies of public code. If more than 150 characters of a suggestion match public code, it’s hidden. This is in the Copilot docs as well. I don’t know if 150 is a good length or too long, but at least there are limits. We also found that Copilot doesn’t retain any snippets, which is important for us. We don’t want to share Redgate code with others, especially competitors, unless we are explicitly putting a public repo out there.

The second thing I found really interesting is that we have a lot of guardrails around the way we produce code. We have tests, lots of automation to evaluate things, and plenty of peer review. I see requests for PRs in Slack constantly amongst our teams. We know that the code which gets generated likely needs some editing, but also that we want to be sure that whether it’s sent as is or edited by a developer, the code is still reviewed. Having these guardrails, tests, etc., helps us experiment with new ideas without impacting our productivity.

Perhaps the most interesting part for me was near the end. Jeff doesn’t think that Copilot makes us faster. He says “We probably invest ten times more time finding the right problem to solve than on the actual coding.” That makes sense, given what I’ve observed over the last 15 years. We are deliberate and careful about picking problems to solve and producing high-quality code. Sometimes it makes me a little crazy as I want things to move faster, but I’ve learned to appreciate that overall, this is an approach that doesn’t get us into places where we struggle to onboard developers to new projects or adjust our code to meet new requirements.

I’ve got a few quiet weeks coming up, so I’m going to dig in more and use Copilot a bunch to see what I can get out of it. If you’ve tried it, let me know what works or doesn’t. If you want me to try and solve some problem for you, drop a note in the discussion with the challenge you have.

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.

Posted in Editorial | Comments Off on We’re Not Faster with AI