Friday Flyway Tips: Git Integration in Community Edition

Redgate has added Git integration to the free Community edition of Flyway Desktop. I saw the announcement and decided to write this post to show how it can work for a new project.

We do need Git installed, so head over to the free Git download if you don’t have it. Install it and you’re ready to go.
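Once installed, a quick check at any command prompt confirms Git is on your PATH (the exact version number will vary):

```shell
# Confirm git is installed and available; the version number will vary
git --version
```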

I’ve been working with Flyway Desktop for work more and more as we transition from older SSMS plugins to the standalone tool. This series looks at some tips I’ve gotten along the way.

Source Controlling Your Project

When you start Flyway Desktop Community, you should see the edition in the upper left, as shown here.

2024-08_0054

I’ll click Open project, and choose one of my existing projects. When I do that, I see all the migrations in the project. I can also select or add a target and run flyway commands from here.

2024-08_0055

What’s new is the right-hand sidebar, which now holds the VCS controls. If I click the left arrow in the upper right, the sidebar expands, and I can see I don’t have any changes. This bar wasn’t available previously, but now it is.

2024-08_0057

Let’s make a change. I’ll close this (click the arrow at the top) and return to the migrations screen. I’ll click the “add migration” button (the arrow points to this in the image below).

2024-08_0058

When the editor opens, I’ll add some code. I’ll also change the name. Notice there are no changes in the right sidebar.

2024-08_0059

When I save the file, a single change suddenly appears in the middle of the bar.

2024-08_0060

Expanding the sidebar and clicking on the middle icon, I see my one change has been added as a migration script.

2024-08_0061

I can add a comment and commit this or continue working. When I’m done committing, I can easily push my changes from here to the remote.

I’m manually managing scripts in Community Edition, but I can do it all from Flyway Desktop, including all the version control work.
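Under the hood, the sidebar is driving ordinary git operations. As a rough sketch of the equivalent CLI workflow, run here in a scratch repository with a hypothetical migration filename (not my actual project):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Saving a new migration file is what makes a "change" appear in the sidebar
mkdir migrations
echo "CREATE TABLE dbo.Demo (id INT);" > "migrations/V2__add_demo_table.sql"
git status --short            # shows the new, uncommitted migration

# Committing from the sidebar is the equivalent of add + commit
git add migrations
git commit -qm "Add V2 migration"
# git push                    # pushing needs a remote, so it's omitted here
```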

Flyway Enterprise

If you want to get more from Flyway, try out Flyway Enterprise. If you haven’t worked with Flyway Desktop, download it today.

If you use the Flyway Community CLI, download Flyway Desktop and get a GUI for your migration scripts as well as version control.

Video Walkthrough

No video walkthrough this week as I’m on the road.

You can check out all the Flyway videos I’ve recorded.

Posted in Blog | Comments Off on Friday Flyway Tips: Git Integration in Community Edition

Moving One File Across Git Branches: #SQLNewBlogger

I was working on some branching and merging with a customer, and they wanted to move a file from one branch to another without taking the entire commit. I had to dig in a bit and see how to cherry-pick a file, not a whole commit. This post looks at how this can work.

I’ll do this in the git CLI. I’m sure it works in many GUI clients, but when I do something strange or new, I like working in the CLI. Mostly that’s because when I make a mistake, the GUI clients often send me to the CLI anyway, so I like to stay comfortable there.

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers. You can see all my posts on Git as well.

A Simple Setup

First, grab a repo. In this case, I had a branching/merging repo I use with customers to show some things. I’ll use the repo at: https://github.com/way0utwest/BranchMerge

In this repo, I have a main branch, as well as dev and qa branches. There are likely some feature branches as well.

To start with, let’s check main. Everything is up to date (I pulled already):

2024-08_0059

Let’s also check QA. Same thing.

2024-08_0060

I’ll now make some changes on QA, adding a file and changing two others. Once I do this, here is my status, pre-commit.

2024-08_0061

If I don’t commit, these changes aren’t recorded on any branch, and I could switch branches and commit them there. However, once I commit, I might want to move them in a particular way. I’ll commit, and my status is clean.

2024-08_0062

Now, I could switch to main and do this to move one file:

git checkout qa -- README.md

However, that moves the file without any review. I wouldn’t do that, and you shouldn’t either. Instead, let’s create a PR.

Using a Branch for Peer Review

A PR is a pull request, but it also means we ask for some peer review. In my case, I’ll use this code to create a new branch and then pull in two changes: my modified readme and one of the other changes.

git checkout -b r9-qa

git checkout qa -- README.md

git checkout qa -- "V6__create proc gettwo.sql"

Ignore my typos; I’ve run four commands: checkout three times and status once.
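To see the whole flow end to end, here is a self-contained sketch in a scratch repository. The branch and file names mirror the ones above, but the repo is hypothetical, and for brevity only README.md is pulled across:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Base commit with two files on the default branch
echo "v1" > README.md
echo "v1" > "V6__create proc gettwo.sql"
git add . && git commit -qm "initial"
base=$(git rev-parse --abbrev-ref HEAD)

# Change both files on qa and commit
git checkout -qb qa
echo "qa change" > README.md
echo "qa change" > "V6__create proc gettwo.sql"
git commit -qam "qa work"

# Create a review branch from the base branch and pull in just README.md
git checkout -q "$base"
git checkout -b r9-qa
git checkout qa -- README.md
git status --short       # only README.md shows as a staged change
```

From here, committing and pushing r9-qa is what triggers the PR prompt on the remote.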

2024-08_0063

Now I’ll commit and push these changes to my r9-qa branch.

2024-08_0064

Once I do that, GitHub detects the push and offers to create a PR. I do, and I see the PR with these changes.

2024-08_0065

Now I can proceed with my flow, and my cherry picked changes are captured.

There is a git cherry-pick command, but it operates on whole commits, and often I find I need particular files from multiple commits while ignoring others in those commits. This file-level approach works well for database releases.
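For contrast, here is what cherry-picking a whole commit looks like; every file in the commit comes across. A minimal self-contained sketch in a scratch repo with hypothetical file names:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "base" > file.txt
git add . && git commit -qm "base"
base=$(git rev-parse --abbrev-ref HEAD)

# Commit a change on qa, then cherry-pick that entire commit back
git checkout -qb qa
echo "proc body" > newproc.sql
git add newproc.sql && git commit -qm "qa: add newproc.sql"
sha=$(git rev-parse HEAD)

git checkout -q "$base"
git cherry-pick "$sha"   # brings over the whole commit, not just one file
```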


SQL New Blogger

This post took me about 20 minutes to write. However, the learning and experimentation took over an hour as I read various links and dug into the docs and various posts.

This is a very useful skill, and one you can discuss in an interview. Write a similar post and you’ll be prepared for this type of question.

Posted in Blog | 2 Comments

A Kafka Introduction

I’ve heard of Kafka before. I know it’s an Apache project, and you can download it or read more at https://kafka.apache.org/. I knew it was a way of moving data around, some sort of ETL tool. More accurately, it’s a messaging and queueing system, the kind of tool that seems like a great idea but that everyone seems to struggle to work with.

And one that seemed complex. The overview is that Kafka is “a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, and containers in on-premise as well as cloud environments.”

Would I need that or use it? In a lot of my database work, I’m not sure that it would easily fit into most of the OLTP applications or data warehouse systems. Maybe. Hard to tell. Their description of event streaming and the definition of an event make it seem this is a catch-all system for moving log data around. One that may be so open-ended that it ends up requiring a lot of configuration for “my” system.

Here’s their definition of an event: An event records the fact that “something happened” in the world or in your business. It is also called record or message in the documentation. When you read or write data to Kafka, you do this in the form of events. Conceptually, an event has a key, value, timestamp, and optional metadata headers.

Recently I watched a Kafka presentation at THAT Conference (which was a fantastic event). In the talk, this sentence caught my eye: “[Kafka is] a pipe to move data from A to B, C, D”. I’ve certainly had that need, and sometimes configuring lots of pipes is work. If you’ve ever worked with replication and the publisher/subscriber model you likely get a twitch in your eye if a ticket is opened to configure a new subscriber. Not because the configuration is hard, but because the ongoing admin can be a pain.

The talk dives into some of the complexity of designing and implementing a Kafka system. For developers that might write to the stream or read from it, things seem simple. For admins and architects, less so, and I can’t help wondering what happens when a reader goes down. I have nightmares of replication subscribers being down and transaction logs not being reused.

Kafka doesn’t seem as complex as I thought before, but it certainly doesn’t seem simple or easy. Kafka is not a panacea for moving data around, but it is a well-understood and widely used technology. Those things mean more to me now that I find myself considering the challenges of maintaining a system over time and hiring staff who understand it. It’s something I’d consider using in the future, and maybe something I’d like to experiment with a bit more and learn how it works at a more practical level.

If you use it, or know more, I’d be interested in how well Kafka has worked for you, either as a developer or admin.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

Posted in Editorial | Comments Off on A Kafka Introduction

Less Junior Staff

As I’ve been working with some AI (Artificial Intelligence) technologies, what I’ve often found is that they produce junior-level code. The code I’d expect from someone early in their career or inexperienced in a particular area. That is, code that likely works but isn’t efficient or clean, or is perhaps incomplete in some way.

I’m sure AI technologies will improve, and we’ll be able to train them better for our environment. Just like we train junior developers to be better. However, what does that mean for junior people across the next decade? I ran across an interesting post on the death of the junior developer, which speculates we might have a problem as an industry.

The post references an article from Gene Kim, where a law firm sees a similar problem with their junior people, who are associates. That position might be equivalent to the junior developer in software. Someone with more experience and knowledge often reviews work and helps shape it, even though the junior person does the work. With AI, however, we might not need the junior person. Instead, the AI produces work the senior person has to review. Finding issues with associate work is a lot like finding hallucination problems in AI responses.

The same could be said of coding. There is plenty of poorly written code, but if senior people become good at writing prompts and getting the same code back that a junior developer would write, then how many junior people do we need? Arguably fewer, though you might still need a few. Or you might think that you need all the junior people and you’ll get 10x more work done, clearing your backlog. Certainly, I know most developers, DBAs, and other IT people have a large backlog of problems.

However, the problem with junior people using LLM (large language model) AIs and getting more done is that they might generate a lot more bad code, so much that your senior people can’t find the time to review the code and you end up with systems that contain even more technical debt than you have today. Perhaps we even find systems that don’t perform well enough for regular use or create constant issues that your developers try to fix with AI, which might not work. I can certainly see things deteriorating rapidly.

There’s a great quote in the Gene Kim piece: “I believe this furthers the case that AI helps the experienced people far more than inexperienced people — the seniors more than the juniors.”

I’m starting to think that might be the case. Senior people are going to become very productive, and very valuable. Junior people are going to struggle, and while they’ll get work done, the quality will vary. Maybe that’s good, or maybe we will start to see a rapid divergence of not only productivity but salaries. If you can hire a senior person to produce better code at the same rate as your 5 junior people, maybe you’ll want to pay that senior person $200k a year and reduce junior rates to $45k a year.

I don’t know that we’ll see rapid changes, as many organizations are slow to alter the way they hire, code, or structure their staff. However, as others find success, especially when it’s touted in places like the ETLS, I can see other managers being influenced. That will filter down over time to those who hire, who will pick the productive, senior-level people who can showcase some code skills in an interview. Craft a prompt to solve a problem, get some code back, refine it, explain where and why you’d change your prompt or use parts of the code, and we might see the AI-capable people getting hired quickly, and for fantastic compensation.

I’m not sure this will reduce junior staff much, mostly because of organizational inertia, but I do think that getting better at your craft and learning to use AI is likely to increase your future earnings.

Steve Jones

Posted in Editorial | Comments Off on Less Junior Staff