When Work Isn’t Done

Software development is a challenge for all of us: there are many demands on our time, and we need to ensure our code solves a problem correctly, efficiently, and completely. Juggling the workload by yourself is one thing, but add in a team of developers and the complexity quickly grows.

The real world is chaotic, and despite the best efforts of project managers and scrum masters, our software development life cycle doesn’t always proceed smoothly. I wonder how many of you run into this situation and how you deal with it.

Developer 1 gets a piece of work; let’s call this A. They complete it and send it to the QA team. Somewhere during this process, Developer 2 gets a different piece of work (B) and writes code. They send this to QA before A is completely tested.

Now, Developer 1 finds a mistake. Something doesn’t work, or they realize their solution is incomplete. QA now contains A + B, but A doesn’t work and needs revision, while B passes testing and needs to be deployed. If your codebase and QA environment both contain A and B, how do you strip out A and ensure B is deployed to production while A isn’t?

If this is C# or Java, you might have one solution, even if both changes are in the same class. If this is database code, you might have a different set of issues to deal with.
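
To make the database side concrete, here is a hypothetical T-SQL sketch (the procedure, table, and column names are invented): if A and B both touch the same stored procedure, the object sitting in QA is the merged result of both changes, so releasing B alone means rebuilding the procedure from a source branch that never contained A.

```sql
-- Hypothetical: in QA, the deployed procedure contains both changes merged together.
CREATE OR ALTER PROCEDURE dbo.GetOrders
AS
BEGIN
    SELECT OrderID,
           OrderDate,
           CustomerRegion,   -- added by change A (failed testing, must not ship)
           DiscountAmount    -- added by change B (passed testing, must ship)
    FROM dbo.Orders;
END;
GO

-- To release B alone, the procedure has to be rebuilt from a branch that never had A.
CREATE OR ALTER PROCEDURE dbo.GetOrders
AS
BEGIN
    SELECT OrderID,
           OrderDate,
           DiscountAmount    -- change B only
    FROM dbo.Orders;
END;
GO
```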

Really, the question is: can you reorder work in your deployment process? I find many customers don’t consider this when they are evaluating their software pipeline. They somehow assume that if code gets to QA, it’s good, which is nicely optimistic but not realistic. At some point, we’ll deploy code to QA that doesn’t work. The more developers we have, the more likely this is, and the more demands on our time, the more likely we need to reorder work and release one thing but not another.

As a DB developer and DBA in a company 20 years ago, I built a process that forced us to reset the QA environment and redeploy B only (using a branch of code that stripped out A) for re-testing. This ensured that we tested what was actually going to be deployed. However, I find a lot of organizations can’t do this or don’t want to. They prefer to hope that a human can either extract all of B or strip out all of A, and then release partially tested code without issues.

I find that to be a poor idea. In this era of regular staff changes, staff of varying quality, and the high complexity of software, this is asking for mistakes. With cheap hardware, virtualization, and the ability to provision copies of environments, we ought to do better.

How do you handle this today? Depend on humans to not make mistakes? Hope for the best? Or follow a repeatable, reliable process that accounts for inexperience and human error?

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.


About way0utwest

Editor, SQLServerCentral
This entry was posted in Editorial.

2 Responses to When Work Isn’t Done

  1. I have a strict policy that if code changes in any way, it has to be re-tested, period. Any time we have to update even the comments in some piece of code (i.e. stored proc, view, UDF), the version increments and a backup of the new version is added alongside the backups of the previous versions. That may seem like overkill, but it ensures we know exactly what version of something we are dealing with. There’s been more than one occasion where a comments-only change broke something, because at compile/run time the machine did not interpret the change the way it was intended. A great example is T-SQL’s single-line comment, which uses two dashes. While this is a valid way to comment in T-SQL, if the query that contains it is fed into something that (for whatever reason) removes the carriage returns in your code, that comment line comments out everything that comes after it and breaks your code. We’ve had to deal with this a number of times because the accounting software we use does just that with any custom code we’ve written, so we always use multi-line comments in T-SQL now to avoid that.
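
     A minimal sketch of that failure mode, with made-up table and column names:

     ```sql
     -- A multi-line query using a single-line comment:
     SELECT OrderID, OrderDate   -- key columns only
     FROM dbo.Orders
     WHERE OrderDate >= '2024-01-01';

     -- If a tool strips the carriage returns, the statement collapses onto one line:
     -- SELECT OrderID, OrderDate   -- key columns only FROM dbo.Orders WHERE OrderDate >= '2024-01-01';
     -- Everything after the -- is now part of the comment, so the query is broken.

     -- The block-comment form survives the same transformation:
     SELECT OrderID, OrderDate   /* key columns only */
     FROM dbo.Orders
     WHERE OrderDate >= '2024-01-01';
     ```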


  2. way0utwest says:

    That was always my policy. One of the first things I did at that job, learning from the previous one, was to build a process to quickly reset QA so we could retest things. I typically haven’t been bitten by comments, but I’ve seen that. Multi-line comments are certainly a good idea for that reason.

