Does Speed Compromise Quality?

One of the most hyped aspects of DevOps is the speed and frequency of releases. From Flickr's 10 deployments a day to Etsy's 50, we've seen companies showcase their deployment frequency. Amazon has reported deploying code every 11.7 seconds on average. That seems crazy, but with so many applications, lots of developers, and each change being smaller, perhaps it isn't. With the forum upgrade here at SQLServerCentral, we had two developers (with occasional other sets of eyes reviewing code changes), and while we were bug fixing, we deployed multiple times per day.

Is that a good idea? Does a rapid set of changes mean that quality drops and more bugs are released? It certainly can. If you're a development shop that struggles with releases and code quality, producing software faster is not going to help you. If management pressures you to adopt DevOps and deliver code faster without changing your culture, without implementing automated testing (including for your database code), and without using automated scripts or tools to deploy software, then you are going to ship more bugs, faster. You'll still get to change direction quicker if you find you're building the wrong software, but you'll also become less efficient because of bugs (and technical debt).

There’s a fantastic (long) video about refactoring code in two minutes. That's a bit of an oxymoron, since the presentation is nearly two hours long, but it comes from a real project. The presenters' approach is that good unit testing allows them to refactor code, to change things, without introducing bugs. That’s a big part of the #DevOps philosophy. I always note in my DevOps presentations that if you can’t implement unit testing, meaning you won’t bother, then you don’t get much benefit from CI, CD, or any DevOps ideas. Tests protect you from yourself (and others).
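As a small illustration of how tests protect a refactoring (the function and values here are hypothetical, just a sketch using Python's unittest, not anything from the project in the video):

```python
import unittest

# Hypothetical example: a small function we want to refactor safely.
def order_total(prices, discount=0.0):
    """Sum a list of prices and apply a fractional discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

class OrderTotalTests(unittest.TestCase):
    # These tests pin the current behavior; if a refactor breaks either,
    # the suite fails before the change ever ships.
    def test_no_discount(self):
        self.assertEqual(order_total([10.0, 5.0]), 15.0)

    def test_ten_percent_discount(self):
        self.assertEqual(order_total([10.0, 10.0], discount=0.10), 18.0)

if __name__ == "__main__":
    unittest.main()
```

With tests like these in place, you can rewrite the body of `order_total` however you like; a green suite is what makes rapid change safe.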

In many of the DevOps reports, companies that release faster report fewer bugs and less downtime. Since Amazon increased their release speed, they have seen 75% fewer outages over the last decade, 90% less downtime, and far fewer deployments causing issues. TurboTax made over 100 production changes during tax season and increased their conversion rates. The State of DevOps reports bear this out (2016 here). Thousands of responses show that speed doesn’t cause more bugs.

Why not? Because these companies work differently.

If your management won’t let you change the way you work, if you don’t implement automated unit tests (and other types of tests), if you don’t take advantage of version control, if you don’t ensure every change is scripted, then you won’t work differently, and speed will bring bugs.

You can do better. Your company can do better. Will they?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.2 MB) podcast or subscribe to the feed at iTunes and Libsyn.

About way0utwest

Editor, SQLServerCentral
This entry was posted in Editorial. Bookmark the permalink.

2 Responses to Does Speed Compromise Quality?

  1. Hi Steve,

    Is QA only done through unit tests, or is QA still a longer process, even if done in a couple of hours? Also, who is creating the unit tests? It seems dangerous to only have the devs write the unit tests, since they tend to use software differently than QA or users.


  2. way0utwest says:

    QA is done in many ways. I think there are things that are too difficult to automate. Are text boxes aligned, do colors clash, can I easily see a link? Those are things manual QA may still need to check. However, the more functional stuff that can be automated, the better off you are, because you can run those tests over and over and help prevent regressions. Regression errors are common: we change code and don't realize we've broken old functionality.

    Devs need to write unit tests (certainly others can, too), but the best process seems to be to write tests as you discover issues, beyond the basic "does this do what I said it would do". I wouldn't charge devs with writing every possible combination of edge-case data, but they should have tests that evaluate whether code returns the results we expect. Those tests need to be shared, since seemingly unrelated changes sometimes have ripple effects.

    I’d also say that anytime QA, or users, report a bug, a unit test needs to be written first. Devs won’t come up with all the test cases, QA won’t, users won’t. The idea isn’t to do this once and be perfect. It’s to do it constantly as you write code and get better and better.
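That bug-first workflow can be sketched like this (the function and the reported bug are hypothetical, purely for illustration):

```python
# Hypothetical example: users reported that parse_percent(" 50 %") raised
# ValueError. Following the bug-first workflow, a regression test is
# written to reproduce the report, then the code is fixed.

def parse_percent(text):
    """Parse a string like '50%' into a fraction; fixed to tolerate spaces."""
    cleaned = text.strip().rstrip("%").strip()  # the fix: strip whitespace
    return float(cleaned) / 100.0

def test_reported_bug():
    # Added when the bug was reported; it failed before the fix above.
    assert parse_percent(" 50 %") == 0.5

def test_existing_behavior():
    # Pins the behavior that already worked, so the fix can't regress it.
    assert parse_percent("25%") == 0.25

test_reported_bug()
test_existing_behavior()
```

The regression test stays in the suite forever, so that particular bug can never quietly return.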

