Staying Focused

Most of us know what it feels like to work in the flow. Time ceases to exist, and it often seems to have flown by when we stop working. We may not eat or take a break, just focusing on a problem and tackling it in a single-minded fashion. Whether it's a development task or some infrastructure effort, we can achieve flow.

At the same time, many of us struggle to find flow in a busy workday of meetings, interruptions, music, Slack messages, and more. Attention is a commodity, and many of us struggle to keep it focused on a single task. We may find that if we do achieve flow and something interrupts us, we struggle to get focused again.

When I give presentations and talk about remembering what code we wrote or changes we made last week, plenty of people will chuckle along with me. It's a challenge to remember what we were last doing, especially as we move to more DevOps-style work, completing small chunks of tasks. Even coming back to work the next day, after an evening of family or hobby time, can be distracting.

How do you get back into the flow more quickly? There are lots of books and advice around, but I thought this programmer's look at how he keeps himself organized around a busy life was interesting. He uses some tools, primarily built around software coding, to remind himself quickly not only of what tasks need to be tackled, but of where he was in the flow of the work. In some sense, this reminds me a bit of Andy Warren's efforts at keeping SQL Saturday organized.

Flow is hard to come by. To me, this is one reason why more workers ought to have private offices and fewer meetings. A developer or DBA in the zone, working in the flow, needs to be left alone. That's their most productive time, when they are most efficient and valuable. More organizations ought to try to foster this, not inhibit it.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.


SQL in the City Sydney in Two Weeks

It’s two weeks to our next SQL in the City Summit, this time in Sydney. I’m excited to be going back, though it’s a quick trip for me. Likely I’ll be fueled on lots of coffee for the two days I’m in Australia, so if I’m speaking a bit fast, remind me that the rest of you aren’t on a whirlwind, halfway around the world trip.


We had great reviews on our previous tour, and I was sad to miss Sydney, but we’re coming back on Sept 27. The schedule is slightly different, but we’re thrilled to have Damian Brady from Microsoft talking about their DevOps transformation.

Anderson and I will represent Redgate and deliver a few sessions on our tools, and we'll have the amazing Hybrid DBA, Hamish Watson, coming as well.

Register today and come learn more about Compliant Database DevOps and Redgate solutions to help you write database code more efficiently.


The State of DevOps Report for 2019

It seems like just a few weeks ago that I went over the 2018 results with Gene Kim. That was an exciting few weeks for me, digging through the report in prep and then having the opportunity to host a webinar with Gene. It's been almost a year, though, and the 2019 report is out. You can download the 2019 Accelerate: State of DevOps Report from the DORA site and read it yourself, but I found a couple of interesting things in the report.

As expected, companies adopting DevOps are outperforming those that don’t, at least in the metrics measured. In fact, it appears that the high performers continue to outperform low performers in two major ways: speed and stability. What’s interesting, and as was pointed out by Kendra, is that these aren’t on the same scale, meaning there isn’t a trade-off between the two metrics.

You can move fast and achieve higher stability. In fact, moving fast can lead to higher stability if you are learning and growing from your efforts to build and deploy software. This includes the database, though that’s not called out in this year’s report. Instead, the report talks about ways to achieve speed and stability.

One of those is tools, and more importantly, easy-to-use tools. Staff will turn over or change positions. Knowledge can be hard to share in busy environments, so tooling is important. Bespoke, bought, homegrown, or open source, it's important that the tooling is easy to understand and that it performs its job efficiently.

There are other items in the report, but one thing I think we often ignore is that sticking with your old process and people doesn't mean you'll fail. You clearly have people and processes in place that work, as you're in business now.


Both speed and stability help you work more efficiently. They help you compete better against others, and should ensure you reach your goals sooner. Speed increases value for customers, which in turn should help you improve how your organization functions. Stability makes both customers and staff happy. One thing the report points out: productivity has a positive impact on workers. They deal with stress better and burn out less.

Happier employees are hopefully helping your organization with more creativity in solving problems, more engagement with customers’ success, and less turnover. DevOps is better for your software, and your staff.

If you want to learn more, I’m hosting a webinar next week with Jez Humble to go over the report.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.



There are a lot of Mean-Time-To-xxxx acronyms. Many of us have heard of the mean time between failures (MTBF) for disk drives. Some of us use that information when considering which model to buy. In the DevOps world, there are also the mean time to failure (MTTF) and mean time to resolve/repair (MTTR). There is one more that I think is very interesting, and that is the MTTD: the mean time to detect an issue. This is the average amount of time it takes you to detect there is a problem after the problem occurs.

There was an outage at Monzo recently due to a database upgrade, which was recounted on their blog. In this case, their MTTD, or rather their actual time to detect, was a minute. I think that is amazing. In fact, I'm somewhat skeptical that an alert is raised, someone looks at it, the customer service desk calls the Ops team (who were upgrading servers), and the Ops person realizes there is an issue, all in the space of a minute or two. It's possible, but I have found that help desk personnel who discover something can take a few minutes to verify the issue and then scramble to find the on-call phone number. Relaying information can take a minute or two, so if this is accurate, huge props to the IT staff at Monzo.

Many of us strive for a high availability number for our systems, especially databases. This is one of the drivers for the growing use of availability groups in SQL Server systems: ensuring the database is highly available to clients. In determining availability, we often speak of the percentage of time that a system is available. The holy grail is five 9s, or an uptime of 99.999% of the year. That gives you just over 5 minutes of downtime a year.

In the case of the Monzo outage, which took place in July 2019, the alert was reported at 13:14 and the incident was declared at 13:15, one minute later. The time to diagnose the issue (maybe another MTTxx item) was 63 minutes, just over an hour. At this point, availability is arguably down to 99.988%. The actual fix was completed at 113 minutes, or 99.978%. That's the number if nothing else happens this year.
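Those percentages come from simple arithmetic over the minutes in a year. Here's a minimal sketch, assuming a 365-day year and no other downtime:

```python
# Availability after an outage, assuming a 365-day year and no other downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability_pct(downtime_minutes: float) -> float:
    """Percentage of the year the system was available."""
    return 100.0 * (MINUTES_PER_YEAR - downtime_minutes) / MINUTES_PER_YEAR

print(round(availability_pct(63), 4))   # after the 63-minute diagnosis
print(round(availability_pct(113), 4))  # after the full 113-minute fix
```

The first call lands at roughly 99.988% and the second at roughly 99.9785%, matching the figures above.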

If you're attempting to get to 5 9s of reliability, you get less than 6 minutes of downtime a year. Can you figure out what's wrong in 6 minutes, much less fix it? That's a difficult task. I think 4 9s, giving you 52-ish minutes of downtime, is realistic, but very hard. Most of us can likely handle 3 9s, which allows for roughly 8 hours and 46 minutes of downtime a year. While I've exceeded that before, it's been rare.
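As a sanity check on those budgets, the downtime allowance for N nines is just the unavailable fraction of the year. A quick sketch, again assuming a 365-day year:

```python
# Yearly downtime budget for N nines of availability (365-day year assumed).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(nines: int) -> float:
    """Allowed downtime per year for an availability of 1 - 10**-nines."""
    return MINUTES_PER_YEAR * 10 ** -nines

for n in (3, 4, 5):
    print(f"{n} nines: {downtime_budget_minutes(n):.1f} minutes/year")
# 3 nines: 525.6 minutes/year (about 8 hours 46 minutes)
# 4 nines: 52.6 minutes/year
# 5 nines: 5.3 minutes/year
```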

We have a lot of HA (high availability) options in SQL Server, and there are many successful implementations that achieve high levels of availability for the database. The network and the application are another story, but I think the quality of those areas has increased over the years as well. Doing HA well is hard, and if you aren’t 100% sure of what you’re doing, or your system is very valuable, you might engage a consultant, like Allan Hirt, to ensure that you’ve configured things well. SQL Server runs well in HA configurations, but getting it set up can be more difficult than you expect.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.
