That’s the question this week: What’s your downtime?
I thought about this as I read a piece on Azure's downtime being greater than its rivals' in 2014. You do need to provide an email address to read the article, but essentially it builds on tracking data from CloudHarmony to show that Azure had the most downtime for the year, at about 54 hours. In Europe, the figures were better: 5.97 hours for compute and 1.31 hours for storage, so the European Azure cloud outperformed the other regions.
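Downtime hours are easier to compare when converted to an availability percentage. A minimal sketch of that arithmetic, using the figures above (the helper name and the 365-day year are my own assumptions, not from the article):

```python
def availability(downtime_hours, period_hours=24 * 365):
    """Percentage of the period the service was up,
    given total downtime over that period."""
    return 100.0 * (1 - downtime_hours / period_hours)

# Azure's ~54 hours of downtime in 2014:
print(round(availability(54), 2))    # -> 99.38
# The European compute figure of 5.97 hours:
print(round(availability(5.97), 2))  # -> 99.93
```

So 54 hours in a year is roughly "two nines" of availability, while the European figure is close to "three nines".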
That’s pretty good. Certainly individual machines went down, and services were unavailable for short periods during failover, but keeping a platform to around 5 hours of downtime a year is good. I’m not sure many of the companies I’ve worked for have managed that, though to be fair, it’s mostly been patches from Microsoft that caused downtime for Windows machines.
However, let’s look at your database instances. Do you know what your downtime is? I’d encourage you to track it, and be sure that you report on it, or at least have the data in case the boss asks. I don’t know what our total SSC downtime is, but I can tell you that our cluster rebooted in Aug 2015 and Mar 2016, brief outages for some patching. In the past, I’ve typically seen our database instances run the better part of a year, usually only coming down for a Service Pack.
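If you want to track this yourself, the core of it is just summing outage windows. A minimal sketch, where the specific dates and durations are hypothetical placeholders (the editorial only mentions that reboots happened in Aug 2015 and Mar 2016, not when or for how long); in practice you'd pull these windows from your monitoring system:

```python
from datetime import datetime, timedelta

# Hypothetical outage log: (start, end) for each outage window.
outages = [
    (datetime(2015, 8, 15, 2, 0), datetime(2015, 8, 15, 2, 20)),
    (datetime(2016, 3, 12, 3, 0), datetime(2016, 3, 12, 3, 15)),
]

# Sum the window durations into a single timedelta.
total_down = sum((end - start for start, end in outages), timedelta())
print(total_down)  # -> 0:35:00
```

Even a simple log like this, kept per instance, gives you a defensible number when someone asks what your downtime was last year.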
If you have monitoring in place, and you should, then I hope you have some idea of the downtime for your main databases. If you can share, see whether you might have set the high water mark for the SQLServerCentral community over the last year.
The Voice of the DBA Podcast