Learning More Kubernetes

I’ve been slowly working my way through the 50 days of Kubernetes (K8s) series. As you might have guessed if you remember my first post, it has taken me more than 30 days. Life and work get in the way, but I’m working through the series of posts and videos.

A couple of interesting things I’ve watched lately:

Serverless Kubernetes and Serverless on Kubernetes – I worry about containers needing to spin up to meet serverless workloads, but maybe that’s not a problem. It takes only a few milliseconds for these containers to start, especially for small functional environments like Python or .NET. If startup time is an issue, you can keep more pods scheduled and ready, or scale up your replica sets to spread the load. This makes sense to me, and containers are a natural fit for serverless on Kubernetes, since you want lots of small units of work behind one large endpoint for people to hit. I suspect this is how the FaaS implementations on Azure and AWS work.
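
To picture that shape, here’s a minimal sketch of many small function pods behind a single Service endpoint. This isn’t any particular FaaS product’s setup; the image name and labels are hypothetical:

```yaml
# Many small function pods, one large endpoint in front of them.
# The image name (example.com/fn-python) is hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fn-python
spec:
  replicas: 4                # keep several pods warm to absorb bursty serverless traffic
  selector:
    matchLabels:
      app: fn-python
  template:
    metadata:
      labels:
        app: fn-python
    spec:
      containers:
      - name: fn
        image: example.com/fn-python:latest   # hypothetical small function runtime
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: fn-python
spec:
  selector:
    app: fn-python           # the single endpoint load-balances across all the small pods
  ports:
  - port: 80
    targetPort: 8080
```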

This video also talks about the need for a virtual kubelet, which lets the Kubernetes API schedule work without underlying VMs being provisioned, a prerequisite for scheduling items in the cloud. This is the serverless Kubernetes concept. I’m not sure I completely understand it, but I get the idea. We don’t have hardware provisioned, we’re running Kubernetes, and we want to push some of our load into the cloud. Since we don’t have a node assigned to us, we schedule onto a virtual node that the cloud provider will actually spin up when we place work on it.
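
As a rough sketch of what scheduling onto a virtual node can look like: virtual kubelet providers register a tainted node so ordinary pods avoid it, and a pod opts in by tolerating the taint. The node name and taint key below follow the common virtual-kubelet convention, but they vary by provider, so treat them as assumptions:

```yaml
# A pod that opts in to running on a provider's virtual node.
# Node name and taint key are assumptions based on the virtual-kubelet convention.
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker
spec:
  containers:
  - name: worker
    image: example.com/worker:latest        # hypothetical image
  nodeSelector:
    kubernetes.io/hostname: virtual-kubelet # assumed name of the provider's virtual node
  tolerations:
  - key: virtual-kubelet.io/provider        # virtual nodes are tainted so normal pods avoid them
    operator: Exists
    effect: NoSchedule
```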

Fascinating and not something I’d have thought of before I saw this video.

How the Kubernetes Scheduler Works – Another video by Brendan Burns, and an interesting one. Scheduling workloads and pods in different places is important, and flexibility while meeting demands matters. The idea of hard (required) and soft (optional, or preference) constraints for choosing where to run pods is fascinating. Most of us should care about these, since we may want some assurance that instances are spread across nodes, or minimum resource guarantees for our instances.

For example, we might have a hard constraint that our pod (container) needs 128GB of RAM. This limits the nodes this pod can run on, and the scheduler takes the hard constraint into its decision-making process.
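
In Kubernetes terms, that hard constraint is a resource request. A minimal sketch, with a hypothetical image name:

```yaml
# The 128GB example as a hard constraint: a memory request the scheduler must satisfy.
apiVersion: v1
kind: Pod
metadata:
  name: big-memory-pod
spec:
  containers:
  - name: app
    image: example.com/app:latest   # hypothetical image
    resources:
      requests:
        memory: "128Gi"   # only nodes with 128GiB of allocatable memory free are candidates
```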

For soft constraints, we might prefer that a reporting instance not run on the same node as an OLTP instance, but if no other nodes are available, perhaps we’d live with that. That’s a soft constraint: the scheduler tries to honor it, but it isn’t bound by it and may still place the pod on one of those nodes.
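
Here’s a sketch of that preference expressed as preferred pod anti-affinity; the workload labels are illustrative, not from the video:

```yaml
# The reporting-vs-OLTP example as a soft constraint: the scheduler tries to avoid
# nodes already running pods labeled workload=oltp, but will use one if nothing else fits.
apiVersion: v1
kind: Pod
metadata:
  name: reporting
  labels:
    workload: reporting
spec:
  containers:
  - name: reports
    image: example.com/reports:latest   # hypothetical image
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                     # higher weight = stronger (but still soft) preference
        podAffinityTerm:
          labelSelector:
            matchLabels:
              workload: oltp
          topologyKey: kubernetes.io/hostname   # "same node" is the scope of the rule
```

Using `requiredDuringSchedulingIgnoredDuringExecution` instead would turn this into a hard constraint, and the pod would stay Pending rather than land next to an OLTP pod.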


Ransomware and DevOps

Ransomware.

A scary topic, and an attack that is apparently more common than I suspected. Before you go further, if you haven’t restored a database backup in the last month, stop and go verify your DR plan works. Overconfidence in untested recovery plans is one of the issues facing lots of governments and businesses. While this might not help your entire organization, at least you’ll have some confidence in your process and know that you can recover a database.

This is a great article from Ars Technica and worth reading: A tale of two cities: Why ransomware will just get worse. I’d recommend you read it and think about a few things. First, do you have insurance, because things (or substitute your own word here) happen? Second, have you really tested a DR plan for some sort of software issue like this? You might think about a way to restore systems in an air-gapped manner that prevents them from re-triggering encryption from a remote source, or even a scenario where you reset dates/times to prevent timer-triggered issues. If you don’t think you need to, read this article as well.

Perhaps the bigger issue is this: are you actually patching and updating systems? Too many organizations can’t, don’t, or won’t. The first means you aren’t sourcing software properly: either you’re using vendors with poor practices or you have a poor development process. Organizations that don’t or won’t prioritize patching, especially for security issues, are likely the ones that will have problems as more criminals spread and use ransomware and other attacks for profit. Software and environments continue to grow more complex, which means the less you ensure your systems are patched, the more likely it is that a vulnerability exists in your environment.

DevOps and the cloud PaaS/SaaS platforms are attractive for a few reasons. One is that the platforms are constantly kept up to date, forcing you to move along with them. SaaS vendors know this and are constantly patching and updating their software to keep it running. DevOps asks that we always have the ability to release, that we can patch on demand and not only at certain intervals. This is something I try to emphasize when talking about DevOps: it isn’t necessarily about velocity, but about being able to release when you need to, whether that’s today or next month. This is especially important for security issues.

I have hoped for a long time that insurance would drive software to higher quality, and I still do. With the attacks and issues of ransomware, and whatever other techniques will be developed, I believe more companies will buy insurance. I then hope that insurers, out of their own selfish motives, will require frequent patching, regular vendor certification of new platform versions, and better development processes. If insurance drives DevOps, I’m all for it, but I’d prefer you decide to adopt it yourself and start making changes today.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.


Republish: Dig Out the Root Cause

Off to SQL Saturday Austin today, so you get Dig Out the Root Cause from the archives.

Have a margarita and fajitas tonight in celebration of the great food I hope to enjoy in Austin.


Great Access to Data with Google Fi

I’ve been a subscriber to the Google Fi (formerly Project Fi) network for a couple of years now. Like many of you, I depend on my mobile device for lots of things, usually email, but I’ve also been able to get work done at times, and certainly Slack and Skype are a regular part of my work.

I made the switch because I often travel to the UK and a few other countries, and getting access from the US carriers is a pain. I’ve switched SIM cards before (I never like doing that), carried a second phone that I can put a SIM in, and even tried to live on Wi-Fi. These are all compromises, and a hassle.

I used to have Verizon, and they had great coverage, but at $10/day, that adds up quickly. T-Mobile had free international data, but at low speed. My kids have Sprint and get the same low-speed connection, which is almost useless with today’s very chatty applications.

For the last couple of years I’ve used the Google network in my travels. I’ve gotten phone calls and texts in many places, tethered my computer, and been able to keep in touch in the same way I do every day, regardless of where I am. I’ve been in these countries with Google Fi:

  • United States
  • United Kingdom (England, Scotland, and N Ireland)
  • Ireland
  • France
  • Switzerland
  • Greece
  • Norway
  • Denmark
  • Canada
  • Australia
  • New Zealand
  • Hong Kong

In all of these countries, my phone connects and gives me 4G speeds on my same data plan. I pay a bit for international calls, but texts are included. What’s even better is that my data cost is capped at $60. I pay $10/GB, but if I go over $60 (and I did while in Australia), there’s no extra charge. If I use less, I’m charged on a pro-rated basis.

If you decide to move, I’ve got a referral link. You’ll save some money and I’ll get a little credit. Referral: https://g.co/fi/r/KAF848
