Learning more Kubernetes

I’ve been slowly working my way through the 50 days of Kubernetes (K8s). As you might have guessed if you remember my first post, this has been more than 30 days. Life and work get in the way, but I’m working through the series of posts and videos.

A couple of interesting things I’ve watched lately:

Serverless Kubernetes and Serverless on Kubernetes – I worry about the idea of containers needing to spin up to meet serverless workloads, but maybe that worry is unfounded. It's only a few milliseconds for these containers to spin up, especially for small functional environments like Python or .NET. If there are issues, you can schedule more pods to be available, or use more ReplicaSets to spread the load. This makes sense to me, and containers are often a perfect fit for this on Kubernetes, since you want lots of small units of work behind one large endpoint for people to hit. I suspect this is how the FaaS implementations work on Azure and AWS.
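The "schedule more pods to spread the load" idea could be sketched with a HorizontalPodAutoscaler. This is my own illustrative manifest, not something from the video, and the Deployment name is hypothetical:

```yaml
# Hypothetical HPA that scales a function-runtime Deployment
# between 2 and 20 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fn-runtime-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fn-runtime          # hypothetical Deployment name
  minReplicas: 2              # keep a couple of warm pods to soften cold starts
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Keeping `minReplicas` above zero is one way to sidestep the spin-up worry: a few pods stay warm while the autoscaler adds more under load.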

This video also talks about the need for a virtual kubelet to allow the API to get things ready without having underlying VMs provisioned, a prerequisite for scheduling items in the cloud. This is the serverless Kubernetes concept. I'm not sure I completely understand it, but I get the idea. We don't have hardware provisioned, we're running Kubernetes, and we want to push some of our load into the cloud. Since we don't have a node assigned to us, we schedule on a virtual node that the cloud provider will actually spin up when we hit it.
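As a sketch of what scheduling onto that virtual node might look like, here's a pod spec that opts in to a virtual-kubelet node. The labels and tolerations vary by provider; these are the ones the virtual-kubelet project commonly documents, and the pod/image names are hypothetical:

```yaml
# Hypothetical pod that the scheduler may place on a
# cloud-provider-backed virtual node rather than a real VM.
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker            # hypothetical name
spec:
  containers:
    - name: worker
      image: myrepo/worker:latest   # hypothetical image
  nodeSelector:
    type: virtual-kubelet       # target the virtual node's label
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists          # tolerate the virtual node's taint
```

The taint/toleration is the interesting bit: ordinary pods are kept off the virtual node by its taint, and only pods that explicitly tolerate it get pushed to the cloud.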

Fascinating and not something I’d have thought of before I saw this video.

How the Kubernetes Scheduler Works – This is another video by Brendan Burns, and it's interesting. Scheduling workloads and pods in different places is important, and flexibility while meeting demands matters. The idea of hard (required) and soft (optional or preferred) constraints for choosing where to run pods is fascinating. Most of us might care about these since we may want some assurance that our instances are spread around, or that they get minimum resources.

For example, we might have a hard constraint that our pod (container) needs 128GB of RAM. This limits the nodes that this pod can run on, and the scheduler takes this hard constraint into its decision-making process.
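In pod terms, a memory requirement like that is expressed as a resource request, which the scheduler treats as a hard constraint. A minimal sketch (names and image are hypothetical):

```yaml
# Hypothetical pod with a hard memory requirement: the scheduler
# will only consider nodes with at least 128GiB of allocatable RAM.
apiVersion: v1
kind: Pod
metadata:
  name: big-memory-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: myrepo/app:latest  # hypothetical image
      resources:
        requests:
          memory: "128Gi"       # hard constraint on node selection
```

If no node can satisfy the request, the pod simply stays Pending rather than being squeezed onto an undersized node.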

For soft constraints, we might prefer that a reporting instance not run on the same node as an OLTP instance, but if no other nodes are available, perhaps we'd live with that. That's a soft constraint: the scheduler tries to honor it, but it isn't prevented from scheduling on those nodes.
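That preference maps onto pod anti-affinity with the "preferred" (soft) form. A sketch, assuming the OLTP pods carry a `workload: oltp` label (the names, labels, and image here are all hypothetical):

```yaml
# Hypothetical reporting pod that prefers, but does not require,
# a node with no OLTP pods on it.
apiVersion: v1
kind: Pod
metadata:
  name: reporting               # hypothetical name
  labels:
    workload: reporting
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                workload: oltp  # assumes OLTP pods carry this label
            topologyKey: kubernetes.io/hostname
  containers:
    - name: report
      image: myrepo/report:latest   # hypothetical image
```

Swapping `preferred...` for `requiredDuringSchedulingIgnoredDuringExecution` would turn this same rule into a hard constraint.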

About way0utwest

Editor, SQLServerCentral

2 Responses to Learning more Kubernetes

  1. What would be stored in a 128GB pod? That seems like a workload that should be somewhere other than a container.


  2. way0utwest says:

    It’s not a 128GB container. It’s a RAM requirement. Lots of things, like a database server, might want that.

