The Inefficiencies of Kubernetes

A report on cloud Kubernetes usage shows that these resources are under-utilized, over-provisioned, and costing many organizations more than necessary. Year over year, average CPU utilization declined from 13% to 10%, and memory utilization sits at only around 23%. Companies are over-provisioning their clusters, which is understandable. No one wants overloaded systems and users complaining about performance.

However, this is a similar tension to what we see with virtualization on-premises. Operations people want to leave plenty of CPU/RAM/IO headroom for systems to handle bursting or increasing workloads. Management wants to get all the use they can out of their investment and would prefer we provision systems as closely as possible to their expected workloads. Containers and orchestrators should allow a closer match, but only if there are workloads that burst enough to require additional containers and pods to be deployed. That does happen occasionally with memory, with a little over 5% of containers exceeding their memory limits, but that's not a significant amount.

Managing a Kubernetes cluster is a specialized skill and most organizations don’t have the skills or experience to do it well. My view is that if you want to use an orchestrator, you’re better off letting the cloud providers manage the infrastructure and scale up and down as needed. There are autoscaling technologies to help Operations staff better manage their capacity and costs, but this is an additional skill people need.

While I do think some companies are adopting cloud native technologies and rewriting their applications to run in containers and Kubernetes clusters, I find many more companies are hesitant to adopt a very complex technology on top of the complexity of teaching their developers to work within containers for their applications. Certainly in the Microsoft space, I don’t see a lot of database servers running in containers. Despite some of the advantages of upgrades and downgrades, the unfamiliarity with the ins and outs of containers leads most teams to continue to manage the database separately.

Resource matching to a workload is a problem we’ve had for years and Kubernetes doesn’t make this any easier to deal with. The cloud is supposed to help us better manage our resources, but there is a lot of knowledge needed to do this well. Add in the cost/performance issues in the cloud and it’s no wonder that many companies have overprovisioned their resources to ensure systems continue running. I don’t know whether lots of IT staffers are optimistic about their workload growth or scared of potential problems from overloaded systems, but unless organizations carefully manage all their resources, they are likely to continue to see larger cloud bills than they like.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

Posted in Editorial | Tagged , | 1 Comment

The End of SQL Server 2019

Well, not really the end. I doubt anyone running SQL Server 2019 is going to stop (or upgrade) just because mainstream support ended. Actually, I wonder how many of you know that SQL Server 2019 passed out of mainstream support on Feb 28, 2025. I do think the 6 or 7 of you running Big Data Clusters likely knew this was the end of any support.

I saw a report in the Register on this, which includes a survey of which versions are still running. This is from an IT asset firm and matches Brent Ozar’s Population report. 44% of you are running SQL Server 2019, which is the largest percentage. Since there’s an additional 32% of you running versions older than 2019, I’m sure that upgrading isn’t a priority.

It seems like just a couple of years ago that SQL Server 2019 was released. At the end of February Microsoft ended mainstream support for this version. There will still be security fixes released, but no more cumulative updates. The Register says if you don't upgrade, you might run into a bug and not get a fix (unless you buy extended support), but that's never worried me. If I haven't hit a bug 5 years in (or likely 3-4 years after my last upgrade), I'm not too worried. If I run into something, it's likely from new code and I'll just change the code to work around the issue.

I do expect to run a database platform for a decade, and I am glad that Microsoft continues to supply security patches for this period. While I certainly want every database firewalled, reducing the attack surface area of known vulnerabilities is good. I also find myself less concerned about the security of older versions. If a big security vulnerability were discovered in SQL Server 2017 tomorrow that also existed in previous versions and I had a 2012 server, I'd just prioritize an upgrade then.

Upgrades are hard, eat a lot of valuable time, and don’t necessarily provide many benefits. Most applications tend to use basic CRUD features and whatever was available at the time in that version. If I use a tally table to split strings in 2017, I’m unlikely to rewrite that code to use STRING_SPLIT with an ordinal if I upgrade to 2022. That certainly isn’t a selling point for me to upgrade. My boss knows that isn’t something we’d take advantage of in older code.
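As context for that point, the newer syntax I'm referring to is the ordinal flag on STRING_SPLIT, added in SQL Server 2022. A quick sketch (the input string here is just for illustration):

```sql
-- STRING_SPLIT with enable_ordinal = 1 (SQL Server 2022 and later).
-- The ordinal column gives the 1-based position of each element,
-- something older tally-table splitters had to compute themselves.
SELECT value, ordinal
FROM STRING_SPLIT('red,green,blue', ',', 1);
```

Handy if you're writing new code, but as I said, not a reason to rewrite a working tally-table splitter.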

I’m not a bleeding edge person, and I wouldn’t push for upgrades. If you want to stay somewhat current with versions and are running 2019, I’d be waiting to test my application on SQL Server 2025 at the end of the year or early 2026. If I were mandated to stay current, I’d still be doing that, not jumping to 2022 right now. However, I do recommend that everyone patch their systems with cumulative updates to ensure their security is up to date. There have been several security patches in the past few years that you should have applied and if you haven’t, this is a reminder to do so soon.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

Posted in Editorial | Tagged | 1 Comment

CHOOSE’ing a Beer: #SQLNewBlogger

We recently published an article on CHOOSE at SQL Server Central. I thought it was a good intro, but as someone noted in the comments, how do you use CHOOSE? Do you have to hard code choices?

This post shows you don’t.

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

A Scenario

I have a table that contains some data. In this case, about beer. I like beer, and this was a fun little demo. I’m not recreating the DDL because, well, you might like different beers.

2025-03_0145

In any case, this is simple to set up.

If I wanted to choose some data from this table based on an index, I could do something like this. This code returns the beername column when the index is 1 and the brewer column when it is 2. CHOOSE uses 1-based indexing.

DECLARE @i INT = 1;
SELECT
   CHOOSE (@i, beername, brewer)
FROM dbo.Beer AS b2;

This returns me the beers.

2025-03_0146

If I change the value to 2, I get brewers. I show both below.

2025-03_0147

How would I use this? Maybe a user is asking to edit either a home or shipping address. I can return the column data as index 1 or 2, linking the user's selection to the index. They choose home, we pass in 1. If we qualify the query with a WHERE clause to one customer, they get just their data to edit.
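As a sketch of that idea (the Customer table and its column names here are hypothetical, just for illustration):

```sql
DECLARE @AddressType INT = 1;   -- 1 = home, 2 = shipping, from the user's selection
DECLARE @CustomerID  INT = 42;  -- the one customer we limit the query to

SELECT
   CHOOSE(@AddressType, c.HomeAddress, c.ShippingAddress) AS AddressToEdit
FROM dbo.Customer AS c
WHERE c.CustomerID = @CustomerID;
```

The application maps the user's choice to the index and the WHERE clause keeps the result to that customer's row.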

I could even do something silly, like getting values from different places. For example, here I’ll use string_split on a value.

DECLARE @i INT = 2;
DECLARE @s VARCHAR(20) = 'Vodka,Tequila,Bourbon';

WITH a (value)
AS
(SELECT s.value FROM STRING_SPLIT(@s, ',', 1) AS s
  WHERE s.ordinal = 1
),
  b (value)
AS
(SELECT s.value FROM STRING_SPLIT(@s, ',', 1) AS s
  WHERE s.ordinal = 2
),
  c (value)
AS
(SELECT s.value FROM STRING_SPLIT(@s, ',', 1) AS s
  WHERE s.ordinal = 3
)
SELECT
   CHOOSE(@i, a.value, b.value, c.value)
  FROM a, b, c;
This is silly, but it does return an acceptable answer.

2025-03_0148

I don’t know that there are many places that I’d use CHOOSE, but as I play with it, I can see that it could be a handy tool at times with a little creativity.
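It may help to know that, per the Microsoft documentation, CHOOSE is essentially shorthand for a CASE expression that returns NULL when the index is out of range. The first query above, against the same dbo.Beer table from the demo, could also be written as:

```sql
DECLARE @i INT = 1;

-- Equivalent CASE expression; like CHOOSE, this returns NULL
-- if @i is anything other than 1 or 2.
SELECT
   CASE @i
      WHEN 1 THEN beername
      WHEN 2 THEN brewer
   END
FROM dbo.Beer AS b2;
```

CHOOSE just saves the typing when the choices line up neatly as a 1-based list.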

SQL New Blogger

This post took me about 15 minutes to write after I saw a comment. I set up a scenario and posted a reply, then took that code to structure this post. The STRING_SPLIT piece was the longest, as I had to futz with code, but I show some use of a new feature and how I might incorporate this into an application.

You could write your own creative blog on this, probably in 30 minutes or less. I bet you’d get asked about this in an interview as it’s kind of funny.

Posted in Blog | Tagged , , | Comments Off on CHOOSE’ing a Beer: #SQLNewBlogger

Monday Monitor Tips: Looking Back in Time

Often we find out about a problem reported by a customer after the incident has passed. This might be from a trouble ticket or even an email that we didn't see until some time had passed.

How can we look back at the activity of a server in the past? This post looks at how a DBA can time travel back to a situation that occurred in the past.

This is part of a series of posts on Redgate Monitor. Click to see the other posts.

Time Traveling

Let's imagine I get a ticket that said there was a problem at 2:15am from a user running a process. I didn't see it at the time, but when I receive it at 9:15am, I need to look back at what was happening.

If I pick a server in Redgate Monitor, I’ll see the view below. This is of the staging02 server on monitor.red-gate.com. By default, this shows me the last hour of activity on the server.

2025-03_0085

In the upper right corner, I can see the time frame selected on the left (below) and the amount of time. I've selected the drop down, and there are many other choices. I also see the metric time at the top, just in case I've started to mess with other values.

Note: there is a calendar control to the left that can go back to previous days if you don’t want to use the time duration drop down.

2025-03_0086

In this case, let's jump to the last 12 hours. If I select that, you can see my display changes a bit, zoomed out to show 12 hours instead of 1. The four charts below haven't changed, however.

2025-03_0087

Most of the top chart has a darker background, except for a portion at the far right, which has a white background. This white background part is the focus window, and it determines what the 4 graphs below show, as well as the query information and other data.

This is set to 1 hour, but I can expand it. If I drag the box on the left side of this further to the left, I can expand the amount of time shown. If you look below, I’ve expanded this to 7:37am as the start.

2025-03_0088

I can also slide this. I'll slide this to the left to cover the 2:00am-3:00am part of the graph. Now I see different views below in the four graphs.

2025-03_0089

In this case, I can now focus on the 2:00am issue. I see an annotation that there was a Flyway deployment at 2:00am. You can see the annotation zoomed in with the tooltip when I hover the mouse over this icon.

2025-03_0090

I can scroll down to the query area, and I see the top queries, of which there were just a few.

2025-03_0093

The top one has a lot of duration, and if I expand it, I can see the query history. Note there was a query plan change just after 2:00, when my deployment occurred. The duration went up and then started to slightly drop. I see another plan change at 2:40am, and if I were to look back at the top, I’d see a second deployment from Flyway at that time.

2025-03_0095

I don’t quite know what changed in the deployment, but I’d start looking here to see if this affected my query.

Summary

The focus window in the overview for an instance allows you to set the time frame in which you see data related to that instance. This lets you time travel back to look at the server as it existed in the past. The amount of time you can travel back depends on your data retention settings, which we’ll examine in another tip.

Hopefully this gives you a quick tip on how you can focus your efforts to a relevant period of time when you get an issue to review.

Redgate Monitor is a world class monitoring solution for your database estate. Download a trial today and see how it can help you manage your estate more efficiently.

Posted in Blog | Tagged , , , | 1 Comment