Tech Debt Perils

My wife and I have been thinking about some new audio equipment. We’ve been a little unhappy with our Bose soundbar because of the software flakiness and sporadic network connectivity issues. In looking around, I saw a Sonos product, but after reading a bit about the company’s recent history, I decided to look elsewhere.

Sidebar: if any of you have recommendations that aren’t high-end $$$$ audio, let me know.

I saw this article about some of the problems Sonos has had, and it resonated with me. I’ve been in a place where I worked on software that had a lot of technical debt and needed to be changed and improved to help grow the company. Management was pressing everyone to get the software ready quickly, without much concern for the customer experience or the quality.

That seems to be what happened with Sonos: they released new products and new software, and the software was missing functionality their customers needed. There were bug reports, bad press, lower sales, and canceled raises and bonuses. Those last items, to me, ought to fall squarely on executives and managers, but I’m sure that’s not the case. Management rarely takes the blame, but ultimately they are responsible for hiring, training, and steering the employees, and for deciding to launch.

The story reads like a summary of The Phoenix Project. Technical debt, poor project management, and pressure to increase sales combine to create a disastrous launch. It’s always hard to know where to assign blame, but clearly, good software engineering wasn’t a priority, nor was reducing technical debt. I know it can be hard to balance the need to alter software with the need to keep it maintainable, but in this case, they made poor decisions.

The article says there was yelling and screaming in meetings, and that developers were worried about pushing back on senior leadership over the timelines and demands. Some of us might have been in those situations, and my experience has been that when management doesn’t value software engineering, any individual developer who complains is vulnerable to a layoff or termination. If management doesn’t value software, they don’t value developers, and they think developers can be easily replaced. I’ve seen quite a few companies with this attitude start to decline quickly when developers aren’t valued. I also see constant job openings at these organizations, with employees constantly looking for other opportunities. People stay only long enough to find another job.

Software is complex. It’s going to have some bugs. There are always tough decisions about which things to work on now and which to delay. I hear those conversations, and I find myself trying to balance the needs of sales and engineering. We need both, and we also need to ensure we can change directions in the future. Paying down technical debt should be a regular occurrence to ensure the software is maintainable and adaptable. I know that when we look to quickly take advantage of new opportunities, this can mean we’re adding new technical debt. Walking that line is the key to success.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

Denver Dev Day Oct 2024 Slides

Here are the slides from my talk today: CI in Azure DevOps

If you have questions, please feel free to contact me (top menu above).

Should I Learn PostgreSQL

I got asked this question recently:

I constantly see PostgreSQL on Microsoft slides, email, ads, etc. My MCADAA exam started with an entire section on it. I’m trying to determine if it’s worth focused study and training time. Roughly how much of the Azure cloud database space does PostgreSQL occupy? Should I include it in my personal training program?

It’s a good question, though I am assuming that MCADAA is the Microsoft Certified: Azure Database Administrator Associate cert. The page describing that exam only mentions Azure SQL (SQL Server), not PostgreSQL, so while I think Azure Database for PostgreSQL is good to learn, the cert preparation materials need an update if that topic is on the exam.

In any case, should you learn PostgreSQL?

I’ll give you the DBA answer: it depends.

Ask yourself some questions:

  • Does your org use PostgreSQL, or is it planning to do so?
  • Are you going to stick with your company for a few more years?
  • Are there other things you should be learning in the area where you work now?
  • Are there things you should become more skilled at that your company values?
  • Do you know what the opportunities are for people who know PostgreSQL well?

Depending on the answers, I may or may not recommend you spend time there. If there are other things you could learn that would serve you better, either in the job you have or in one you might want, then focus there. Whether your company does (or doesn’t) value PostgreSQL influences your choice as well.

PostgreSQL is growing fast. DB-Engines shows growth across a few platforms, and PostgreSQL is doing well.

(Chart: DB-Engines ranking trends, October 2024)

The Stack Overflow developer survey shows similar results:

(Chart: Stack Overflow developer survey, database usage)

However, those are general results. Your specific situation is different. Think it through, and put the questions above to lots of people.

The Vast Expansions of Hardware

At the Small Data conference recently, one of the talks looked at hardware advances. It was interesting to see a data perspective on hardware changes, as many of us only worry about the results of hardware: can I get my data quickly? In or out, most of us are more often worried about performance than specs. However, today I thought it might be fun to look at a few changes and numbers to get an idea of how our hardware has evolved in the march toward dealing with more and more data. Big data, anyone?

In thinking about disks, I saw a chart that tracked the changes from HDDs (hard disk drives) to SSDs (solid-state drives) to NVMe (Non-Volatile Memory Express) drives. Read speeds go through that list from 80MB/s to 200MB/s to 5,000+MB/s. That’s a dramatic change, and not one found only in high-end arrays; there are off-the-shelf drives you can put in a desktop that read this fast. Compare that to some of the early IBM drives, which read at around 8,800 bytes per second. Within the span of our careers, disk read speed has grown by several orders of magnitude.

Write speed hasn’t grown as much, but capacity has. My early career work used HDDs with 100MB of capacity. These days we can get TB-range storage on all of these media, with many laptops having 0.5TB or more. Desktops often have plenty more; my current workstation at home has 3.5TB of storage. Contrast that with the early IBM drive mentioned above, which held 5MB. These days people regularly demo queries against hundreds of TB, or even 1PB, of data in a database.

Many of us just expect the network to work well. In fact, I assume many of us won’t complain to the network people, since they are never at fault for performance issues. I started my career with ARCNET connections between machines, which ran at 2.5Mbps. We were moving from those, and from 4Mbps Token Ring, to 10Mbps Ethernet over Thicknet, Thinnet, and eventually RJ-45 connections. When we got 100Mbps bridges, I thought we were cutting edge for our SQLServerCentral servers. If we look back 20 years, 1Gbps was more the standard then, but today we see speeds up to 800Gbps with InfiniBand. While I don’t know of many data centers doing that, there are plenty running in the 50Gbps range.
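
If you want to put numbers on that growth, here’s a rough back-of-the-envelope check in Python. The figures are just the approximate ones quoted above, not careful benchmarks, so treat the output as an order-of-magnitude sketch:

    import math

    # Approximate (old, new) figures quoted above; units only need to
    # match within each pair (bytes/sec, bytes, or bits/sec).
    growth = {
        "disk read, early IBM 8,800 B/s -> NVMe 5,000 MB/s": (8800, 5000e6),
        "disk read, HDD 80 MB/s -> NVMe 5,000 MB/s": (80e6, 5000e6),
        "disk capacity, 100 MB -> 3.5 TB": (100e6, 3.5e12),
        "network, ARCNET 2.5 Mbps -> InfiniBand 800 Gbps": (2.5e6, 800e9),
    }

    for label, (old, new) in growth.items():
        ratio = new / old
        print(f"{label}: {ratio:,.0f}x (~{math.log10(ratio):.1f} orders of magnitude)")

Even with fuzzy numbers, that’s nearly six orders of magnitude in disk read speed since those early IBM drives, and more than five in network speed.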

If we think about CPUs, I started my career on a 386 machine running at 25MHz. I helped upgrade some 286 machines, but most of our servers were 486-class machines at 25MHz. I still remember being excited about the early Pentium processors for a large system. There were many Pentium variants and later families of processors, but back in the 2000s, almost all machines were single-core. The first multi-core chips were released and slowly became more common over time. These days, many new laptops have multiple cores, including the new one I got, which has 12 cores. If you want, you can purchase an AMD EPYC 9004 processor with 96 cores. That’s on one chip. Since most servers can take more than one CPU, you can have hundreds of cores running if you want. If you want to get really crazy, the NVIDIA Blackwell has thousands of cores for GPU-based AI calculations.

Memory has likewise grown, though it seems most servers have much less than a TB of RAM, which is much lower growth over time than storage and networking have seen. Maybe because of those two changes, memory has had less reason to grow into common multi-TB capacities in our systems. In fact, for those of you reading this, what are the common memory sizes in your servers? I see many VMs and other machines set up with somewhere between 128GB and 1TB of memory, even as their data sizes have grown much, much larger. However, there are plenty that don’t have anything near 128GB.

That was one of the interesting things I realized about the Small Data conference, and one reason the event was created. Most of our data sets, especially the usable ones, and most of our queries can run on a laptop, if not a mobile device. The focus on big data seems overblown, especially as most of our companies don’t have anything approaching 100TB, much less 1PB. If you need that scale, the hardware is out there for you, but some of these amazing advances are lost on me, since the common, average capabilities of the majority of systems could handle the majority of my needs.

With some well-written queries.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.
