Minimally Viable Security

Security has been a constant concern for IT professionals over the years. Many of us are trying to implement better security controls, yet at the same time we try to avoid anything that slows us down. Security clearly hasn't been a big enough concern, as we've had more than our share of SQL Injection issues. These often come about from poor practices, a lack of education, and too many people failing to adopt better habits over time.

We've also had no shortage of lost backups, open cloud buckets, and more over the years. While security (or cybersecurity) is listed as a concern by tech management, they are quick to avoid anything that slows the development or deployment of software. While it is easier to get time for patching these days, it's still not easy. Plenty of organizations direct their resources to tasks other than patching, upgrading systems, or training developers.

One of the ideas in modern software development is often to build an MVP, a minimum viable product, where we can test ideas and determine if our solution is worth pursuing. This could be a greenfield application or a feature enhancement to an existing system. In the age of GenAI, vibe-coding, and more, it might be MCP or agent-based AI additions to software that are developed and enhanced rapidly, incorporating feedback from customers.

If we allow a minimal set of features to test an idea, shouldn't we define a minimal level of security as well? That's the thrust of a blog post from Forrester that discusses how we might approach protecting our digital systems in 2026. There ought to be a minimum set of controls, testing, and more that ensures the software we build doesn't cost more in security issues than it generates in revenue. This might be especially important in the age of GenAI coding, where we can have less experienced engineers, or even helpful agents, committing lots of code they expect to deploy to production.

Education is important here to ensure everyone is aware of your MVS (minimally viable security) before they get too far along. It might be especially important in helping others guide their GenAI tools so that security is considered early on. Adding security requirements as a standard for your tools, such as in a CLAUDE.md file, is a best practice that should be required for all future software development. You never know who might start adding AI coding tools or agents to your codebase, so be prepared now.
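To make that concrete, here is a rough sketch of what a security section in a CLAUDE.md (or similar tool-instruction file) might contain. The specific rules below are illustrative examples, not a standard; your own MVS would define the actual list.

```markdown
## Security requirements (apply to all generated code)

- Never build SQL by concatenating user input; use parameterized queries or stored procedures.
- Do not hard-code credentials, connection strings, or API keys; read them from configuration or a secrets store.
- Validate and length-limit all external input before it reaches the database.
- Default any new storage (buckets, file shares, backup locations) to private; call out public access explicitly for review.
- List any new third-party dependency in the change description so it can be reviewed before merge.
```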

Education isn't enough. It's too easy for someone to forget what they learned, and it's easy to assume people have learned something when they haven't. To me, part of an MVS is having a framework or platform that can test all code and verify that your systems are built and deployed securely. This includes third-party software, especially SaaS products, where vendors might be tempted to sell you their own MVP without any MVS.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.
