I’ve run across a few customers who are adopting DevOps processes for the database. This makes sense, as I work with Redgate customers, many of whom are doing this. As they adopt a Compliant Database DevOps process, there is often a push for database developers to react to feature requests quickly, at the same rate as application developers, and to push out new code regularly to support changes. This new code often includes schema changes to support new features and functions, which frequently means table changes, because the data elements need to be stored somewhere. This, of course, means the data model changes.
In many cases, trying to release a new feature today or tomorrow, or even next week, means making quick decisions. As an example, if we capture lots of cost and price information in an application and are enhancing it to add currency values, we’ll be adding fields to any table that stores financial data. How we do this could vary. It’s easy to add a currency lookup field to all tables, and that might be what many application developers want to do. If they’re using an ORM, they might just add this as a property to their object definition, which generates a series of ALTER TABLE statements to add the related fields. Certainly, a database developer could just generate those same scripts.
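As a rough sketch of what that quick fix looks like, here is the kind of script an ORM migration might generate. The table names here are hypothetical, purely for illustration:

```sql
-- Quick-fix approach: bolt a currency code column onto every table
-- that stores money values. Hypothetical table names.
ALTER TABLE dbo.Invoices ADD CurrencyCode CHAR(3) NULL;
ALTER TABLE dbo.OrderLines ADD CurrencyCode CHAR(3) NULL;
ALTER TABLE dbo.Payments ADD CurrencyCode CHAR(3) NULL;
```

This ships fast, but note that nothing constrains the values stored in these columns, and nothing captures when a rate applied.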
Whether this is the best choice, or whether more normalized structures are needed, is unclear. That depends on the problem domain: whether date stamps are needed to capture currency differentials at particular points in time, or whether additional foreign keys are needed to ensure referential integrity. Many junior developers and DBAs might not think about the implications for the data model. As time passes, this could mean additional technical debt to deal with, limiting future enhancements. It could also mean fundamental flaws in how financial data is calculated, potentially impacting revenue if the data model doesn’t support accurate calculations.
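To make the contrast concrete, here is one possible more deliberate design, again with hypothetical names: a currency lookup table enforced by foreign keys, plus date-stamped exchange rates so historical conversions stay accurate. This is a sketch of the idea, not a definitive model:

```sql
-- Normalized alternative: constrain currency values and record
-- rates with effective dates for point-in-time conversions.
CREATE TABLE dbo.Currency (
    CurrencyCode CHAR(3) NOT NULL PRIMARY KEY,  -- e.g. ISO 4217 codes
    CurrencyName NVARCHAR(50) NOT NULL
);

CREATE TABLE dbo.ExchangeRate (
    CurrencyCode  CHAR(3) NOT NULL
        REFERENCES dbo.Currency (CurrencyCode),
    EffectiveDate DATE NOT NULL,
    RateToBase    DECIMAL(19, 6) NOT NULL,
    PRIMARY KEY (CurrencyCode, EffectiveDate)
);

-- Each financial table now gets referential integrity on its currency.
ALTER TABLE dbo.Invoices
    ADD CurrencyCode CHAR(3) NULL
        REFERENCES dbo.Currency (CurrencyCode);
```

The extra hour of design buys validated currency values and an audit trail of rates, which the quick ALTER TABLE approach cannot provide.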
Moving to DevOps doesn’t necessarily mean moving fast. It can, but it’s really about making focused, small changes at the rate that matters for your business. It also means that your data model and the design of your data store structures become more important than ever. While we can make decisions quickly, this takes experience and an understanding of the business impacts, as well as the potential pitfalls of different types of structures. DevOps asks us to give feedback about potential problems up and down the software development pipeline, which should include a data architect or data modeler. Someone with experience here can help consider future implications and even provide flexible designs that can adapt in situations where we have incomplete knowledge.
Since we build our software on the data, we need to ensure we properly capture that data in a way that doesn’t create too much technical debt. There are many, many stories of organizations that struggle to grow their applications over time, often because of very poor data models. Many of these issues could have been avoided by consulting with senior developers, DBAs, and data architects/modelers for an hour before making a fundamental change, even if that means keeping a consultant on retainer. The investment in reviewing and understanding the data model can pay off tremendously in the future, especially as data processing is often one of the larger costs of running an application, whether that is an RDBMS like SQL Server or an alternative structure like CosmosDB. A little investment in modeling early can prevent the need to over-provision resources later.