AI is everywhere, and if you spend any amount of time looking for answers to your coding challenges on the Internet, you’ve likely encountered a lot of poor, average, good, bad, amazing, and just-helpful-enough AI content. For a while I was avoiding the AI summary from Google because the quality seemed slightly off, but lately it’s gotten good enough that I tend to use it to decide which links to click on in the results. The summary helps me better understand the context Google sees in my search query.
I ran across a post on coding documentation and how helpful these docs are in onboarding, code reviews, and more. The teams that worked smoothly together often had good docs that helped them function as a cohesive group, at least to some extent. Over time, though, teams start to lean on tooling instead of docs, and they lose some of that cohesiveness. I agree with the piece that this is part of the reason many teams stop really functioning as teams.
In the age of AI, this becomes more important. These AI agents are smart, but gullible and prone to making inconsistent decisions if you let them. In the piece, there’s a great quote: “When your codebase follows consistent patterns, AI assistants become force multipliers. When it doesn’t, they become chaos amplifiers.” Or as we data people know it, garbage in, garbage out.
The lack of documentation means a lack of guidance for both humans and AI agents. It’s easy to say AI makes crazy decisions when we feed it our code, but humans do the same thing. I can’t even begin to count the number of weird structure and naming decisions I’ve seen from other humans when I didn’t provide them guidance. It happens even when I do give them standards, but at least then the docs give us something concrete to point to in a conversation about attention to detail.
I saw Brent’s predictions for AI database development in 2026, and part of the challenge in getting AI to be helpful is the lack of docs many of us have on schemas. I can’t tell you how often I’ve been asked if Redgate has tools that can doc a schema and decipher what data is being stored. Microsoft spent a billion+ on Purview, and its classification results are a mixed bag. It’s a hard problem, and a lot of the problem is us. We don’t make good decisions about what to name columns or tables, we’re inconsistent, and we reuse columns as our requirements change, subtly altering the data being stored. Usually this means overloading two similar, but different, types of data into one column. Sometimes it’s just storing whatever we want in a column (or allowing a user to do so).
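To make that concrete, here’s a minimal sketch of the kind of overloading I mean. The Orders table and its Status column are my invention for illustration, not from any real system:

CREATE TABLE Orders
(
    OrderID INT NOT NULL PRIMARY KEY,
    Status  VARCHAR(20) NOT NULL  -- originally held the order state
);

-- Year one: the column holds order states, as designed.
INSERT INTO Orders (OrderID, Status) VALUES (1, 'Pending'), (2, 'Shipped');

-- Year three: carrier codes start landing in the same column because
-- adding a new column felt like too much work at the time.
INSERT INTO Orders (OrderID, Status) VALUES (3, 'FEDEX-2DAY');

Nothing in the schema tells a human, or an AI agent, that two different kinds of data now live in Status.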
I’d like to think that the growth of AI will result in a little more attention being paid to documenting our data stores. I’d hope this results in at least using the extended properties or COMMENT capabilities of the different platforms. I think having better ER diagrams might be a second step, though certainly with some AI assistance to help keep things in sync as we evolve our schemas.
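On SQL Server, that could be as simple as attaching a description to a column with sp_addextendedproperty; PostgreSQL has COMMENT ON for the same job. Continuing the hypothetical Orders example above (MS_Description is the property name most documentation tools look for):

-- SQL Server: record what the Status column actually holds.
EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Order state; some rows also hold carrier codes (cleanup pending)',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Orders',
    @level2type = N'COLUMN', @level2name = N'Status';

-- PostgreSQL equivalent:
-- COMMENT ON COLUMN Orders.Status IS 'Order state; some rows also hold carrier codes';

That one line of metadata is exactly the sort of thing an AI assistant could read when generating code against the schema, and could plausibly draft for a human to review.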
Documentation is tedious work, and it’s not something humans are good at updating, or even want to update, over time. However, if an AI agent were around to do the work and then let a human check the results, I suspect we might do a better job of keeping things current. To me, that’s another place where the AI revolution might benefit us all.
Steve Jones
Listen to the podcast at Libsyn, Spotify, or iTunes.
Note: podcasts are only available for a limited time online.

