Why do we reboot machines when something goes wrong?
I’m sure we’ve all done it, and I’d guess quite a few of you have found situations where a reboot seems to fix the issue, even though there isn’t an underlying root cause you can pinpoint. It’s a fairly accepted way of dealing with problems, but have you ever thought about why it solves some of them?
The main thing a reboot does is return the system to a known starting state. It’s why quite a few people complain about some modern laptops and mobile devices: they avoid restarts and try to sleep and wake instead. Most software expects to start from a clean slate, so a restart brings the machine back to a known good state.
Coincidentally, this is why databases are so hard for many people, especially software developers. Databases are state machines, which are inherently more complex than stateless ones. However, that’s not the thing I want to discuss.
If you think about the code you’ve written and the problems you’ve solved, which data types cause the most headaches? Which are the most difficult to deal with? There might be a variety of answers, but one of the most common is datetime data. The main reason? Calculations against it often aren’t deterministic, because the results depend on when they run. This data ages poorly, and it’s hard even to test. By the time we’ve restored data from production, our test data is invariably old. We can de-age it (shift it forward to make it newer), but testing logic based on what happened "yesterday" is still hard.
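One common way to de-age a restore is to shift every timestamp forward by the gap between the newest row and today, which preserves the relative spacing the tests depend on. A minimal sketch in Python; the row shape and the `order_date` column are hypothetical, not from any particular system:

```python
from datetime import date, timedelta

def de_age(rows, key="order_date"):
    """Shift every date forward so the newest one lands on today.

    rows: list of dicts, each holding a date under `key`.
    Preserves the relative spacing between rows, so logic that
    depends on "yesterday" or "the last 30 days" can still be tested.
    """
    newest = max(row[key] for row in rows)
    offset = date.today() - newest  # how stale the restore is
    return [{**row, key: row[key] + offset} for row in rows]

# Example: a backup restored from months ago.
orders = [
    {"id": 1, "order_date": date(2024, 1, 10)},
    {"id": 2, "order_date": date(2024, 1, 15)},
]
fresh = de_age(orders)
# The newest order is now dated today; the 5-day gap is preserved.
```

The same idea works in T-SQL with `DATEADD` and a `DATEDIFF` offset, but even then, logic tied to the real calendar (month-end, holidays) can still behave differently than it did in production.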
This has been on my mind as another modern technology has similar characteristics. AI LLMs are often not deterministic. The same prompt might not produce the same response, and like SQL Server execution plans, even small changes in the input can affect the output.
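Part of the reason is that LLMs sample each next token from a probability distribution rather than always picking the single most likely one. A toy sketch of that idea, with made-up probabilities (not any real model's API):

```python
import random

def sample_next_token(probs, temperature=1.0, seed=None):
    """Toy next-token sampler.

    probs: dict mapping candidate tokens to probabilities.
    Higher temperature flattens the distribution, so repeated
    calls with the same input diverge more often.
    """
    rng = random.Random(seed)
    # Re-weight probabilities by temperature before sampling.
    tokens = list(probs.keys())
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(tokens, weights=weights)[0]

probs = {"yes": 0.6, "no": 0.3, "maybe": 0.1}

# Same "prompt", no fixed seed: the answer can differ run to run.
answers = {sample_next_token(probs) for _ in range(20)}

# With a fixed seed, the choice is reproducible.
a = sample_next_token(probs, seed=42)
b = sample_next_token(probs, seed=42)
```

Real services add more sources of variation (batching, hardware, model updates), so even a fixed seed doesn't guarantee identical responses there.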
That can be maddening to many of us, as we often want a reproduction of a problem in order to solve it. I ask for one from clients, Microsoft asks for one when I send them an issue, and most software developers want to be able to reproduce a problem on their own machines. Without a repro, they often struggle to fix a bug.
In this new AI world, is determinism something most of us need to hold loosely? It’s a good question, as many of us struggle with AI when we don’t get the response we expect, or even a good one. Worse, we might get code of varying quality back from the same model. I’ve found that an experiment I conduct sometimes cannot be reproduced with any accuracy.
And sometimes it works the exact same way the second and third times I conduct the experiment.
Humans are not deterministic, either. Many of us know someone who is very reliable, and we can predict how they will react most of the time. "Most of the time," however, isn’t determinism. I guess in that sense humans are state machines as well, ones that are constantly evolving and different every day.
I find that success with modern AI LLMs requires me to accept some level of non-determinism and flow with it. I can’t expect the results to be perfect; I either massage the way I express the problem or give up. I’ve written this before, but I think learning when to give up on an LLM and just do the work yourself is a key skill for technologists.
Maybe for anyone using LLMs.
So, as you think about the future, are you prepared for one without determinism?
Steve Jones
Listen to the podcast at Libsyn, Spotify, or iTunes.
Note, podcasts are only available for a limited time online.