I remember reading about AI as a college student, and I always wondered if we'd get computing algorithms that might actually make better decisions than informed humans. As my tech career grew, I saw advancements, like Deep Blue, that showed we could build systems capable of beating humans at their own games. The recent success of AlphaGo makes me wonder what the next challenge will be.
Certainly machine learning (ML) and AI-like systems are becoming quite popular in business these days. More and more companies are trying to find ways to use these computer science tools to improve their organizations' capabilities. Whether these techniques will continue to be popular remains to be seen, but in the short term, all the efforts of companies like Microsoft and Amazon will get many companies to try to build intelligent systems.
There is a lot of data being used to fuel these AI and ML applications, so it's a natural fit that many data professionals will want to be involved. It's a complex area, where lots of data is used to conduct lots and lots of experiments. That type of work may not suit many of you, but give it a try on one of these platforms and see what you think. Perhaps this is work you will enjoy. However, even if you enjoy the work and become good at it, there's a deeper issue that may limit the use of these systems.
We don't know how most of these applications actually work. That's because we haven't programmed them. In some sense, many of these systems learn from data, with a little guidance, but there isn't necessarily a debug log that might explain their actions. That is perhaps the dark secret of AI: we don't really understand what's going on inside.
The article linked above looks at autonomous cars, which a number of companies are researching and building. One of the issues with the complex systems that run a car is that we don't necessarily know how they work. This might not bother technical people, but it certainly does bother many others. When there is a problem, and there will be a problem since there are always failures, how do we determine what went wrong and make it better? Add more data and retrain the system? I'm not sure that is a solution most people will accept.
Ultimately I think some of the research and work on AI needs to focus on allowing the model to output a set of data on what inputs are being weighed and how they impact the flow of information that leads to a decision. I know this is a tremendous amount of data, but it is probably the type of analysis that is needed for accountability in these systems. After all, it's not just cars where this matters. Imagine that you have an AI system that cuts off all AC in the summer, or decides not to order enough resources for a busy time period. How do we explain to management that "the system just decided" on that course of action? I really think we need better tools for analyzing these models.
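To make the idea concrete, here is a minimal sketch of what "output what data is being weighed" could look like for the simplest possible model, a linear scorer. All the feature names, weights, and readings below are hypothetical, invented for illustration; real ML models are far more complex, which is exactly why this kind of per-input accounting is hard to produce for them.

```python
# Hypothetical example: a linear decision model that emits, alongside its
# yes/no decision, the contribution of each input feature to the score.
# Feature names, weights, and readings are made up for illustration.

def explain_decision(weights, features, threshold=0.0):
    """Return (decision, contributions) for a simple linear model."""
    # Each feature's contribution is its weight times its observed value.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score >= threshold, contributions

# Imaginary building-automation inputs for the "cut off the AC" scenario.
weights = {"outside_temp": 0.8, "occupancy": -1.2, "energy_price": 0.5}
reading = {"outside_temp": 1.5, "occupancy": 0.2, "energy_price": 2.0}

decision, why = explain_decision(weights, reading)
# "why" now shows which inputs pushed the decision and by how much,
# e.g. energy_price contributed 1.0 while occupancy pulled against it.
```

For a linear model this audit trail is trivial; for a deep network with millions of weights, producing an equally honest account of a single decision is an open research problem, which is the point of the paragraph above.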