I found this article to be an interesting look at how we might add ethics to AI systems in one area. As the article points out, “… today there is no broadly accepted AI ethics framework, or means to enforce it. Clearly, ethical AI is a broad topic …”.
I'm glad someone is thinking about this (or rather, that many people are), but it's sad that we aren't really moving in a direction that creates a better system for us humans to work under, or be bound by. To be fair, this is a very difficult topic, and it's hard for any large group of people to agree on what should be done.
Network monitoring is a fairly narrow problem domain, at least compared to many others. The article notes places where AI can, and does, help the humans who work in computer networking. That being said, how does the AI handle ethical considerations? For example, can we ensure the AI handles data privacy appropriately? That might mean compliance with a regulation like GDPR. It might also mean not disclosing data inside a company to other systems or humans who shouldn't see the specifics of network traffic (passwords, credit card numbers, or other sensitive information).
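One concrete way a monitoring pipeline might approach this is to redact sensitive values before log data ever reaches an AI model or a human reviewer. Here's a minimal sketch of that idea; the patterns and placeholder labels are my own illustrative assumptions, not a complete catalog of what counts as sensitive:

```python
import re

# Hypothetical redaction pass applied to log lines before they reach
# an AI model or human reviewers. These two patterns are illustrative
# examples only: real pipelines would need a much broader catalog.
PATTERNS = {
    # 16-digit card-like numbers, with optional space/dash separators
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    # naive "password=..." or "password: ..." key/value pairs
    "password": re.compile(r"(password\s*[=:]\s*)\S+", re.IGNORECASE),
}

def redact(line: str) -> str:
    """Replace sensitive substrings with labeled placeholders."""
    line = PATTERNS["credit_card"].sub("[REDACTED-CARD]", line)
    line = PATTERNS["password"].sub(r"\1[REDACTED]", line)
    return line

print(redact("POST /login password=hunter2 from 10.0.0.5"))
# the password value is replaced before anyone downstream sees it
```

Even a simple pass like this changes the ethical picture: the AI can still learn traffic patterns without anyone, human or machine, retaining the secrets themselves.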
There are also other considerations. We already see bias in AI systems trained on previous human behavior, because humans are biased; will network AIs be similarly biased? Will they be less helpful for power users, who generate a wide variety of traffic? Those are often privileged users, who might benefit the most from helpful monitoring. Will an AI discriminate against a user when another human trains or influences it against them? A crude example might be a network admin who doesn't like women, enforces stricter rules against them, and influences the AI to do the same. How will the AI, or anyone else, detect this type of issue?
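Detecting that kind of issue usually starts with an audit: compare how often the AI takes restrictive actions against different groups of users. A minimal sketch of such a check follows; the record fields (`"group"`, `"restricted"`) are hypothetical names I've chosen for illustration, not from the article:

```python
from collections import Counter

# Hypothetical fairness audit: given AI enforcement decisions tagged
# with a user group attribute, compute the rate of restrictive actions
# per group. A large gap between groups is a signal to investigate,
# not proof of bias on its own.
def restriction_rates(decisions):
    totals, restricted = Counter(), Counter()
    for d in decisions:
        totals[d["group"]] += 1
        if d["restricted"]:
            restricted[d["group"]] += 1
    return {g: restricted[g] / totals[g] for g in totals}

decisions = [
    {"group": "A", "restricted": True},
    {"group": "A", "restricted": False},
    {"group": "B", "restricted": True},
    {"group": "B", "restricted": True},
]
print(restriction_rates(decisions))
```

This doesn't solve the problem, but it makes the admin-with-a-grudge scenario visible: if one group's restriction rate is consistently higher with no legitimate traffic-based explanation, someone has a reason to look closer.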
Maybe the most difficult thing for AI is corner cases: the ethical dilemmas that aren't easy to solve can confound an AI. Perhaps the most ethical choice there is to seek other counsel, or to let other people help make the decision. To be fair, this is hard for humans to do as well, but for some reason we seem to trust computers less. At least some of us do.
Ethics is a challenging issue. I find it difficult as a human, and my own inconsistency means I might react differently at different times or in different places. Translating that to AI systems, which are increasingly a part of our world, is going to be hard. I don't have answers, but I lean towards transparency, accountability from the humans in charge, and the ability to reverse, and apologize for, poor decisions.
Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.