Where We Need Better AI Disclosure and Responsibility

There are a lot of contract and gig jobs in the world today. This type of work used to be concentrated in programming and technology, but these days many kinds of jobs are commonly filled by contract workers. I like the flexibility of contract work, but because these positions offer less employment stability, these workers need to be better at saving and planning for the future. They are usually paid more, and they ought to use that premium to offset the risk of periods of unemployment.

One trend that I’ve seen taking place in some of these positions is the use of software and AI to determine, on an ongoing basis, whether a worker is doing an acceptable level of work. Amazon might be one of the highest-profile companies doing this, especially as it expands into the delivery business. It is using the power of computers, rather than traditional human managers, to manage an army of workers. That includes terminating them. There’s an article that talks about some of the experiences of these workers.

I don’t know how their system works, but I do know the frustration of trying to work with a company that doesn’t use humans for many tasks. If you’ve ever tried to contact Google, you know that it’s incredibly difficult to actually reach a human. Google seems to think that its automated systems can handle all situations. They might handle many things, but they do a poor job in plenty of cases, and there is little recourse to have a human intervene.

I do think that AI and ML can help companies better interact with the world in many cases, but these systems are looking for broad patterns. Maybe those patterns handle the middle 80% of cases, or maybe it’s more like 60%, but there are plenty of situations where humans ought to be involved. Maybe more important, when someone uses these systems to make decisions that impact a human life, there should be some explanation and understanding of how the model affects that specific situation. We want to know why the computer comes to its conclusion in medical care, employment, legal matters, or really any other situation.

There is work being done to try to explain how these models work. The important thing, however, is that even if we understand the model internally, we also need to disclose the reasoning to those affected by it. Any appeal process should include this explanation, likely with a human involved at some point to help evaluate the model for accuracy, fairness, or any other relevant measure. To me, we ought to require this of companies using AI models in their business practices.
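To make the idea of disclosed reasoning concrete, here is a minimal sketch of what a per-decision explanation could look like. The "worker performance" model, its feature names, and its weights are all hypothetical assumptions invented for illustration; real systems are far more complex, but the principle is the same: for a given decision, report how much each factor pushed the score up or down.

```python
import math

# Hypothetical toy model: a logistic score for "worker meets the bar."
# Feature names and weights are illustrative assumptions, not any real system.
WEIGHTS = {"on_time_rate": 3.0, "customer_complaints": -1.5, "packages_per_hour": 0.8}
BIAS = -1.0

def score(features):
    """Logistic score between 0 and 1 for one worker's inputs."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

def explain(features):
    """Per-feature contribution to the decision, largest impact first."""
    contribs = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

worker = {"on_time_rate": 0.92, "customer_complaints": 2.0, "packages_per_hour": 1.1}
print(f"score = {score(worker):.2f}")
for name, contribution in explain(worker):
    print(f"{name:>22}: {contribution:+.2f}")
```

For a linear model like this, the contributions are exact; for more complex models, techniques such as SHAP values approximate the same kind of per-decision attribution. Either way, this is the level of explanation a worker appealing an automated termination should be able to see.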

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.

About way0utwest

Editor, SQLServerCentral