The Creepiness of AI

Last year, I watched a keynote talk from Matthew Renze about AI. His talk included examples of the amazing things that Artificial Intelligence can do, as well as some of the creepier things that have been developed. It was an interesting talk, one that gives me both inspiration and hope for the benefits of better computer algorithms and concern about issues we may be unprepared to deal with as a society.

One of the more controversial recent items in AI was the Google phone call, where a computer answers a call and interacts with a human. What's disconcerting here is that the person doesn't know this is a computer, and the computer uses speech patterns, like "um" interspersed in its answers, that deceive the listener. While this certainly might be helpful in scheduling situations like the one shown in the call, there is a downside. Could you imagine artificial personas used in telephone scams or phishing situations? A fake help desk that knows some of your information and then asks you to verify other data?

There are perhaps greater concerns, such as the work done imitating President Obama. There are fake speeches, generated by computer. While movie studios might want fake actors to reduce labor costs, should we worry about the implications of a computer being able to imitate one of us in a video call?

The use of AI and ML with the large amounts of data an organization might have gathered could be good or bad, but it certainly opens the world to more problems than benefits if there isn't mandatory disclosure of the cases where these techniques are used. Since there will always be criminal elements that don't obey rules, this might be very scary.

There are certainly other issues, such as Target predicting a pregnancy, which was the first really creepy piece of data analysis I saw. That one is a few years old, and it still bothers me: it was accurate, but an unrefined use of the data. It's a good example of marketing groups being a bit too excited to use AI/ML technologies without thinking through the implications. Fortunately, this case seems to have dampened some of the enthusiasm for prediction in retailing.

Perhaps this item is a bit funny, but it is also very worrisome for me. It's the case of an AI system playing video games. The AI system decided the best way to get the best score was to pause the game. Rather than compete and try to do better, the computer decided to just stop. This was a completely unexpected outcome, probably because the feedback and expectations weren't explicit. Since humans quite often don't specify their requirements or expectations very well, I could imagine this becoming a very large issue as AI systems are used more often. It could even be deadly when a system does something we didn't anticipate and impacts human health.

Most of us won’t work with AI much as a technician, other than providing or managing some of the data. I do expect AI and ML systems to touch more and more of our lives, perhaps using our data for good, perhaps not. Hopefully we can help steer applications into the former more than the latter situation.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.7MB) podcast or subscribe to the feed at iTunes and Libsyn.

About way0utwest

Editor, SQLServerCentral
