There are running jokes about how Amazon Alexa and Google Home are susceptible to visitors in your home. It’s become a bit of a test among some geeks, who will walk into your house and call out “Hey, Alexa” to see if a device responds. I even had an accidental Siri activation while recording some content recently. I said something that started Siri listening on my phone, and then a few people nearby kept trying to duplicate the effort. Fortunately Siri wouldn’t come on, so no one was able to cause too much trouble. I suspect that the speakers in the recording room weren’t positioned well enough to allow any hacking from the control room.
I know that will change. In fact, even if Apple and other companies manage to get digital assistants to recognize specific people’s voices (as rumored), technology marches on. People are already starting to fake audio data, and the fidelity of digital recording and the capabilities of speakers improve constantly. This is going to become a greater and greater issue over time, and I have no idea what security technique will prevent it. Maybe we need two-factor authentication for audio commands? Won’t that defeat the purpose?
As data professionals, we are going to be dealing with more and more types of data, and trying to process, analyze, and act on that information. This is one reason I think that understanding data lakes, and being able to import and combine many types and formats of data, will be a valuable skill for all data professionals. Whether you use something like Azure Data Lake or another platform, I expect we’ll be combining data in all sorts of ways and providing information to users.
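To make that concrete, here is a minimal sketch of combining two formats of the same information. The device IDs, filenames, and fields are all hypothetical stand-ins for feeds you might land in a data lake: a CSV export and a JSON event log, merged on a shared key with nothing but the Python standard library.

```python
import csv
import io
import json

# Hypothetical inline samples standing in for two feeds in a data lake:
# a CSV device inventory and a JSON event log about the same devices.
csv_feed = io.StringIO("device_id,location\nd1,kitchen\nd2,office\n")
json_feed = '''[{"device_id": "d1", "last_command": "play music"},
                {"device_id": "d2", "last_command": "set timer"}]'''

# Load the CSV into dictionaries keyed by device_id.
devices = {row["device_id"]: dict(row) for row in csv.DictReader(csv_feed)}

# Fold the JSON events into the same records, matching on the shared key.
for event in json.loads(json_feed):
    devices.setdefault(event["device_id"], {}).update(event)

combined = list(devices.values())
print(combined)
```

Real pipelines would use a proper engine for this, but the shape of the work is the same: normalize each format into a common structure, then join on a shared key.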
While speech recognition might not be something many of us worry about, will we want to extract information from audio or video and use it? Do we expect that audio files have more integrity than other sources? I worry that we give more credence to some types of data than others, when all types are subject to hacking. Some of us may receive audio files as data, and it’s only a matter of time before we get hacked, perhaps with fake audio.
One of the issues we have with some data is determining the source of record. If I record my voice as a sample, and you compare all future audio of me to that sample, you can verify my identity, right? What if someone can fake my voice with simple software? It sounds crazy, but those days are coming, especially if our systems are susceptible to a person stitching together words from different captures, as the Baracksdubs videos demonstrate. What might be worse is when we find someone hacking a database and replacing the samples. What’s the source of record then?
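The compare-to-a-sample idea above can be sketched in a toy form. This assumes the audio has already been reduced to fixed-length feature vectors (real systems derive these with ML models; the vectors and threshold here are made up for illustration). The point is that the whole scheme rests on trusting the enrolled sample: a convincing fake, or a swapped sample in the database, defeats it.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, candidate, threshold=0.95):
    # Accept the candidate only if it is close enough to the enrolled sample.
    return cosine_similarity(enrolled, candidate) >= threshold

# Hypothetical feature vectors; real ones would come from an audio model.
enrolled_sample = [0.9, 0.1, 0.4, 0.7]
genuine_attempt = [0.88, 0.12, 0.41, 0.69]
forged_attempt = [0.2, 0.9, 0.1, 0.3]

print(verify_speaker(enrolled_sample, genuine_attempt))
print(verify_speaker(enrolled_sample, forged_attempt))
```

Notice that if an attacker can either synthesize a candidate vector close to the enrolled one, or overwrite `enrolled_sample` itself, the check passes. That is exactly the source-of-record problem.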
As I spend more time in this business, I become more convinced that auditing and monitoring are more important than security. We want them all, but I’d rather know I have an issue than assume my systems are protected because the security doesn’t appear to be broken.