The smart speaker space has exploded over the past few years, as Amazon, Google, Apple, and others have continued to crank out internet-connected, voice activated speakers that you can control by talking to them.
But some folks have been wary of putting devices with always-listening microphones in their homes, and reports have indicated that there’s a chance companies that make these products might be listening to your voice from time to time.
Now a new report from Security Research Labs indicates that it might not just be Amazon or Google listening in. They developed a series of apps that could keep listening in after you thought the app had closed or trick some users into providing their passwords — and all of these apps were distributed through Amazon and Google’s official channels.
In a nutshell, the vulnerabilities managed to make it past the official review processes in two ways. First, Amazon and Google typically only review an app (Amazon calls them “skills,” and Google dubs them “actions)” when it’s submitted the first time. Updates are not reviewed the same way, so the researchers were able to slip in their sneaky code through updates.
Second, the skills and actions basically tricked Google Assistant and Alexa into playing silence by inserting an unpronounceable character (�), causing it to seem like a voice app had finished running, when it was in fact still going.
You can see the apps in action in a series of YouTube videos from Security Research Labs.
The good news is that the proof-of-concept malicious skills and actions have been removed. And it’s unclear if any truly malicious developers have been using these techniques to spy on users.
According to a statement released to Ars Technica in response to the Security Research Labs disclosure, Google says it’s “removed the Actions that we found from these researchers” and that the company is “putting additional mechanisms in place to prevent these issues from occurring in the future.”
Amazon says it’s also removed the apps, and made changes including making sure that it’s no longer possible for skills to get a transcript of what a customers says after saying “stop” to the skill. The company also now prevents skills from asking users for their Amazon passwords.
But I wouldn’t be surprised if we see other exploits in the future — the rising popularity of smart speakers and voice assistants makes them a tempting target for malicious hackers… as does their ease of use. Users don’t need to download and install anything to interact with a Alexa skill or Google Assistant action, you just say a certain set of words to trigger them. And that means it’s up to Amazon and Google to make sure there are no major security vulnerabilities.
It’s obviously in those companies’ best interests to try to keep their platforms secure. But today I’m feeling pretty good about my household’s decision to skip to dumb speakers.