Amazon has always tried to push its Alexa-enabled smart speakers as a platform, boasting about the number of third-party “skills” available (more than 100,000 in the most recent count). In our experience, most of these skills are useless tricks; jokes from a note that you install and forget. But it turns out that they can also pose a threat to privacy.
The first large-scale study of privacy vulnerabilities in Alexa’s skills ecosystem was conducted by researchers from North Carolina State and Ruhr-University Bochum, Germany. They found a number of issues of concern, particularly in the verification processes that Amazon uses to verify the integrity of each skill. Here is a quick summary of his findings:
- Activating the wrong skill. Since 2017, Alexa will automatically enable skills if users ask the right question (also known as an “invocation phrase”). But the researchers found that in the US store alone there were 9,948 skills with double invocation phrases. This means that if you ask Alexa for “spatial facts”, for example, he will automatically activate one of the countless skills that use that phrase. How this skill is chosen is a complete mystery, but it may well lead users to activate the wrong or unwanted skills.
- Ability to publish under false names. When installing a skill, you can check the name of the developer to ensure its reliability. But the researchers found that Amazon’s verification process to verify that developers are who they say they are is not very secure. They were able to publish skills under the names of major corporations like Microsoft and Samsung. Attackers can easily post skills by pretending to be from reputable companies.
- Change the code after publication. The researchers found that publishers can make changes to the back-end code used by the skills after publication. This does not mean that they can change a skill to do anything, but they can use that loophole to turn questionable actions into skills. Then, for example, you could publish a skill for children that would be verified by Amazon’s security team, before changing the backend code so that it requests confidential information.
- Relaxed privacy policies. Privacy policies should inform users about how their data is being collected and used, but Amazon does not require skills to have tracking policies. The researchers found that only 28.5% of skills in the U.S. have valid privacy policies, and that number is even lower for skills aimed at children – only 13.6%.
None of these findings is a smoking gun for any specific Alexa skill that deflects invisible data. But together, they paint a worrying picture of Amazon’s (in) attention to privacy issues. With that in mind, it’s probably a good time to prune the Alexa skills you’ve enabled on your devices.
You can do this through the Alexa app or, more easily, over the web. Just go to alexa.amazon.com, log in to your Amazon account, click on “Skills” in the sidebar, then “your skills” in the upper right corner and disable any skills you are not using. I just checked my own account and found that I had more than 30 installed in various tests over the years. This has now been reduced to three healthy ones.
We can only hope that Amazon will pay a little more attention to this area in the future. In a comment given to ZDNet, a company spokesman said that “the security of our devices and services is a priority” and that the company conducts regular reviews to identify and remove malicious skills. Some of these protocols may need to be updated.