At Amazon’s re:MARS conference, Rohit Prasad, Alexa’s senior vice president, showed off a surprising new Alexa capability: the supposed ability to mimic voices. So far, there isn’t any timeline on when, or if, this feature will be released to the public.
Stranger still, Amazon framed this voice-copying ability as a way to memorialize lost loved ones. It played a demonstration video in which Alexa read to a child in the voice of his recently deceased grandmother. Prasad stressed that the company was looking for ways to make AI as personal as possible. While AI can’t take away the pain of loss, he said, “it can certainly make the memories last.” An Amazon spokesperson told Engadget that the new skill can create a synthetic voiceprint after being trained on as little as a minute of audio from the person it is meant to replicate.
Security experts have long feared that deepfake audio tools, which use text-to-speech technology to create synthetic voices, would pave the way for a flood of new scams. Voice-cloning software has already enabled a number of crimes, such as a 2020 incident in the United Arab Emirates in which fraudsters tricked a bank manager into transferring $35 million by posing as a company director. But deepfake audio crime is still relatively uncommon, and the tools available to scammers remain primitive for now.