Published Date: 8/14/2025
In the Hunger Games franchise, engineered mutant birds called jabberjays drive people to madness by mimicking the voices of their loved ones in pain. This is an apt metaphor for the wave of AI voice cloning fraud plaguing U.S. consumers, who collectively lost nearly $3 billion to imposter scams in 2023 alone. Many of these scams target the elderly, sometimes posing as relatives in need.
In response, more than 75,000 people have signed a petition, delivered by Consumer Reports, urging the Federal Trade Commission (FTC) to hold companies that operate biometric AI voice cloning products accountable. The petition calls for protecting all Americans from these kinds of deepfakes.
“AI voice cloning tools are making it easier than ever for scammers to impersonate someone’s voice,” says Grace Gedye, policy analyst for AI issues at Consumer Reports. “These AI-enabled scams are increasingly difficult to detect, are costing consumers real money, and can present a threat to our national security as we recently saw when someone impersonated Secretary of State Marco Rubio. We urgently need proper oversight and guardrails for this technology.”
“We are calling on the FTC, as well as national and state policymakers, to investigate AI voice cloning companies with insufficient guardrails and address the dangers this emerging technology presents to consumers.”
Specifically, the petition calls on the FTC to use its Section 5 powers to investigate companies that facilitate voice cloning scams and hold them accountable. It also asks the agency to resume work on the Individual Impersonation rulemaking (SNPRM, R207000), and urges state Attorneys General to use their own laws and enforcement tools to investigate voice cloning apps and hold companies accountable if they are not doing enough to protect consumers.
Presumably, the best way for companies to protect customers would be to slow the development of cheap, freely available voice cloning tools. Alas, the AI gold rush continues, and large companies keep building ever more sophisticated biometric voice technology.
The Register reports that new capabilities in Microsoft’s Azure AI Speech allow users to rapidly generate a voice replica with just a few seconds of sampled speech. The new zero-shot text-to-speech model, named “DragonV2.1Neural,” produces more natural-sounding and expressive voices and will generate audio in more than 100 supported languages. Microsoft says it “unlocks a wide range of applications, from customizing chatbot voices to dubbing video content in an actor’s original voice across multiple languages, enabling truly immersive and individualized audio experiences.”
Microsoft states that its policies require consent from anyone whose voice is reproduced. How it intends to enforce this, however, remains an open question.
As to what Silicon Valley’s giants think about the potential for easy voice cloning or synthesis to supercharge fraud, one need look no further than OpenAI’s Sam Altman, who recently threw some dirt at voice authentication efforts, saying, “apparently there are some financial institutions that will accept the voiceprint as authentication. That is a crazy thing to still be doing. Like, AI has fully defeated that.”
The rise of AI voice cloning is a double-edged sword, offering both innovative possibilities and significant risks. While companies like Microsoft continue to push the boundaries of what is possible with voice technology, it is crucial that robust measures are in place to protect consumers from the growing threat of impersonation fraud.