Published Date: 10/24/2025
“We’re through the looking glass here, people.” The line is from TV’s The Simpsons, spoken by the bespectacled Milhouse about a fiendish plot by aliens and reverse vampires to eliminate the meal of dinner. Yet it arguably applies better to the present moment, as deepfakes break containment and seep into the world at large to undermine reality.
A study by Dutch cybersecurity startup Surfshark reports that losses linked to deepfakes have now surpassed 1.3 billion euros, with 860 million euros stolen in 2025 alone. In tandem, the cost of producing deepfakes has plunged, as powerful generative AI technology becomes just another app on your phone, and fraud-as-a-service schemes proliferate. As EU-Startups puts it, the “dramatic price collapse has made deception cheaper to run and far easier to scale.”
Governments are struggling to manage the damage, which is not just financial. North Korean spies have infiltrated companies with deepfake hiring scams. Ireland’s presidential election campaign has been shaken by a deepfake video, circulated online, of candidate Catherine Connolly announcing the withdrawal of her candidacy to a reporter from RTE. Connolly has issued a statement calling it a “disgraceful attempt to mislead voters and undermine our democracy.”
Meanwhile, the issue of celebrity deepfakes is having another moment in the spotlight, with an announcement that pressure from actor Bryan Cranston and a number of Hollywood associations has convinced OpenAI to strengthen the guardrails on its generative AI video tool, Sora 2. And country singer Martina McBride has spoken out against the threat AI poses to musicians.
And generative AI models are enabling new varieties of scams, such as the lost pet scam, in which deepfaked images of lost pets are used to coax reward money or “recovery fees” from worried owners. Miguel Fornes, information security manager at Surfshark, says such scams “exploit emotion for small sums, making victims less suspicious and far less likely to pursue legal action.”
The deepfake boom has also created market opportunities for deepfake detection products. Several startups have recently seen significant investment. Italy’s Trustfull raised 6 million euros in July 2025 to strengthen defenses against deepfake scams and large-scale phishing campaigns. And London-based Keyless closed a 1.9 million euro funding round in January 2025 for biometric tech to thwart injection attacks and deepfake identity spoofing.
EU-Startups mentions three others: Spain’s Acoru, which raised 10 million euros in Series A funding for anti-money laundering (AML) protections; Italy’s IdentifAI, which secured 5 million euros in July 2025 to expand its deepfake detection platform; and the UK’s Innerworks, which raised 3.7 million euros in August 2025 to focus on synthetic identity and deepfake fraud.
“Italy in particular stands out,” it says, “with two active ventures in the space, suggesting a developing national cluster around biometric and deepfake-detection innovation.”
The market activity is partly spurred by new regulations such as the AI Act and the Digital Services Act. In the financial sector, the European Banking Authority (EBA) has issued an opinion outlining how deepfakes have destabilized AML systems, and urged financial institutions to adapt. Compliance is now a motivating factor.
More info on deepfake detection providers can be found in the 51-page 2025 Deepfake Detection Market Report and Buyers Guide from Biometric Update and Goode Intelligence, which presents commercially available options with a breakdown of the key suppliers.
India is likewise taking the deepfake threat very seriously. The country’s Ministry of Electronics and Information Technology (MeitY) has tabled draft amendments to its rules to account for the rise of AI-generated content, which would impose much stricter labelling requirements.
“With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (commonly known as deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly,” says the draft.
The proposed amendments “provide a clear legal basis for labelling, traceability, and accountability related to synthetically generated information.” They also hand more of the hot potato to social media companies, aiming to “strengthen due diligence obligations for intermediaries, particularly social media intermediaries (SMIs) and significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content.”
Specific proposals include a clear definition of “synthetically generated information”; labelling and metadata embedding requirements; visibility and audibility standards requiring that synthetic content be prominently marked, with labels covering at least 10 percent of the visual display or audio duration; and “enhanced verification and declaration obligations for SSMIs, mandating reasonable technical measures to confirm whether uploaded content is synthetically generated and to label it accordingly.”
Stated goals include better user awareness, enhanced traceability and more accountability, while “maintaining an enabling environment for innovation in AI-driven technologies.”
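To make the metadata-embedding idea concrete, below is a minimal, purely illustrative Python sketch of how a platform might attach and later read a machine-readable “synthetic content” flag in a PNG’s text chunks using Pillow. The field names and workflow are assumptions for illustration only and are not drawn from the MeitY draft or any standard.

```python
# Illustrative sketch only: the metadata keys ("synthetic", "generator") are
# hypothetical and not taken from the MeitY draft or any provenance standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_synthetic_png(src_path: str, dst_path: str, generator: str) -> None:
    """Copy a PNG, adding text chunks that flag it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("synthetic", "true")     # hypothetical machine-readable flag
    meta.add_text("generator", generator)  # e.g. the tool that produced the image
    img.save(dst_path, pnginfo=meta)


def read_synthetic_label(path: str) -> dict:
    """Return the PNG text chunks so an upload-time check can verify the label."""
    img = Image.open(path)
    return dict(getattr(img, "text", {}))  # .text holds text chunks for PNGs
```

In practice, a generator or platform would more likely rely on an established provenance standard such as C2PA Content Credentials rather than ad hoc metadata keys, but the shape of the obligation is the same: write a label at creation time and verify it at upload time.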
The MeitY has issued a call for comments inviting feedback from stakeholders.
Q: What are deepfakes?
A: Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else's likeness using artificial intelligence. They can be used to create realistic but fake videos and audio recordings.
Q: How are deepfakes being used for fraud?
A: Deepfakes are being used to create realistic scams, such as impersonating individuals in financial transactions, spreading misinformation, and influencing political campaigns.
Q: What is the cost of deepfake-related fraud in 2025?
A: Losses linked to deepfakes have surpassed 1.3 billion euros, with 860 million euros stolen in 2025 alone, according to a study by Surfshark.
Q: Which countries are leading in deepfake detection technology?
A: Italy and the UK are leading in deepfake detection technology, with several startups raising significant investments to develop and improve detection products.
Q: What new regulations are being proposed to combat deepfakes?
A: India and the EU are proposing stricter regulations, including mandatory labelling of synthetic content, enhanced verification requirements, and increased accountability for social media platforms.