Published Date: 8/6/2025
The rapid advancement of artificial intelligence (AI) has brought about a concerning phenomenon known as AI Slop. This term, which is likely to find its way into Webster’s dictionary soon, refers to the blurring line between truth and fiction caused by the proliferation of synthetic media. With generative AI images, audio, and video improving month over month, distinguishing what is real from what is not is becoming increasingly challenging. This not only tests our discernment but also fuels a dangerous loop known as the Liar’s Dividend, which is on a path to disrupt our trust in media, society, and cybersecurity.
The Liar’s Dividend describes the advantage gained by those who spread false information in an environment flooded with misinformation. Lies travel faster than the truth, and truth is always trying to keep up. As synthetic media increases, it becomes easier for scammers to plant doubt about what is real, even when confronted with authentic evidence. The phrase was first articulated by legal scholars Bobby Chesney and Danielle Citron, who noted that as people lose trust in digital media such as images or videos, liars can dismiss inconvenient truths as “fake news” or AI-generated deepfakes, dodging consequences and accountability.
With the improved quality of deepfakes and other synthetic media becoming widespread, people are growing skeptical about what they see online. The technology is the worst it will ever be right now; it only improves from here. With plausible deniability, public figures or criminals can now claim genuine footage is fake and misdirect public perception. There have even been fake hands and extra fingers sold online; when worn, they lead viewers to automatically dismiss images or videos as fake because the number of fingers looks wrong, undermining their legitimacy without any further analysis. Ultimately, when truth itself becomes subjective, malicious actors gain greater freedom to mislead, manipulate, and evade justice. The stakes keep rising: the basis for informed decision-making and societal norms is threatened when real evidence can be easily dismissed or undermined by leaders and influencers.
Deepfakes use artificial intelligence to generate hyper-realistic videos, audio, or images that mimic real people or locations. The impact of such synthetic media in the hands of scammers and cybercriminals is a growing concern within the cybersecurity space. It’s essential to understand that within minutes, using thirty seconds of audio and a picture of the subject pulled from social media, it is easy to impersonate executives to commit fraud, mimic family members for grandparent scams, spread fake news to sway elections or cause chaos, discredit truthful information by calling it “deepfaked,” and blackmail, defraud, or manipulate individuals and corporations. All of these are occurring in society today, and we have already seen deepfake videos created around crisis events such as wars, natural disasters, and major national events.
Deepfakes can be created within minutes for as little as $20, or for free using open-source software, and several recent events have brought this reality to light. In recent years, we have witnessed an alarming rise in sophisticated financial scams and deceptive practices leveraging artificial intelligence. A particularly striking example occurred when Arup, a British engineering firm, fell victim to a $25 million fraud scheme that used deepfake technology to impersonate the company’s CFO during a video conference call, resulting in 15 unauthorized transactions. Earlier this year, both Secretary of State Marco Rubio and White House Chief of Staff Susie Wiles were targeted with sophisticated deepfake attacks in which attackers used AI-generated voice deepfakes and spoofed messaging accounts to impersonate them and contact high-ranking government officials, including foreign ministers and US governors. The trend has extended internationally, with synthetic media used to attribute false statements to prominent political figures, including UK Prime Minister Keir Starmer and leaders in the US, Turkey, Argentina, and Taiwan. While these incidents were discovered before causing significant harm, they expose critical vulnerabilities in the verification of official communications. They underscore the urgent national security risks posed by synthetic media and the growing demand for improved deepfake detection tools and more stringent authentication protocols at the highest levels of government.
Deepfake fraud attempts have surged by 3,000% in recent years, and incidents have become increasingly sophisticated, using multimodal approaches such as text-to-video, image-to-video, and lip syncing. Political deepfakes have been used for false statements, election manipulation, and character attacks targeting global leaders and candidates. Celebrities and ordinary citizens alike have had their likenesses weaponized for scams, sexual exploitation, and reputational harm. In 2024, businesses faced an average loss of nearly $500,000 due to deepfake-related fraud, with large enterprises experiencing losses of up to $680,000, and total losses are expected to approach $40 billion by 2027. Organizations continue to grapple with deepfakes: 66% of cybersecurity and incident response professionals experienced an incident involving deepfake use in 2022, a 13% increase from the previous year. Even more alarming, in 2024, 50% of leaders said their employees had not had any training on identifying or addressing deepfake attacks. While that number has been declining, it remains troubling that 1 in 4 business leaders are unfamiliar with deepfakes and 32% doubt their employees’ ability to detect them. With the rise of AI slop, identifying synthetic media is only becoming more difficult.
While society wrestles with the issue of deepfakes, the cybersecurity industry continues to battle deepfake creators. AI advances are empowering defenders with new detection tools, but the technology for creating synthetic media is evolving just as fast. Technological guardrails, such as verifying audio signatures or watermarking media, might slow attackers down, but for the most part, technology is unable to keep up. The risks are varied: criminals and corrupt officials can deny legitimate digital evidence, undermining its authenticity. Fake corporate announcements or executive deepfake texts, videos, or audio calls can trigger financial volatility and move stock markets. Deepfakes have also enabled imposters to pass job interviews and infiltrate sensitive roles, leading to espionage and insider threats. What is easier than bypassing technology and people? Simply get hired at the organization with a top-notch resume, a perfect skills match, and a clean background check. These incidents are a growing cybersecurity concern: many organizations are playing catch-up, while others are still watching from the sidelines, waiting for generative AI policies to take effect.
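To make the idea of such guardrails more concrete, below is a minimal Python sketch of signing and verifying a published media file. It assumes a hypothetical shared HMAC key and file name purely for illustration; real provenance efforts such as C2PA rely on public-key signatures and embedded manifests rather than a shared secret, so treat this as a conceptual outline, not an implementation of any standard.

```python
import hmac
import hashlib
from pathlib import Path

# Hypothetical shared key for illustration only; real provenance schemes
# (e.g., C2PA) use public-key signatures and embedded manifests instead.
SIGNING_KEY = b"example-key-shared-out-of-band"

def sign_media(path: Path) -> str:
    """Produce an HMAC-SHA256 tag over the raw bytes of a media file."""
    return hmac.new(SIGNING_KEY, path.read_bytes(), hashlib.sha256).hexdigest()

def verify_media(path: Path, claimed_tag: str) -> bool:
    """Check a file against a tag received over a trusted channel."""
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign_media(path), claimed_tag)

if __name__ == "__main__":
    clip = Path("announcement.mp4")        # hypothetical media file
    clip.write_bytes(b"demo video bytes")  # stand-in content for the demo
    tag = sign_media(clip)                 # published alongside the clip
    print("authentic" if verify_media(clip, tag) else "possibly tampered")
```

The limitation noted above still applies: a valid tag only proves a clip matches what a known party published over a trusted channel, and most synthetic media circulating online carries no signature at all.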
Combating the combined threat of the liar’s dividend and deepfakes will require technology, but more importantly, it requires reducing human risk through awareness and education. Years ago, people couldn’t spot the scams arriving via email. Now, almost everyone is aware of those attacks, and most people can recognize them and respond. We are at the same precipice with deepfakes and synthetic media. Organizations need to ensure that their behavioral defenses are active and ready to go. Best practices and tips for individuals and organizations need to be shared, considered, and implemented within the next six months to a year to counter these attacks.
Adopt a Zero-Trust mindset and treat all unexpected or urgent communications with skepticism. These days, with video doorbell cameras, people can check who is at the door when they are not expecting a pizza delivery, a package, or a friend, and decide whether they need to act. The same skepticism should apply, and even more so, to requests that travel an organization’s hierarchy, especially those involving financial transactions, password resets, or requests for personal data. A “Trust and Verify” behavior means never acting on a video, audio clip, or message without first confirming it through a trusted secondary channel. If needed, call the person directly or check with IT/security. When it comes to public news, rely on credible sources and confirm that the story can be verified. When in doubt, refrain from reposting it.
Educate yourself and your users on how to identify deepfakes. There are tell-tale signs: watch for subtle inconsistencies such as unnatural facial movements, mismatched audio/video quality, odd intonation, or unusual body language. Granted, some of this is what we see in video chats daily, so consider the source and the situation, and take a moment to weigh the implications before acting. Sound familiar? These are the same principles we apply to email. Essentially, it is social engineering, whether in text, images, audio, or video. While cybersecurity professionals have long advised against sharing personal information online, such as photos and videos, it is essential to configure privacy settings so that only authorized individuals, such as family members or friends, can access personal information about you, your family, or your organization. Check that your privacy settings minimize public content that could be used to create deepfakes.
People and organizations need to stay informed about deepfakes and AI trends and continue to strengthen their media literacy. By encouraging critical thinking and skepticism, especially toward images, video, and audio designed to provoke an emotional reaction, human risk can be reduced. Of course, the “see something, say something” concept becomes even more important: users need tools and channels to report suspected deepfakes, even anonymously. Providing ways to report and share responsibly is a crucial defense against the growing threat of deepfakes and synthetic media.
Q: What is the Liar's Dividend?
A: The Liar’s Dividend describes the advantage gained by those who spread false information in an environment flooded with misinformation. Lies travel faster than the truth, and truth is always trying to keep up. As synthetic media increases, it becomes easier for scammers to plant doubt about what is real, even when confronted with authentic evidence.
Q: How do deepfakes work?
A: Deepfakes use artificial intelligence to generate hyper-realistic videos, audio, or images that mimic real people or locations. Within minutes, using as little as thirty seconds of audio and a picture of the subject from social media, it is easy to create deepfakes that can impersonate executives, mimic family members, spread fake news, and more.
Q: What are some real-world examples of deepfake attacks?
A: Real-world examples include a $25 million fraud scheme against British engineering firm Arup, where deepfake technology was used to impersonate the company’s CFO during a video conference call. Additionally, deepfake attacks have targeted high-ranking government officials, including the US Secretary of State and White House Chief of Staff, using AI-generated voice deepfakes.
Q: How can organizations protect themselves from deepfake attacks?
A: Organizations can protect themselves by adopting a Zero-Trust Mindset, verifying all unexpected or urgent communications, and educating employees on how to identify deepfakes. Implementing best practices and tips, such as checking for subtle inconsistencies in videos and audio, and ensuring privacy settings are configured to minimize public content, can also help.
Q: What is the impact of deepfakes on society and cybersecurity?
A: Deepfakes can undermine trust in digital media, leading to a crisis of truth. They can be used for financial fraud, political manipulation, and reputational harm. The economic impact is significant, with businesses facing average losses of nearly $500,000 due to deepfake-related fraud, and the expected total loss by 2027 is upwards of $40 billion.