Published Date: 7/14/2025
You might have heard someone say, 'I can’t tell what’s real anymore.' With generative AI flooding the internet, synthetic content is becoming increasingly hard to distinguish from reality. Deepfakes, once a niche concern, now pose a significant threat to public trust. From AI-generated bands going viral to fraudsters impersonating high-profile figures like U.S. Secretary of State Marco Rubio, the risks are real and escalating. The UN is stepping in, launching an initiative called the AI and Multimedia Authenticity Standards Collaboration to tackle these challenges head-on.

The initiative, led by the World Standards Cooperation (WSC), brings together the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU). The partnership aims to create a cohesive ecosystem of international standards that address the misuse of synthetic media while promoting ethical AI development. The WSC emphasizes the need for transparency, accountability, and innovation in digital content creation, ensuring that AI remains a tool for good rather than a weapon for deception.

The collaboration involves a diverse range of stakeholders, including tech giants like Adobe, Microsoft, and Shutterstock, as well as authentication specialists such as DataTrails and Deep Media. Civil society organizations like Witness and research institutions like Germany’s Fraunhofer and Switzerland’s EPFL are also part of the effort. This broad coalition highlights the urgency of the issue and the necessity of cross-sector collaboration. As the UN Agency for Digital Technologies notes, 'AI-generated content is becoming the new norm, reshaping communication and challenging long-standing assumptions about authenticity.'

Two key white papers have been released as part of the initiative. The first, a technical paper, maps existing standards in digital media authenticity and identifies gaps in areas like content provenance, watermarking, and rights declarations. The second, a policy-focused document, outlines strategies to build trust in content authenticity while balancing freedom of expression and innovation. Together, the papers are meant to guide future standardization efforts, ensuring responsible AI use and protecting users from manipulated media.

The UN initiative isn’t the only effort addressing deepfakes. The UK’s Ofcom has released a follow-up to its 2024 paper on deepfake defenses, focusing on attribution measures like watermarking tools and AI labels. Meanwhile, the World Economic Forum (WEF) has warned that disinformation is a top global risk for 2025, citing potential financial and reputational damage to businesses. As WEF’s Matthew Blake states, 'False narratives have caused serious reputational and financial damage,' emphasizing the need for proactive solutions.

Despite these efforts, challenges remain. Critics argue that AI’s potential for 'good' is often overshadowed by its risks. Charley Johnson, author of the newsletter Untangled, questions whether 'AI for good' is a realistic goal, arguing that those building solutions should first envision the world they want to create and only then work out where AI fits in. This debate underscores the complexity of governing AI in a way that benefits all humanity without stifling innovation.

The UN’s AI for Good Global Summit further highlights the organization’s commitment to ethical AI. ITU Secretary-General Doreen Bogdan-Martin stressed that AI should be a means to an end, not an end in itself.
However, the growing prevalence of deepfakes and misinformation raises concerns about whether global standards can keep pace with technological advancements. As the UN states, 'Digital content can be powerful and creative, but it must also be traceable, trustworthy, and ethically produced.'

The road ahead requires continuous collaboration, adaptability, and a focus on both technical and policy solutions. With deepfakes threatening to erode public trust and disrupt industries, the UN’s initiative represents a critical step toward a more secure digital future. By fostering international cooperation and prioritizing ethical AI, the global community can work together to combat the challenges of synthetic media while harnessing the benefits of AI for societal good.
Q: What is the UN’s initiative on deepfakes?
A: The UN’s AI and Multimedia Authenticity Standards Collaboration brings together standards bodies, tech companies, and researchers to create global frameworks for combating deepfakes and synthetic media. The initiative aims to establish transparency, accountability, and ethical AI use.
Q: How do the white papers address deepfake threats?
A: The technical paper maps existing standards in digital media authenticity and identifies gaps, while the policy paper outlines strategies to build trust in content authenticity. Both aim to guide future standardization efforts and protect users from manipulated media.
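For readers curious what 'content provenance' means in practice, the core idea is to bind a cryptographically signed record to a media file so that any later alteration of the file or its metadata is detectable. The Python sketch below is a toy illustration of that principle using only the standard library; the manifest fields and the HMAC signing key are invented for the example, and real standards such as C2PA use certificate-based signatures and a much richer, standardized manifest format.

```python
import hashlib
import hmac
import json

# Toy provenance check: a "manifest" binds a content hash plus origin
# metadata, then signs the record so tampering with either the asset
# or the record is detectable. The field names and shared-secret key
# below are hypothetical, invented for this illustration.

SIGNING_KEY = b"demo-key-not-for-production"  # assumption: shared secret

def make_manifest(asset: bytes, origin: str) -> dict:
    record = {
        "sha256": hashlib.sha256(asset).hexdigest(),
        "origin": origin,                # e.g. "camera", "editor", "ai"
        "ai_generated": origin == "ai",  # the "AI label" in miniature
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

def verify(asset: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(asset).hexdigest())

if __name__ == "__main__":
    image = b"...pretend these are image bytes..."
    manifest = make_manifest(image, origin="ai")
    print(verify(image, manifest))               # True: intact and labeled
    print(verify(image + b"edit", manifest))     # False: content was altered
```

The gaps the technical paper flags are exactly what a toy like this glosses over: real deployments must agree on the manifest format, on who is trusted to sign, and on what happens when metadata is stripped in transit.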
Q: What role do tech companies play in this effort?
A: Tech giants like Adobe, Microsoft, and Shutterstock, along with authentication specialists, are part of the collaboration. Their involvement ensures practical solutions that address real-world challenges of deepfake detection and content attribution.
Q: How does Ofcom contribute to combating deepfakes?
A: Ofcom’s 'Attribution Toolkit' evaluates measures like watermarking and AI labels to attribute content to its source. This helps users identify manipulated media and supports regulatory efforts to mitigate deepfake risks.
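To see what a 'watermarking tool' does at its simplest, consider hiding a known bit pattern in the lowest bits of a signal so a detector can later test for its presence. The Python sketch below is a deliberately naive classroom example, not Ofcom’s or any vendor’s method; a production watermark must survive compression, cropping, and re-encoding, which this one would not.

```python
# Naive least-significant-bit (LSB) watermark over raw sample bytes.
# Illustrative only: real watermarks embed in transform domains and
# survive re-encoding; this one breaks under any lossy processing.

WATERMARK = b"AI"  # hypothetical label to embed

def embed(samples: bytes, mark: bytes = WATERMARK) -> bytes:
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    out = bytearray(samples)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def detect(samples: bytes, mark: bytes = WATERMARK) -> bool:
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    return all((samples[pos] & 1) == bit for pos, bit in enumerate(bits))

if __name__ == "__main__":
    signal = bytes(range(64))      # stand-in for pixel or audio data
    marked = embed(signal)
    print(detect(marked))          # True: watermark present
    print(detect(signal))          # False: unmarked signal lacks the pattern
```

The fragility of this scheme is the point: attribution measures only work at scale if the mark survives ordinary distribution, which is why Ofcom evaluates robustness alongside adoption.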
Q: What are the economic risks of disinformation?
A: The World Economic Forum highlights that disinformation can cause massive financial and reputational damage, including stock price crashes and consumer distrust. Businesses of all sizes are vulnerable to false narratives that harm their operations and public perception.