Published Date: 10/3/2025
At an event held during the 80th UN General Assembly, a discussion addressed the question of “Trusted Digital Identity for People & AI,” through the lens of deploying Digital Public Infrastructure (DPI) that is secure and equitable. The topic of AI agents dominated the conversation.
A blog from the Decentralized Identity Foundation (DIF), which had two members on the panel, noted that the primary challenge the panelists examined is “the persistent gap between policy and production.” While SDG 16.9’s target of legal identity for all by 2030 is clear, progress toward it is often stalled by protocol fragmentation and the lack of a robust architectural model for a world where both people and AI agents are first-class citizens.
The UN Sustainable Development Goals were not designed with AI agents in mind, nor is it clear on what basis they would claim citizenship. However, the current moment proceeds as though AI is inevitable, and as such, it must be tabled for discussion. “Today there’s not really a robust model for AI agents,” says Matt McKinney, CEO of AIGNE, “the agentic ecosystem for AI apps,” and one of the aforementioned DIF members. “That’s something a lot of people are working on, but there really is no clear line of sight or clear path in terms of bringing AI agents into our trusted ecosystem.”
This statement raises the question of whether agentic AI, a relatively new entrant into the tech ideasphere, should be part of a robust, trusted digital identity ecosystem at all. Nonetheless, McKinney suggests that as we build identity and think about bringing people into a trusted environment, we first need to acknowledge that there are two types of subjects we’re dealing with: people and the AI agents they authorize. For all its purported transformational power, agentic AI never comes without a warning that we need some way to distinguish who’s real and who’s a bot.
Humans, McKinney says, “need the ability to safely delegate tasks without actually handing over the keys to their entire digital life. And as AI moves closer and closer to us, this is becoming a bigger and bigger issue: how do we actually maintain our personal identity from an agent’s perspective?” Once again, rather than ask whether a continued convergence of humanity with algorithmic large language models is actually beneficial, McKinney suggests the trick is in the right settings: you just have to make sure the agents only have the keys they need to unlock the doors you want them to.
He suggests this can be accomplished through controller-bound credentials, “a special ID that permanently links the AI to its owner so we always know who’s accountable,” often achieved with encrypted biometrics; and by ensuring that AI is auditable and accountable by implementing “scoped and time-boxed permissions.” Most important, however, is “having an auditable path to revocation, meaning we log every time the AI uses its key and we have the power to turn off that key at any time.”
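The three safeguards McKinney describes can be combined into one delegation model. Below is a minimal sketch of that pattern: a credential permanently bound to its human controller, limited to scoped and time-boxed permissions, with every use logged and a revocation switch. All names and the in-memory design are hypothetical; a production system would use signed, verifiable credentials rather than plain objects.

```python
import time
from dataclasses import dataclass, field

@dataclass
class DelegatedCredential:
    """A key delegated to an AI agent, bound to an accountable human owner.

    Illustrative only: identifiers, scopes, and storage are assumptions,
    not a real credential format.
    """
    controller_id: str      # the human the credential is permanently linked to
    agent_id: str           # the AI agent holding the key
    scopes: frozenset       # scoped permissions, e.g. {"calendar:read"}
    expires_at: float       # time-boxed: Unix timestamp after which the key dies
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def use(self, scope: str) -> bool:
        """Attempt an action; every use is logged, allowed or not."""
        now = time.time()
        allowed = (not self.revoked
                   and now < self.expires_at
                   and scope in self.scopes)
        self.audit_log.append(
            {"agent": self.agent_id, "scope": scope,
             "time": now, "allowed": allowed})
        return allowed

    def revoke(self) -> None:
        """The auditable path to revocation: turn off the key at any time."""
        self.revoked = True

# The agent can only unlock the doors its owner chose, for a limited time.
cred = DelegatedCredential(
    controller_id="did:example:alice",
    agent_id="did:example:alice-agent",
    scopes=frozenset({"calendar:read"}),
    expires_at=time.time() + 3600,   # valid for one hour
)
print(cred.use("calendar:read"))     # True: in scope, in time, not revoked
print(cred.use("payments:send"))     # False: outside the delegated scope
cred.revoke()
print(cred.use("calendar:read"))     # False: key has been turned off
print(len(cred.audit_log))           # 3: every attempt left an audit trail
```

The key design choice is that denial is the default: an action succeeds only if the credential is unrevoked, unexpired, and explicitly in scope, and even failed attempts are written to the log that accountability depends on.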
The confidence in models that has seized the identity sector reflects boundless optimism about the potential for mass uptake of digital identity. But adding AI agents to the mix means adding a layer of trust that will need to be sold to the public just like digital identity itself. Given the recent reception to Keir Starmer’s digital ID salvo in the UK, there is already more than enough work to be done before we begin granting citizenship to algorithms.
“The next question is how do we actually build this without taking on a big risk,” McKinney observes. He outlines four key steps. The first is “starting from policy. So, first policy and architecture: before we actually build anything, we sit down and we create the rules.” That approach is dead on arrival, in that it has already failed: the tech is built, and the rules are not yet written.
“An internet of trust” comes up in the discussion, as does “trust elevation,” both phrases of Nicola Gallo of Nitro Agility. Ken Ebert, CEO of Indicio, steers the conversation toward biometric verifiable credentials, as a defense against AI deepfakes and exploding financial fraud. So the conversation cycles: AI will do everything for us, but the risks mean we need to control it, and since we can’t yet, we need more AI to combat fraud enabled by AI, because AI will do everything for us.
Which is to say, AI agents in the workflow have been sold as a revolution in efficiency. But if every innovation creates a new problem, trust will be elusive – which will only make the sales pitch for digital ID even harder. The session as described aimed to focus on “turning the UN’s digital identity strategy into a deployable reality, grounded in the core principles of interoperability, privacy by design, and inclusion for all.” Inadvertently, it highlighted the tension between these stated goals and a relentless culture of innovation that will tell us something is indispensable before it even exists.
Q: What is the main challenge discussed in the UN panel on digital identity?
A: The main challenge discussed is the persistent gap between policy and production, particularly in creating a robust architectural model for a world where both people and AI agents are first-class citizens.
Q: What is the role of AI agents in the digital identity ecosystem?
A: AI agents are seen as entities that need to be integrated into the digital identity ecosystem, but there is a lack of a clear model for how to do this securely and responsibly.
Q: How can AI agents be safely integrated into digital identity systems?
A: AI agents can be safely integrated by using controller-bound credentials, encrypted biometrics, and ensuring AI is auditable and accountable with scoped and time-boxed permissions.
Q: What are the concerns about adding AI agents to digital identity systems?
A: The main concerns are the need to distinguish between real humans and bots, the risk of fraud, and the challenge of building trust with the public in the reliability and security of these systems.
Q: What are the key steps to building a trusted digital identity ecosystem with AI agents?
A: The key steps include starting with policy and architecture, ensuring robust security measures, and maintaining an auditable path to revocation for AI agent credentials.