Published Date: 7/13/2025
As AI agents become more prevalent across industries, a critical issue is emerging: trust gaps between users and these automated systems. Okta, a leading provider of identity and access management solutions, has released a report that underscores the urgency of addressing these trust issues. The Auth0 Customer Identity Trends Report 2025, which analyzed data from 6,750 consumers across nine countries, reveals that while AI innovation is accelerating, user confidence remains fragile. This disconnect could hinder the widespread adoption of AI agents if it is not resolved.

The report highlights that 60% of users are deeply concerned about AI's impact on digital identity, privacy, and security. This fear is compounded by the increasing reliance on AI for daily digital tasks, from online shopping to financial transactions. Many users feel that AI systems lack the transparency and accountability needed to handle sensitive data, leading to skepticism about their reliability. For example, 70% of surveyed users prefer interacting with humans over AI agents during critical transactions, citing a lack of trust in how their personal information is managed.

These identity concerns are not unfounded. Many respondents believe that current systems lack robust Identity and Access Management (IAM) protocols, and the report notes a rise in identity-based attacks, particularly in sectors like retail and e-commerce. These industries face a high volume of fraudulent sign-ups, which can lead to data breaches and financial losses. The report also points out that professional and financial services are increasingly targeted, further eroding user confidence in AI-driven systems.

Despite these challenges, the report offers a glimmer of hope: it suggests that enhancing the user experience could be a catalyst for building trust. For instance, 74% of users said they would prioritize a company's reputation and trustworthiness over product quality, and 38% expressed willingness to trust AI agents if human oversight is involved. Younger demographics, in particular, are open to modern authentication methods like passkeys and biometrics, which could bridge the trust gap if implemented effectively.

To address these issues, the report urges organizations to adopt a trust-by-design approach to AI. This includes strengthening identity layers with advanced tools that detect and prevent bot-based attacks, prioritizing user-friendly security measures, and fostering cross-sector collaboration. By integrating these strategies, businesses can create a more secure and transparent environment for AI agents, ultimately improving user confidence.

The findings also emphasize the importance of digital trust in the age of AI. As users become more aware of the risks associated with AI, companies must take proactive steps to ensure their systems are secure, transparent, and aligned with user expectations. This includes investing in identity security frameworks that protect against evolving threats while maintaining a seamless user experience. Without such efforts, the potential of AI agents to revolutionize industries may remain unrealized.

In conclusion, Okta's report serves as a wake-up call for organizations to address the trust gaps in AI. By focusing on identity security, user experience, and transparency, businesses can build the confidence needed to drive AI adoption. As the digital landscape continues to evolve, the ability to earn user trust will be a decisive factor in the success of AI initiatives.
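The report does not prescribe a specific implementation for detecting bot-based attacks, but one common first line of defense against the bot-driven fraudulent sign-ups it describes can be sketched as a sliding-window rate limiter that flags bursts of registration attempts from a single source. The class name `SignupRateLimiter`, its thresholds, and the IP addresses below are hypothetical choices for illustration, not part of the report.

```python
# Illustrative sketch only (not from the Okta/Auth0 report): a minimal
# sliding-window rate limiter that flags bursts of sign-up attempts
# from one source, a basic defense against bot-driven registrations.
import time
from collections import defaultdict, deque


class SignupRateLimiter:
    def __init__(self, max_signups=5, window_seconds=60):
        self.max_signups = max_signups
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # source -> recent timestamps

    def allow(self, source, now=None):
        """Return True if a sign-up attempt from `source` is within limits."""
        now = time.monotonic() if now is None else now
        q = self.attempts[source]
        # Discard timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_signups:
            return False  # burst detected: treat as a likely bot
        q.append(now)
        return True


# Usage: the fourth attempt from the same address within the window is denied.
limiter = SignupRateLimiter(max_signups=3, window_seconds=60)
for t in range(4):
    print(limiter.allow("203.0.113.7", now=t))
```

In production, such heuristics are typically one signal among many (device fingerprinting, challenge flows, behavioral analysis) rather than a standalone defense, which is consistent with the report's framing of layered identity protection.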
Q: Why are users concerned about AI agents?
A: Users are worried about AI agents due to fears related to digital identity, privacy, and security. Many believe these systems lack transparency and robust security measures, leading to skepticism about their reliability.
Q: What percentage of users prefer human interaction over AI?
A: According to the report, 70% of users prefer interacting with humans over AI agents during critical transactions, citing a lack of trust in AI's handling of personal data.
Q: How can organizations improve trust in AI agents?
A: Organizations can enhance trust by implementing advanced identity protection tools, prioritizing user-friendly security measures, and adopting a trust-by-design approach to AI development.
Q: What role does identity and access management (IAM) play in AI trust?
A: IAM is crucial for ensuring secure access to AI systems. The report highlights that many users feel current IAM mechanisms are insufficient, increasing the risk of data breaches and fraud.
Q: What are the main sectors affected by identity-based attacks?
A: Retail and e-commerce are the hardest hit by identity-based attacks, with a high number of fraudulent sign-ups. Professional services and financial institutions are also significantly impacted.