Published Date: 7/14/2025
As AI agents become more prevalent across industries, a critical issue is emerging: the widening gap between technological advancement and user trust. Okta's recent Customer Identity Trends Report 2025 sheds light on this challenge, emphasizing that trust is no longer a nice-to-have but a necessity for organizations leveraging AI. The report, based on a global survey of 6,750 consumers and anonymized data from the Auth0 platform, underscores that while AI innovation is accelerating, user confidence remains fragile.

The report highlights that 60% of surveyed users are deeply concerned about AI's impact on digital identity, privacy, and security. This skepticism is compounded by the fact that 70% of users prefer human interaction over AI agents during transactions. Their fears stem from a lack of trust in how AI handles personal data, a concern that is particularly acute in retail and e-commerce, where identity-based attacks are on the rise and fraudulent sign-up attempts now outnumber legitimate ones. Financial and professional services face similar challenges as cybercriminals exploit vulnerabilities in AI systems.

Underlying much of this concern is identity security: many users believe that AI deployments lack robust Identity and Access Management (IAM) mechanisms, increasing the risk of data breaches. This concern is not unfounded, as the report notes that weak IAM frameworks can lead to significant security lapses. To address this, Okta urges organizations to adopt advanced identity protection tools, prioritize user-friendly security measures, and take a trust-by-design approach to AI development.

The report also reveals that user experience plays a crucial role in building trust: a staggering 74% of users say they prioritize a company's reputation and trustworthiness over product quality.
Additionally, 38% of users are more likely to trust AI agents with human oversight. Younger users, in particular, are open to modern authentication methods like passkeys and biometrics, which could help bridge the trust gap. However, these solutions require seamless integration with existing systems to avoid user friction.

Okta's findings suggest that the path to trust involves more than just technical fixes. Organizations must foster cross-sector collaboration, invest in identity security, and ensure transparency in AI operations. By doing so, they can create a safer digital environment where users feel confident in AI's ability to handle sensitive tasks. The report concludes that only those companies that prioritize trust will thrive in the AI-driven future, as user confidence is the cornerstone of sustainable growth.

In the face of rising fraud and security threats, the need for proactive measures has never been clearer. Okta's report serves as a wake-up call for businesses to rethink their approach to AI deployment, focusing not just on innovation but on the human element that underpins trust. As AI agents continue to evolve, the balance between efficiency and security will determine their long-term success.
Q: Why is user trust in AI agents declining?
A: User trust is declining due to concerns about AI's impact on privacy, digital identity, and security. Many users fear that AI systems lack robust Identity and Access Management (IAM) mechanisms, making them vulnerable to data breaches and fraud.
Q: How do identity-based attacks affect AI adoption?
A: Identity-based attacks, particularly in sectors like retail and e-commerce, are eroding trust in AI. These attacks exploit weak security frameworks, leading users to question the reliability of AI systems for handling sensitive data.
Q: What role does user experience play in AI trust?
A: User experience is critical. The report found that 74% of users prioritize a company’s trustworthiness over product quality, and 38% prefer AI systems with human oversight, indicating that seamless, transparent interactions build confidence.
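The "human oversight" preference can be sketched as a human-in-the-loop gate: agent actions above a risk threshold are escalated to a person instead of executing automatically. This is an illustrative pattern, not anything prescribed by the report; the agent actions, risk scores, and threshold below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk), e.g. from a classifier

# Illustrative cutoff; in practice this would be tuned per deployment.
RISK_THRESHOLD = 0.5

def dispatch(action: AgentAction, human_approves) -> str:
    """Execute low-risk actions automatically; escalate the rest to a human."""
    if action.risk_score < RISK_THRESHOLD:
        return "executed"
    # High-risk path: a human must explicitly approve before execution.
    if human_approves(action):
        return "executed-after-review"
    return "blocked"

# Demo with stub reviewers standing in for a real review queue.
print(dispatch(AgentAction("update shipping address", 0.2), lambda a: True))
print(dispatch(AgentAction("transfer account ownership", 0.9), lambda a: False))
```

The key design point is that the default for high-risk actions is escalation, not execution: if no reviewer responds, nothing happens.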
Q: What steps can organizations take to improve AI trust?
A: Organizations should strengthen identity layers by adopting advanced security tools, prioritizing user-friendly measures, and implementing a trust-by-design approach. Cross-sector collaboration and transparency in AI operations are also essential.
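One concrete reading of "strengthening identity layers" is a deny-by-default permission check: an AI agent only touches personal data if its identity holds an explicit, scoped grant. This is a minimal sketch of that idea, not Okta's implementation; the agent IDs, scope strings, and grant table are made up for illustration.

```python
# Deny-by-default grant table: each agent identity maps to the scopes it
# has been explicitly given. Anything absent is implicitly forbidden.
GRANTS = {
    "support-agent": {"orders:read"},
    "billing-agent": {"orders:read", "payment:read"},
}

def is_allowed(agent_id: str, scope: str) -> bool:
    """Deny by default: unknown agents and missing scopes both fail."""
    return scope in GRANTS.get(agent_id, set())

print(is_allowed("billing-agent", "payment:read"))   # granted
print(is_allowed("support-agent", "payment:read"))   # no such grant
print(is_allowed("unknown-agent", "orders:read"))    # unknown identity
```

The trust-by-design aspect is the default: a new agent can do nothing until someone deliberately grants it a scope, rather than everything until someone remembers to restrict it.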
Q: Why are modern authentication methods like passkeys important?
A: Passkeys and biometrics offer more secure alternatives to traditional passwords, addressing user concerns about data security. Younger users, in particular, are open to these methods, which can enhance trust in AI systems when integrated effectively.
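The core reason passkeys resist phishing and credential theft is that no reusable secret ever crosses the wire: the server issues a fresh challenge, and the device proves possession of a key by signing it. The toy sketch below illustrates that challenge-response shape only; real passkeys (WebAuthn/FIDO2) use per-site public-key cryptography, whereas the HMAC here is a shared-secret stand-in chosen to keep the example stdlib-only.

```python
import hashlib
import hmac
import secrets

# Stays on the user's device; never sent to the server. (In real WebAuthn
# this would be a private key, with the server holding only the public key.)
device_key = secrets.token_bytes(32)

def sign_challenge(key: bytes, challenge: bytes) -> bytes:
    """Prove possession of `key` by signing the server's one-time challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server side: a fresh random challenge per login attempt means a captured
# response cannot be replayed later.
challenge = secrets.token_bytes(16)
response = sign_challenge(device_key, challenge)
print(hmac.compare_digest(response, sign_challenge(device_key, challenge)))
```

Because the challenge changes on every attempt, there is no static password for a phishing site to harvest, which is the property driving interest in these methods.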