
Responsible AI Pulse survey

How can responsible AI bridge the gap between investment and impact?


AI pays off when it’s embedded responsibly: greater profits, happier employees, and fewer costly mistakes.


In brief

  • Companies with oversight and real-time monitoring are turning AI from a risk into a growth engine.
  • Significant financial loss from AI risk is real and here to stay for those that don’t have the right controls in place.
  • Leadership blind spots can leave firms exposed; it is essential to have oversight of citizen developers.

The companies outperforming with artificial intelligence (AI) aren’t just building better models — they’re building smarter guardrails that let them seize outsized market opportunities. The latest EY Global Responsible AI Pulse survey reveals that organizations embracing responsible AI — through clear principles, robust execution, and strong governance — are pulling ahead in the metrics where AI-related gains have been most elusive: revenue growth, cost savings, and employee satisfaction. These gains aren’t marginal; they’re the difference between AI as a cost center and AI as a competitive advantage.

Nearly every company in the survey has already suffered financial losses from AI-related incidents, with average damages conservatively topping US$4.4 million. But those with governance measures like real-time monitoring and oversight committees are seeing far fewer risks — and stronger returns. 

Responsible AI isn’t a compliance exercise. It’s a performance lever — and the latest data proves it.

The responsible AI journey: companies are taking a comprehensive approach

Responsible AI is best understood as a journey — one that moves through three stages. First comes communication, where organizations articulate a clear set of responsible AI principles internally and externally. Next is execution, when those principles are translated from words to action through controls, key performance indicators and workforce training. Finally comes governance, the oversight needed to help ensure actions and principles stay aligned, through measures such as committees and independent audits.

 

Most companies have started this journey. The second wave of the EY Global Responsible AI Pulse survey asked C-suite leaders about responsible AI adoption steps across these three stages. On average, companies have implemented seven out of 10 measures.

 

Adoption is even higher in sectors such as technology, media and entertainment, and telecommunications (TMT), where a higher dependency on technology and data for delivering core services makes responsible AI all the more critical. Organizations in these sectors are more likely than others to communicate responsible AI principles to external stakeholders (80% vs. 71%). They are also further ahead on governance: 74% have established an internal or external committee to oversee adherence to these principles (vs. 61% in other industries), and 72% conduct independent assessments of responsible AI governance and control practices (vs. 61%).


While there is a drop-off at each stage of the responsible AI journey, the gap is minimal, declining only a couple of percentage points on average from one step to the next. And where measures haven’t yet been implemented, companies overwhelmingly say they intend to act. Across all responsible AI measures, fewer than 2% of respondents report that their organization has no plans to implement them.

This progress matters. Responsible AI can’t be achieved through principles alone — it requires an “all of the above” approach. Clear articulation of principles, robust controls and strong governance are all essential to ensuring responsible AI moves from words to reality.

Responsible AI is the missing link

AI has already delivered big wins for many organizations. Eight in 10 respondents report improvements in efficiency and productivity — the primary focus of many early use cases. Nearly as many say AI has boosted innovation and technology adoption — helping to accelerate activities in which generative AI excels, such as ideation, discovery, research and development, and rapid prototyping. About three in four say it has improved their ability to understand customers and respond quickly to shifting market conditions.


However, in three critical areas – employee satisfaction, revenue growth and cost savings – AI has not delivered similar performance improvement. According to the EY AI Sentiment survey, half of citizens are worried about job loss due to AI and many remain hesitant when it comes to AI’s role in workplace decision-making. Translating AI investments into tangible improvements on the profit and loss statement also remains elusive for many firms.

Cathy Cobey, EY Global Responsible AI Leader for Assurance, explains: "Organizations struggle to achieve positive ROI on their AI investments due to the complexities of integrating AI into existing processes, which demand re-engineering, upskilling, and continuous data flow investments. Additionally, challenges in legacy technology integration and the need for evolving governance frameworks hinder their ability to realize tangible financial benefits."

When we dug deeper into the data, however, something remarkable emerged: companies that have embraced responsible AI are breaking through where others are stalling. Organizations that are adopting governance measures — specifically real-time monitoring and oversight committees — are far more likely to report improvements in revenue growth, employee satisfaction and cost savings, the exact areas where most are struggling to report a return.


This link suggests a symbiotic relationship. Companies that have moved further along the responsible AI journey are the ones seeing improvements in the areas that need the biggest boost, and it’s not hard to see why. Anxious employees may be reassured by a public commitment to responsible AI from their employer. Communicating a responsible approach can build brand reputation and customer loyalty — ultimately driving revenue growth. And robust governance can help prevent costly technical and ethical breaches, as well as reduce recruitment and retention costs — benefits that ultimately flow through to the bottom line and boost cost savings.

For business leaders, the message is clear — increase the return on your AI investments by moving further along the responsible AI journey. 

The price tag of ignoring the risks

While responsible AI adoption drives benefits, the converse is also true: neglecting it can come at a steep cost. Almost every company in our survey (99%) reported financial losses from AI-related risks, and 64% experienced losses exceeding US$1 million. On average, the financial loss to companies that have experienced risks is conservatively estimated at US$4.4 million.1 That’s an estimated total loss of US$4.3 billion across the 975 respondents in our sample.  
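As a rough sanity check on those figures (our arithmetic, not the survey’s), the stated total is consistent with the reported average once rounding is taken into account:

```latex
975 \times \text{US\$4.4 million} \approx \text{US\$4.29 billion} \approx \text{US\$4.3 billion}
```

Restricting the average to the 99% of respondents that actually reported losses (roughly 965 firms) gives about US$4.25 billion, which rounds to essentially the same total.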

The most common risks to have negatively affected organizations are non-compliance with AI regulations (57%), negative impacts on sustainability goals (55%) and bias in outputs (53%). Issues such as explainability, legal liability and reputational damage have so far been less prominent, but their significance is expected to grow as AI is deployed more visibly and at scale.

Encouragingly, responsible AI is already linked to fewer negative impacts. For example, organizations that have defined a clear set of responsible AI principles have experienced 30% fewer risks than those that haven’t.

C-suite blind spots leave companies exposed

Despite the financial stakes, it’s clear that many C-suite leaders don’t know how to apply the right controls to mitigate AI risks. When asked to match the appropriate controls against five AI-related risks, only 12% of respondents got them all right. 

As might be expected, CIOs and CTOs performed best — yet even here, only about a quarter answered correctly across all five risks.

Chief AI Officers (CAIOs) and Chief Digital Officers (CDOs) fared only slightly better than average (15%), likely reflecting backgrounds grounded more in data science, academia and model development than in traditional technology risk management. Consequently, they may have less experience managing technology-related risks than their CIO and CTO counterparts.

Concerningly, CROs — the leaders ultimately responsible for AI risks — performed slightly below average, at 11%. At the bottom end of the spectrum, CMOs, COOs and CEOs performed worst (3%, 6% and 6% respectively).


This lack of awareness has consequences. Firms that have lost more than US$10 million due to AI risks report having, on average, 4.5 of the 10 correct controls in place, while firms that have lost US$1 million or less have 6.4. This highlights a clear need for targeted upskilling of the C-suite — particularly as the financial and reputational costs of AI risks continue to rise.

Challenges ahead: Agentic AI and citizen developers

The governance challenge doesn’t end with today’s models. As agentic AI becomes more prevalent in the workplace and employees experiment with citizen development, the risks — and the need for thoughtful controls — are only set to grow.

Encouragingly, most organizations are already implementing governance policies to manage these risks. Eight of the 10 agentic AI governance measures we identified are being implemented by more than 75% of respondents, including continuous monitoring (85%) and incident escalation processes for unexpected agentic behaviors (80%). Although organizations have made a strong start, there are still challenges in designing effective controls that can adequately oversee systems that operate continuously, adapt rapidly and require minimal human intervention.


“In the age of agentic AI, where systems operate with increasing autonomy and complexity, organizations must prioritize real-time oversight: continuous monitoring and rapid response capabilities are essential to navigate the intricacies of these technologies. The autonomous nature of agentic AI introduces new risks that can escalate quickly, making robust controls necessary to prevent costly disruptions and ensure system integrity,” notes Sinclair Schuller, EY Americas Responsible AI Leader.
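To make measures like continuous monitoring and incident escalation more concrete, here is a minimal, illustrative sketch of how such a control might be wired up in practice. It is our own example, not drawn from the survey: the `AgentEvent` structure, approved-action list, confidence threshold and `escalate` helper are all hypothetical stand-ins for whatever a real governance stack would use.

```python
# Illustrative sketch only: a hypothetical monitoring check with an
# escalation hook for unexpected agent behavior.
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent-governance")

# Hypothetical scope of actions this agent is approved to take.
APPROVED_ACTIONS = frozenset({"draft_reply", "summarize_document"})

@dataclass
class AgentEvent:
    agent_id: str
    action: str        # e.g., "draft_reply", "execute_trade"
    confidence: float  # model-reported confidence, 0.0 to 1.0

def escalate(event: AgentEvent, reason: str) -> None:
    """Stand-in for an incident escalation process: in practice this would
    pause the agent and open a ticket for human review."""
    logger.warning("ESCALATED %s: action=%s (%s)",
                   event.agent_id, event.action, reason)

def monitor(event: AgentEvent, confidence_floor: float = 0.7) -> bool:
    """Return True if the event is within policy; escalate otherwise."""
    if event.action not in APPROVED_ACTIONS:
        escalate(event, "action outside approved scope")
        return False
    if event.confidence < confidence_floor:
        escalate(event, f"low confidence ({event.confidence:.2f})")
        return False
    logger.info("%s: action=%s within policy", event.agent_id, event.action)
    return True

# Example: an out-of-scope action is blocked and escalated for review.
monitor(AgentEvent(agent_id="agent-17", action="execute_trade", confidence=0.92))
```

In a real deployment, the escalation path would pause the agent and feed an incident-management workflow for human review, in line with the escalation processes that 80% of respondents report implementing.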

One area that is lagging is preparation for a hybrid AI-human workforce: only a third (32%) say their HR team is developing strategies for managing such environments. Still, given the nascent nature of agentic AI, this figure can be seen as promising, since it indicates that companies are starting to think strategically about the longer-term implications of the technology.

Citizen developers – opportunity or blind spot?

The rise of “citizen developers” — employees using no-code or low-code tools to create their own AI agents — presents a more complex challenge.

A third (32%) of companies have chosen to prohibit the practice outright. Among the rest, tolerance ranges from tightly limited use cases to active encouragement, with some businesses even promoting best practices across teams.


What should concern leaders most is the inconsistency between stated policy and real-world oversight. Among organizations that allow citizen developers, only 60% have formal, organization-wide frameworks to help ensure alignment with responsible AI principles, and just half have high visibility into actual activity. Even among companies that prohibit the practice, 12% admit they lack visibility into what employees are actually doing, creating a governance gap where shadow AI development can flourish undetected.

The emergence of agentic AI and citizen developers underlines a central theme of our findings: responsible AI must evolve in step with technology and workplace behaviors. Clear frameworks, proactive oversight and leadership awareness are critical if organizations want to seize the benefits of these trends without compounding their risks.

Implications for business leaders 

Here are three actions C-suite leaders can take to strengthen their AI governance and controls, and boost business outcomes:

1. Adopt a comprehensive approach to responsible AI

The symbiotic relationship between responsible AI adoption and AI-driven performance improvements carries a clear message for business leaders: to get more value from your AI investments — particularly in crucial areas like financial performance and employee satisfaction — move further along the responsible AI journey. A comprehensive approach includes articulating and communicating your responsible AI principles, executing on them with controls, KPIs and training, and establishing effective governance.

2. Fill knowledge gaps in the C-suite 

AI affects every facet of your organization. It’s critical for leaders across the C-suite to understand both its potential and its risks — and the controls needed to mitigate them. Yet our survey reveals significant shortcomings in awareness of which controls are most appropriate.

How does your C-suite compare? Identify where the biggest gaps are and fill them with targeted training. At a minimum, ensure the roles closest to AI risks are well versed in the appropriate safeguards.

3. Get ahead of emerging agentic AI and citizen developer risks 

Agentic AI promises powerful new capabilities, but it also brings significant risks. It’s critical for organizations to identify these risks, adopt appropriate policies, and ensure effective governance and monitoring are in place.

There are gaps between companies’ stated policies and their insight into whether employees are developing their own AI agents. Understand the costs and benefits for your organization before establishing your position. And — regardless of whether you ban, permit, or encourage the practice — make sure your policy is backed up with real insight into what employees are actually doing. 

Summary

As AI becomes more deeply embedded in business operations, leaders face a clear choice: treat responsible AI as a box-ticking exercise or as a strategic enabler. Those that take the latter path are already proving that robust governance, clear principles, and informed leadership can turn potential risks into competitive advantage. The next wave of developments — from agentic models to citizen development — will only raise the stakes. Success will belong to organizations that act now to align responsibility with performance.
