Data leaders scale AI agents even as trust gaps halt sensitive deployments

Dubai: Data leaders across global enterprises are deploying AI agents into operations at a pace that far exceeds their confidence in the technology, according to the Global AI Confessions Report, built on new Harris Poll data for Dataiku.
The survey of 800 senior executives shows that 86% now rely on AI agents to handle daily business workflows. Some 42% said they have tied dozens of internal processes to autonomous agents, embedding them in systems that touch data movement, operational automation and decision queues. Yet 72% admitted they would approve critical decisions made by those agents even when they could not review the underlying rationale.
One executive sentiment cut sharper than any of the charts: "I Let AI Agents Make Critical Business Decisions, Even Though I Don't Fully Trust Them." The statement appears word-for-word in the report and captures a wider unease among senior data leaders who now find themselves accountable for systems they cannot fully explain.
High-stakes approvals without reasoning are now common. A total of 72% said they do not insist on explainability from their agents, while just 19% said they always require AI to “show its work” before approval. Only 11% believe autonomous agents are ready to operate in sensitive areas such as regulatory compliance, hiring and ethics, with respondents citing fear of opaque outputs and weak audit trails.
Some 95% of Chief Data Officers said they could not trace agent decisions end-to-end for regulators if pressed today; only 5% said they had achieved full traceability across production deployments.
“When it works, success is claimed quickly. When it fails, the blame is lonelier,” said one respondent quoted in the report. CDOs take 46% of the credit for AI successes but inherit 56% of the blame when agent-driven systems misfire.
More than half of organisations (52%) have delayed AI deployments over concerns about reasoning opacity, workforce trust and integration snags. Some 58% said fewer than half of their AI agent pilots survive past the proof-of-concept stage.
The data also shows that leadership miscalculations widen the gap: 68% of C-level executives overestimate the accuracy of their agents, and 73% underestimate production timelines.
A total of 59% said their teams have faced operational disruptions in the past 12 months due to hallucinations, logic breakdowns or flawed agent outputs. Nearly 75% of data leaders said trust is their biggest blocker. Nearly two in five (38%) said they expect agent accuracy to exceed 80%, even though many agents fall below that threshold in live pilots.
Boardrooms and data chiefs converge on one point: 91% believe internal or “shadow AI” tools are active in their organisations, often without governance visibility or internal review. Data leaders say this is raising execution risk faster than executive oversight can keep up.
AI agents are scaling into real companies and real jobs, but confidence, traceability and explainability remain gating factors for mission-critical and regulated deployments. The technology is accelerating; the teams deploying it say trust is not.
© Al Nisr Publishing LLC 2025. All rights reserved.