Experts warn AI decisions must remain transparent as adoption accelerates

Dubai: Artificial intelligence is already playing a major role in how customers interact with banks, telecom providers and airlines in the UAE, often operating quietly behind the scenes while human agents step in only when necessary.
Industry experts say many everyday digital services, from checking account balances to tracking parcels or rebooking flights, are already powered by AI systems designed to process high volumes of routine requests quickly.
Guru Sethupathy, General Manager of AI Governance at Optro, said AI has been embedded in low-risk interactions for years and is gradually expanding into more complex workflows.
“The reality is they already are. It’s just not always in ways they notice,” Sethupathy said.
Customers have generally responded positively to automated services when they provide quick answers and eliminate long waiting times.
“In the Gulf’s fast-paced markets, people often want answers instantly, not a call back tomorrow,” Ulhaq said.
Transparency remains critical, however, because misleading customers about whether they are interacting with AI can quickly erode trust.
“Customers don’t mind dealing with AI, but they will not tolerate being misled,” he said.
Rapid adoption of AI has raised concerns about explainability, especially when automated decisions affect financial or professional outcomes.
Kurt Muehmel, Head of AI Strategy at Dataiku, said AI systems are already embedded deeply in enterprise operations across the UAE.
“Our research shows that 65% of UAE CIOs say AI agents are already embedded in business-critical workflows,” Muehmel said.
The challenge emerges when organisations cannot clearly explain how those systems arrive at decisions.
“The risk is straightforward. You get a ‘no’ and nobody can tell you exactly why,” he said.
That lack of clarity could become problematic in areas such as lending, insurance or hiring, where decisions carry legal and financial consequences.
Data governance also plays a critical role in maintaining trust.
Muehmel warned that the largest risks often come from how data is used rather than the algorithms themselves.
“The biggest risk to someone getting a loan decision or an insurance claim is not that the AI got the math wrong. It is that the AI was trained on data it should not have used,” he said.
Companies deploying AI are increasingly building safeguards to ensure automated decisions remain accountable and challengeable.
Levent Ergin, Chief Strategist for Agentic AI, Regulatory Compliance & Sustainability at Informatica from Salesforce, said responsible deployment requires a structure similar to that of other high-risk professions.
“A pilot isn’t handed the controls without simulation training, supervision and clear emergency protocols. The same logic should apply to AI,” Ergin said.
Many organisations already rely on human-in-the-loop oversight models, in which employees review AI outputs before final decisions are made.
Monitoring systems also track inputs and outcomes to ensure organisations can audit decisions and intervene if systems behave unexpectedly.
Continuous supervision remains essential because AI systems evolve as they process new data.
“Treating AI like ordinary software underestimates it,” Ergin said. “In practice it behaves more like a living system that demands constant oversight.”
Growing competition is pushing companies to adopt AI rapidly, a trend that could create risks if governance fails to keep pace.
Sethupathy said leadership pressure to demonstrate results quickly can encourage organisations to move faster than their risk frameworks allow.
“Yes, that risk is real,” he said.
The challenge lies in balancing innovation with oversight. When deployment strategies and governance evolve together, companies can expand AI safely while protecting consumers.
Without that alignment, rapid rollout can expose organisations to reputational damage, regulatory scrutiny and operational disruptions.
The industry consensus is that the next phase of AI adoption will likely focus on accountability, transparency and trust, particularly in sectors such as banking, healthcare and insurance where automated decisions can directly affect people’s lives.
© Al Nisr Publishing LLC 2026. All rights reserved.