Experts warn AI decisions must remain transparent as adoption accelerates

Dubai: Artificial intelligence is already playing a major role in how customers interact with banks, telecom providers and airlines in the UAE, often operating quietly behind the scenes while human agents step in only when necessary.
Industry experts say many everyday digital services, from checking account balances to tracking parcels or rebooking flights, are already powered by AI systems designed to process high volumes of routine requests quickly.
Guru Sethupathy, General Manager - AI Governance at Optro, said AI has been embedded in low-risk interactions for years and is gradually expanding into more complex workflows.
“The reality is they already are. It’s just not always in ways they notice,” Sethupathy said.
“Where the real innovation is happening, though, is behind the scenes. AI is transforming back-office operations, coding, engineering, and cybersecurity. These are high-value areas where the risk profile is more manageable. That’s actually a positive development, because while innovation accelerates in the backend, governance frameworks are maturing just as quickly.”
— Guru Sethupathy, General Manager - AI Governance at Optro
He explained that most AI deployments today focus on transactional services, where automation improves efficiency without introducing major risks to customers.
Where the technology is advancing most rapidly is behind the scenes, where companies use AI to strengthen operations such as engineering, cybersecurity and software development.
“That’s actually a positive development, because while innovation accelerates in the backend, governance frameworks are maturing just as quickly,” he said.
Technology leaders say UAE companies have been among the early adopters of AI-driven customer services.
Zane Ulhaq, Head of MENA, Endava, noted that AI assistants are already widely used in public services and financial institutions across the region.
“In many cases they already are,” Ulhaq said.
He pointed to examples such as AI assistants used in government utilities and banking platforms to handle millions of routine requests.
“In many cases, they already are. The UAE was among the first globally to embed AI assistants into public services, with DEWA’s Rammas handling millions of customer interactions. Banks such as Emirates NBD were also early adopters with AI-powered virtual assistants to manage routine queries.”
— Zane Ulhaq, Head of MENA, Endava
Customers have generally responded positively to automated services when they provide quick answers and eliminate long waiting times.
“In the Gulf’s fast-paced markets, people often want answers instantly, not a call back tomorrow,” Ulhaq said.
Transparency remains critical, however, because misleading customers about whether they are interacting with AI can quickly erode trust.
“Customers don’t mind dealing with AI, but they will not tolerate being misled,” he said.
Rapid adoption of AI has raised concerns about explainability, especially when automated decisions affect financial or professional outcomes.
Kurt Muehmel, Head of AI Strategy at Dataiku, said AI systems are already embedded deeply in enterprise operations across the UAE.
“Our research shows that 65% of UAE CIOs say AI agents are already embedded in business-critical workflows,” Muehmel said.
The challenge emerges when organisations cannot clearly explain how those systems arrive at decisions.
“The risk is straightforward. You get a ‘no’ and nobody can tell you exactly why,” he said.
“What is even more interesting is that the UAE actually leads the world on one metric that should matter to every consumer: 63% of UAE CIOs say an AI explainability failure is very likely or certain to trigger a trust crisis – the highest figure of any country we surveyed. So, the executives deploying this technology are themselves telling us the risk is real.”
— Kurt Muehmel, Head of AI Strategy at Dataiku
That lack of clarity could become problematic in areas such as lending, insurance or hiring, where decisions carry legal and financial consequences.
Data governance also plays a critical role in maintaining trust.
Muehmel warned that the largest risks often come from how data is used rather than the algorithms themselves.
“The biggest risk to someone getting a loan decision or an insurance claim is not that the AI got the math wrong. It is that the AI was trained on data it should not have used,” he said.
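The kind of guardrail Muehmel alludes to can be as simple as checking training data against an approved-fields list before any model sees it. A minimal, purely illustrative sketch (the field names and the `APPROVED_FEATURES` set are hypothetical, not any vendor's actual policy):

```python
# Illustrative data-governance check: refuse to train a lending model on
# fields that were never approved for use in credit decisions.
APPROVED_FEATURES = {"income", "loan_amount", "credit_history_length"}

def validate_training_columns(columns):
    """Return the columns unchanged, or raise if an unapproved field slipped in."""
    unapproved = set(columns) - APPROVED_FEATURES
    if unapproved:
        raise ValueError(f"Unapproved training fields: {sorted(unapproved)}")
    return list(columns)

validate_training_columns(["income", "loan_amount"])       # passes quietly
# validate_training_columns(["income", "nationality"])     # would raise ValueError
```

The point is not the three lines of logic but where they sit: before training, so the question “was this model built on data it should not have used?” has an auditable answer.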
Companies deploying AI are increasingly building safeguards to ensure automated decisions remain accountable and challengeable.
Levent Ergin, Chief Strategist for Agentic AI, Regulatory Compliance & Sustainability at Informatica from Salesforce, said responsible deployment requires a structure similar to that of other high-risk professions.
“Before deployment, models should be stress-tested in environments that mirror reality, using controlled data and edge-case scenarios. You wouldn’t allow an unvetted employee to approve financial transactions; similarly, data feeding AI systems must be validated.”
— Levent Ergin, Chief Strategist for Agentic AI, Regulatory Compliance & Sustainability at Informatica from Salesforce
“A pilot isn’t handed the controls without simulation training, supervision and clear emergency protocols. The same logic should apply to AI,” Ergin said.
Many organisations already rely on human-in-the-loop oversight models, in which employees review AI outputs before final decisions are made.
Monitoring systems also track inputs and outcomes to ensure organisations can audit decisions and intervene if systems behave unexpectedly.
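The oversight pattern described above — automate the clear cases, route ambiguous ones to a person, and log every input and outcome — can be sketched in a few lines. This is a hypothetical, minimal example; the thresholds and names like `review_queue` are illustrative, not drawn from any company's system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    applicant_id: str
    model_score: float        # e.g. a predicted risk score between 0.0 and 1.0
    outcome: str = "pending"

@dataclass
class HumanInTheLoopGate:
    """Automate confident cases, queue ambiguous ones for review, audit all of it."""
    auto_approve_below: float = 0.2   # illustrative confidence thresholds
    auto_reject_above: float = 0.8
    review_queue: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def process(self, d: Decision) -> Decision:
        if d.model_score < self.auto_approve_below:
            d.outcome = "approved"
        elif d.model_score > self.auto_reject_above:
            d.outcome = "rejected"
        else:
            d.outcome = "needs_human_review"
            self.review_queue.append(d)
        # Record every input and outcome so decisions can be audited later.
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "applicant": d.applicant_id,
            "score": d.model_score,
            "outcome": d.outcome,
        })
        return d

gate = HumanInTheLoopGate()
gate.process(Decision("A-1", 0.1))   # clear case: handled automatically
gate.process(Decision("A-2", 0.5))   # ambiguous: lands in the review queue
```

The design choice the experts describe is visible here: the AI never issues a final verdict in the grey zone, and the audit trail exists regardless of which path a decision took.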
Continuous supervision remains essential because AI systems evolve as they process new data.
“Treating AI like ordinary software underestimates it,” Ergin said. “In practice it behaves more like a living system that demands constant oversight.”
Growing competition is pushing companies to adopt AI rapidly, a trend that could create risks if governance fails to keep pace.
Sethupathy said leadership pressure to demonstrate results quickly can encourage organisations to move faster than their risk frameworks allow.
“Yes, that risk is real,” he said.
The challenge lies in balancing innovation with oversight. When deployment strategies and governance evolve together, companies can expand AI safely while protecting consumers.
Without that alignment, rapid rollout can expose organisations to reputational damage, regulatory scrutiny and operational disruptions.
The industry consensus is that the next phase of AI adoption will likely focus on accountability, transparency and trust, particularly in sectors such as banking, healthcare and insurance where automated decisions can directly affect people’s lives.