AI is already deciding your bank calls, flights and bills in the UAE

Experts warn AI decisions must remain transparent as adoption accelerates

AI is rapidly reshaping the world's development trajectory, emerging as a powerful catalyst for inclusive growth, innovation, and global competitiveness.

Dubai: Artificial intelligence is already playing a major role in how customers interact with banks, telecom providers and airlines in the UAE, often operating quietly behind the scenes while human agents step in only when necessary.

Industry experts say many everyday digital services, from checking account balances to tracking parcels or rebooking flights, are already powered by AI systems designed to process high volumes of routine requests quickly.

Guru Sethupathy, General Manager - AI Governance at Optro, said AI has been embedded in low-risk interactions for years and is gradually expanding into more complex workflows.

“The reality is they already are. It’s just not always in ways they notice,” Sethupathy said.


Customers have generally responded positively to automated services when they provide quick answers and eliminate long waiting times.

“In the Gulf’s fast-paced markets, people often want answers instantly, not a call back tomorrow,” Ulhaq said.

Transparency remains critical, however, because misleading customers about whether they are interacting with AI can quickly erode trust.

“Customers don’t mind dealing with AI, but they will not tolerate being misled,” he said.

Explainability becomes a major concern

Rapid adoption of AI has raised concerns about explainability, especially when automated decisions affect financial or professional outcomes.

Kurt Muehmel, Head of AI Strategy at Dataiku, said AI systems are already embedded deeply in enterprise operations across the UAE.

“Our research shows that 65% of UAE CIOs say AI agents are already embedded in business-critical workflows,” Muehmel said.

The challenge emerges when organisations cannot clearly explain how those systems arrive at decisions.

“The risk is straightforward. You get a ‘no’ and nobody can tell you exactly why,” he said.

That lack of clarity could become problematic in areas such as lending, insurance or hiring, where decisions carry legal and financial consequences.

Data governance also plays a critical role in maintaining trust.

Muehmel warned that the largest risks often come from how data is used rather than the algorithms themselves.

“The biggest risk to someone getting a loan decision or an insurance claim is not that the AI got the math wrong. It is that the AI was trained on data it should not have used,” he said.

Safeguards designed to protect customers

Companies deploying AI are increasingly building safeguards to ensure automated decisions remain accountable and challengeable.

Levent Ergin, Chief Strategist for Agentic AI, Regulatory Compliance & Sustainability at Informatica from Salesforce, said responsible deployment requires a structure similar to that of other high-risk professions.

“A pilot isn’t handed the controls without simulation training, supervision and clear emergency protocols. The same logic should apply to AI,” Ergin said.

Many organisations already rely on human-in-the-loop oversight models, in which employees review AI outputs before final decisions are made.

Monitoring systems also track inputs and outcomes to ensure organisations can audit decisions and intervene if systems behave unexpectedly.

Continuous supervision remains essential because AI systems evolve as they process new data.

“Treating AI like ordinary software underestimates it,” Ergin said. “In practice it behaves more like a living system that demands constant oversight.”

Pressure on companies to deploy AI quickly

Growing competition is pushing companies to adopt AI rapidly, a trend that could create risks if governance fails to keep pace.

Sethupathy said leadership pressure to demonstrate results quickly can encourage organisations to move faster than their risk frameworks allow.

“Yes, that risk is real,” he said.

The challenge lies in balancing innovation with oversight. When deployment strategies and governance evolve together, companies can expand AI safely while protecting consumers.

Without that alignment, rapid rollout can expose organisations to reputational damage, regulatory scrutiny and operational disruptions.

The industry consensus is that the next phase of AI adoption will likely focus on accountability, transparency and trust, particularly in sectors such as banking, healthcare and insurance where automated decisions can directly affect people’s lives.

Nivetha Dayanand is Assistant Business Editor at Gulf News, where she spends her days unpacking money, markets, aviation, and the big shifts shaping life in the Gulf. Before returning to Gulf News, she launched Finance Middle East, complete with a podcast and video series. Her reporting has taken her from breaking spot news to long-form features and high-profile interviews. Nivetha has interviewed Prince Khaled bin Alwaleed Al Saud, Indian ministers Hardeep Singh Puri and N. Chandrababu Naidu, IMF’s Jihad Azour, and a long list of CEOs, regulators, and founders who are reshaping the region’s economy. An Erasmus Mundus journalism alum, Nivetha has shared classrooms and newsrooms with journalists from more than 40 countries, which probably explains her weakness for data, context, and a good follow-up question. When she is away from her keyboard (AFK), you are most likely to find her at the gym with an Eminem playlist, bingeing One Piece, or exploring games on her PS5.
