AI avatars are the latest fraud threat: UAE expert warns of rising deepfake scams and explains how to spot them

Technology is advancing faster than the safeguards meant to contain it

Image: Privacy concerns, bias in algorithms, and scams are some of the problematic issues that come up with AI usage. (Shutterstock)

Truly, AI seems to be pushing us all into an existential crisis. More and more, we find ourselves asking: is this real? And increasingly, the answer is no. Take a simple example: a new photo of a celebrity circulating on the internet. It looks like a regular photo.

And if your eyes are even slightly trained to spot AI, you pause. The skin is a little too smooth. The lighting a little too perfect. You realise, that’s not them.

Now bring it closer to home.

You log into a video call. Your CEO appears on screen. The voice sounds right. The face looks real. The instructions are urgent: approve the transfer.

But what if that person isn’t real at all?

As AI-generated avatars become more sophisticated and widely accessible, security experts warn that the technology is advancing faster than the safeguards meant to contain it. Deepfake executives, fake job candidates and fully synthetic identities are no longer futuristic plotlines.

They’re already here.

Rafal Hyps, Chief Executive Officer of Sicuro Group, explains why organisations and individuals should be paying close attention.

When 'seeing is believing' no longer applies

AI avatars can look and sound uncannily real. That realism is exactly what makes them powerful and dangerous.

“AI avatars can bypass identity verification systems that were not built to detect synthetic media,” says Hyps. “Deepfake-enabled fraud has already been used to impersonate executives on video calls and authorise payments. Most organisations have not updated their verification processes to account for this.”

In simple terms, many companies still trust what they see on screen. But verification systems were designed to confirm that a real person is present, not to detect whether that “person” is an AI-generated replica.

That gap is now being exploited.

How fraudsters are using AI avatars

The tools required to create convincing fake identities are no longer limited to elite hackers. They are increasingly accessible and easy to use.

“An attacker can generate a convincing avatar of a senior executive, feed it through a virtual camera during a video call, and pass standard liveness checks,” Hyps explains. “AI tools can also generate fake identity documents with matching selfies and video. These methods are already in use and available.”

That means traditional safeguards, like asking someone to blink, turn their head, or hold up ID, may not be enough.

The risk is not theoretical. Payment approvals, internal authorisations, and sensitive business decisions often happen over video or voice confirmation. If those channels are compromised, the financial and reputational damage can be severe.

Who is at risk?

Some industries are more exposed than others.

Organisations where payment authorisation or sensitive decisions happen over video or verbal confirmation are particularly vulnerable. “Small firms and family offices that rely on trust and informal approvals are particularly exposed. Recruitment is also affected, with fake candidates using AI-generated identities to pass remote video interview rounds,” adds Hyps.

In other words, any setting where identity is assumed rather than rigorously verified is a target.

Recruitment has become a surprising weak point. As remote hiring becomes common, companies may interview candidates they never meet in person. A convincing AI-generated identity could pass early screening stages before anyone realises something is wrong.

Realistic vs cartoon avatars: Is there a difference?

Many people assume that only hyper-realistic avatars pose a threat. Stylised or cartoon-like avatars seem harmless, even playful.

But the risk runs deeper.

The truth is, realistic avatars pose a direct impersonation risk because the output is designed to pass as a real person. “Stylised or cartoon avatars appear harmless, but the platform still requires the same biometric input to generate them. The risk with stylised avatars is not in what they produce but in what data is collected to create them,” adds Hyps.

Even if the final image looks animated, the system may still rely on detailed facial scans and biometric mapping behind the scenes.

And that brings another, longer-term concern: data security.

The biometric data problem

Most AI avatar platforms require users to upload facial images. Some go further.

“Facial images at minimum,” Hyps says when asked what data is collected. “Many platforms also capture facial geometry and expressions to generate the output. Providers generally state that images are analysed and discarded, though this is not a regulated standard.”

That lack of consistent regulation is worrying.

“There have already been major breaches of biometric databases globally, exposing millions of facial recognition records,” he warns. “The reason this matters more than a typical data breach is that compromised biometric data cannot be reset. A stolen password can be changed. A compromised face cannot.”

This is the core difference between biometric data and other personal information. You can update a password. You can cancel a credit card. But you cannot replace your face.

If facial data is leaked or misused, the consequences may follow someone for life.

Profiling without permission

Another hidden risk lies in publicly available images. Many professionals have high-quality headshots on company websites, LinkedIn profiles, or social media accounts.

According to Hyps, those images can be used without consent.

“Yes. Publicly available photos from corporate websites or social media can be used to generate avatars or train facial recognition models without the person's knowledge. That data can be combined with other publicly available information to build detailed profiles.”

In other words, someone does not need to hack your private files to misuse your likeness. A single public photograph may be enough to create a synthetic version of you.

When combined with other online information, such as job title, company, and location, the result can be a highly convincing impersonation.

Can avatars be reverse-engineered?

The risks do not stop at impersonation.

Research suggests that AI-generated avatars may reveal more information than users realise. Hyps explains: “Enough biometric data can be inferred to narrow identity or match against existing databases. The tools to do this exist and are becoming more accessible.”

This means that even a seemingly harmless digital version of yourself could potentially be analysed and matched against facial recognition systems.

As the tools become more widespread, the barrier to misuse becomes lower.


A technology moving faster than regulation

AI-generated avatars offer creative and commercial opportunities, from digital marketing to virtual influencers and personalised content. But as Hyps makes clear, the security framework around them has not kept pace.

The core issue is not the technology itself. It is how quickly it is being adopted compared to how slowly verification systems, regulations, and corporate policies are evolving.

For businesses, that may mean revisiting how identity is confirmed during high-risk decisions. For individuals, it may mean being more cautious about where and how facial data is shared.

In a world where faces can be generated, voices cloned, and identities simulated in real time, one old rule no longer applies:

Seeing is no longer believing.

So what can you actually do?

AI avatars aren’t going away. The technology will only get better, faster and more convincing. But that doesn’t mean individuals and organisations are powerless.

Here are practical steps to reduce your risk:

Don’t rely on video alone

If a request involves money, sensitive data or urgent approvals, verify it through a second channel.
Call the person directly on a known number. Send a follow-up message through an internal system. Build a culture where double-checking is normal, not awkward.

Tighten payment and approval processes

Businesses should avoid single-person approvals for large transfers. Introduce multi-step verification for financial decisions. Informal “quick approvals” over video calls are now a weak spot.
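For teams that automate payment workflows, the dual-control idea above can be sketched in code. The following is a minimal, hypothetical illustration (the class, names, and threshold are assumptions for this sketch, not any real system): a large transfer only executes once two distinct approvers, confirmed through two separate channels, have signed off, so a single compromised video call cannot authorise it alone.

```python
# Minimal sketch of dual-control payment approval (hypothetical example).
# Transfers at or above a threshold need sign-off from two *different*
# approvers, each confirmed through a separate, pre-registered channel.

THRESHOLD = 10_000  # assumed value; set per company policy


class TransferRequest:
    def __init__(self, amount: float, beneficiary: str):
        self.amount = amount
        self.beneficiary = beneficiary
        self.approvals: dict[str, str] = {}  # approver -> channel used

    def approve(self, approver: str, channel: str) -> None:
        # The same person approving twice does not count as a second approval.
        self.approvals[approver] = channel

    def is_authorised(self) -> bool:
        if self.amount < THRESHOLD:
            return len(self.approvals) >= 1
        # Dual control: two distinct approvers AND two distinct channels,
        # so one deepfaked video call cannot authorise a large transfer.
        channels = set(self.approvals.values())
        return len(self.approvals) >= 2 and len(channels) >= 2


req = TransferRequest(50_000, "ACME Supplies")
req.approve("cfo", channel="video-call")
print(req.is_authorised())  # False: only one approver so far
req.approve("controller", channel="phone-callback")
print(req.is_authorised())  # True: two approvers via two channels
```

The design choice worth noting is that approvals are keyed by approver and checked for distinct channels, which mirrors the "second channel" advice above: even a perfect impersonation on video still needs an independent confirmation elsewhere.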

Update identity verification systems

Traditional liveness checks may not be enough. Organisations should review whether their verification systems are equipped to detect synthetic media, not just confirm movement on camera.

Be cautious with facial data

Think carefully before uploading your face to new AI avatar platforms. Understand what data is being collected and how it is stored. Remember: a password can be changed. Your face cannot.

Limit public exposure where possible

High-resolution headshots and detailed public profiles make impersonation easier. While you don’t need to disappear from the internet, be mindful of how much information is openly accessible.

Train teams to spot red flags

Unusual urgency. Slight audio delays. Subtle visual glitches. Behaviour that feels “off.” Encourage employees to trust their instincts and escalate concerns.

How to spot a 2026 deepfake

While AI has become incredibly realistic, it still struggles with the physical "cost" of rendering human biology in real-time. Use these three "liveness tests" if you suspect a caller isn't real:

The 'side profile' test

Most AI avatar models are trained on front-facing data (LinkedIn photos, social media videos).

  • Action: Ask the person to turn their head 90 degrees to the side.

  • What to look for: Watch the jawline and the ears. In a deepfake, the 'digital mask' will often glitch, blur, or detach from the neck when viewed from the side.

The 'hand occlusion' test

Real-time AI struggles to render two complex objects interacting (like a hand moving in front of a face).

  • Action: Ask the person to wave their hand slowly in front of their face or scratch their nose.

  • What to look for: The AI avatar will often "flicker" or the hand will appear to pass behind the face pixels rather than in front of them.

The 'light and shadow' check

  • Action: Ask the caller to move their phone or laptop light source, or simply watch how their glasses react.

  • What to look for: Deepfakes often have "baked-in" lighting. If the room light changes but the shadows on the face stay exactly the same, it’s a synthetic image.

Lakshana is an entertainment and lifestyle journalist with over a decade of experience. She covers a wide range of stories—from community and health to mental health and inspiring people features. A passionate K-pop enthusiast, she also enjoys exploring the cultural impact of music and fandoms through her writing.
