Grok deepfakes: How UAE residents can protect themselves from AI image misuse

Experts explain how to reduce AI image risks and what to do if photos are misused

Nivetha Dayanand, Assistant Business Editor

Dubai: The global backlash against AI tools that manipulate real images has pushed digital safety into everyday conversation. After Malaysia and Indonesia blocked access to Elon Musk’s Grok over the creation of sexualised deepfakes, attention has shifted closer to home. The question many UAE residents are asking is no longer whether this can happen, but how to stay protected if it does.

Experts say artificial intelligence can now take a single clear photo and turn it into something deeply harmful within seconds, with consequences that spread faster than facts.

“The risks are immediate and personal,” said Talal Shaikh, associate professor of AI and robotics at Heriot-Watt University Dubai. “AI can place anyone’s face into compromising content within seconds. In the UAE’s close-knit communities, fabricated imagery can damage careers and relationships before the truth emerges.”

Shaikh pointed to a rise in sextortion cases where criminals fabricate intimate images and demand money or silence. Such acts fall under Federal Decree-Law No. 34 of 2021, which criminalises digital blackmail and image manipulation intended to harm.

"Parents should share children's photos only in private groups, avoiding school uniforms or location identifiers. The UAE's Wadeema's Law reinforces heightened privacy protections for minors, and our online behaviour should reflect that responsibility."
Talal Shaikh, Associate Professor (AI and Robotics), Heriot-Watt University Dubai

Even harmless images are vulnerable. Family gatherings, holiday photos or professional headshots can be repurposed into fake accounts, romance scams or identity theft. “A single clear photograph is enough to create thousands of manipulated images,” Shaikh said.

Elizabeth Rayment, director at YMM Your Mind Media, said visibility itself has become a risk factor. “Corporate portraits or family photos can be weaponised into convincing deepfakes. The damage is reputational, financial and deeply personal.”

Steps people can take right now

Experts stress that small changes can significantly reduce exposure. Shaikh advises tightening privacy settings across platforms, limiting who can tag or download images, and keeping profiles private where possible.

Watermarking images adds another layer of friction. Free tools such as Canva or Snapseed allow users to place semi-transparent text over photos, making misuse harder. Reverse image searches on Google can help monitor whether pictures are being reused elsewhere.
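For readers comfortable with a short script, the same kind of semi-transparent text overlay can be produced in Python using the open-source Pillow library. The sketch below is illustrative only; the file names and watermark text are placeholders.

```python
from PIL import Image, ImageDraw, ImageFont

# Assumption: "photo.jpg" stands in for the image you want to protect.
base = Image.open("photo.jpg").convert("RGBA")

# Build a transparent overlay the same size as the photo.
overlay = Image.new("RGBA", base.size, (255, 255, 255, 0))
draw = ImageDraw.Draw(overlay)

# Draw semi-transparent watermark text (alpha 120 out of 255).
font = ImageFont.load_default()
draw.text((base.size[0] // 4, base.size[1] // 2),
          "PRIVATE - DO NOT REUSE", font=font, fill=(255, 255, 255, 120))

# Merge the overlay onto the photo and save a shareable copy.
watermarked = Image.alpha_composite(base, overlay).convert("RGB")
watermarked.save("photo_watermarked.jpg")
```

Apps such as Canva or Snapseed achieve the same result without any scripting; the point is to make clean, unmarked copies of your photos harder to obtain.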

Families face added challenges. Shaikh urged parents to rethink sharing high-resolution images of children in public spaces. “Every image becomes potential training data. Avoid school uniforms, visible locations and open timelines.”

Rayment added that removing location metadata and separating public and private accounts can help reduce unwanted attention, especially among content creators and professionals with a public presence.
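For those who prefer to automate that step, stripping a photo's metadata, including any embedded GPS coordinates, takes only a few lines of Python with the Pillow library. The sketch below is illustrative only; the file names are placeholders.

```python
from PIL import Image

# Assumption: "holiday.jpg" stands in for any photo about to be shared.
original = Image.open("holiday.jpg")

# Copy only the pixel data into a fresh image; EXIF metadata,
# including GPS location tags, is left behind.
pixels = list(original.getdata())
clean = Image.new(original.mode, original.size)
clean.putdata(pixels)

# The saved copy carries no location metadata.
clean.save("holiday_clean.jpg")
```

Many phones and photo apps offer a similar "remove location" option in their share or export settings for those who do not want to script it.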

To reduce exposure: keep accounts private where possible, limit who can download or reshare images, disable automatic tagging, and remove location metadata before sharing. Watermarks add a further layer of protection.
Elizabeth Rayment, Director, YMM (Your Mind Media)

What to do if an image is misused

Speed and restraint matter once a manipulated image surfaces. “Do not engage with the perpetrator or delete evidence,” Shaikh said. Screenshots should capture the image, URL, date and usernames involved.

Reports should be filed immediately using platform tools dedicated to non-consensual imagery. Victims can also file complaints through Dubai Police’s eCrime portal, Abu Dhabi’s Aman service or the MySafe Society app. Legal penalties under Decree-Law No. 34 of 2021 include imprisonment.

One critical warning: do not reshare the image, even with good intentions. Forwarding such material can itself be a violation under UAE law.

Why women and children face greater risk

Women, children and highly visible creators are disproportionately targeted. “None of this is the victim’s fault,” Shaikh said. Sexualised deepfakes often target women, while children face long-term identity risks that can follow them into adulthood.

Extra safeguards include strict comment filters, consistent watermarking and limited public exposure. Parents should rely on private groups rather than open feeds. UAE legislation such as Wadeema’s Law reinforces heightened protections around minors, making responsible sharing essential.

Online safety rests on three pillars, according to Shaikh: platforms, regulators and individuals. Platforms must detect and label AI-generated content quickly, with takedown mechanisms tailored to local law and culture. Regulators must enforce existing cybercrime legislation and expand public awareness campaigns in both Arabic and English.

Efforts by bodies such as the UAE Cyber Security Council have strengthened response frameworks, but individual awareness remains the first line of defence.

Shaikh encouraged what he calls a zero-trust media mindset. Treat sensational images as potentially synthetic until verified. “The faster we detect, report and remove, the less oxygen these abuses receive.”

With AI tools evolving rapidly and global regulators struggling to keep pace, experts say digital caution has become part of daily life. In the UAE, strong laws and reporting systems are in place. Using them early and wisely can make the difference between harm spreading and harm stopping.

Nivetha Dayanand, Assistant Business Editor
Nivetha Dayanand is Assistant Business Editor at Gulf News, where she spends her days unpacking money, markets, aviation, and the big shifts shaping life in the Gulf. Before returning to Gulf News, she launched Finance Middle East, complete with a podcast and video series. Her reporting has taken her from breaking spot news to long-form features and high-profile interviews. Nivetha has interviewed Prince Khaled bin Alwaleed Al Saud, Indian ministers Hardeep Singh Puri and N. Chandrababu Naidu, IMF’s Jihad Azour, and a long list of CEOs, regulators, and founders who are reshaping the region’s economy. An Erasmus Mundus journalism alum, Nivetha has shared classrooms and newsrooms with journalists from more than 40 countries, which probably explains her weakness for data, context, and a good follow-up question. When she is away from her keyboard (AFK), you are most likely to find her at the gym with an Eminem playlist, bingeing One Piece, or exploring games on her PS5.