Dubai Police has cautioned against the risk of divulging personal details with seemingly trustworthy chatbots and applications. Image Credit: Shutterstock

Dubai: Sharing excessive personal data with Artificial Intelligence applications, particularly chatbots like ChatGPT, can prove very dangerous, a top Dubai Police official has warned.

Speaking to Gulf News in an exclusive interview, Major Abdullah Al Sheihi, Acting Director of the Cyber Crime Department at Dubai Police, said, “AI applications have become very important to a huge number of users, given their usefulness in preparing research, writing articles, responding to emails, and more. However, there is a downside to it.”


He noted that while users may perceive these applications as trustworthy virtual companions, they can actually pose significant risks in the future.

“With the introduction of the voice feature, which has amazed many, the user’s personality in the application becomes an open book for advanced technologies. You find someone telling the AI robot everything about their life, and they might even start explaining their worries and personal circumstances to it. This is wrong because Artificial Intelligence has tremendous analytical capabilities and an infinite memory,” he said.


“The device gathers data about users, learning from the information provided. Consequently, some of this data might infringe on the privacy of others. I do not disclose this information to others because it involves a violation of privacy. It’s crucial to respect the privacy of individuals, especially those in sensitive positions who prefer to keep their personal lives confidential,” Major Al Sheihi said.

According to him, few take the time to verify the sources of this information, which can contain inaccurate data and figures, leading to misguided interpretations by decision-makers.

Major Al Sheihi said: “Be wary of feeding personal information to chatbots like ChatGPT. We haven’t received any reports of misuse thus far, but we acknowledge the possibility that issues may arise in the future since this is a new area. Reporting can take time when problems occur.”

He said, “Artificial Intelligence technologies are a double-edged sword; they can be used for beneficial purposes, such as improving and enhancing quality of life, maintaining security, providing information and assisting in various matters, but they can also be used by cybercrime professionals for hacking, fraud or breaching systems.”

While Dubai Police and other law enforcement agencies utilise big data and Artificial Intelligence to maintain safety and ensure justice, they also monitor the misuse of these technologies and collaborate with partners to mitigate their misuse and hold offenders accountable.

Risks for youngsters

Major Al Sheihi spoke about how cognitive abilities can be dulled by over-reliance on robots. “This can be measured by the increased reliance on robots or artificial intelligence for writing research, rephrasing texts, or even responding on behalf of a person. Youngsters must be especially careful, as it diminishes their cognitive abilities at an early age. As it is, they are more vulnerable to the risks of cybercrime when using these technologies,” he cautioned.

Dubai Police has stressed the optimal use of AI tools, which can significantly enhance research and learning, but with due caution. Image Credit: Supplied

Recent studies have highlighted the dangers associated with students using chatbots or AI applications for research and academic tasks. These include cheating and plagiarism, the potential for incorrect information, and exposure to biased content that students may not recognise.

Additionally, children may become excessively reliant on communication through these tools, neglecting other social activities. Studies also confirm that modern applications can compromise privacy and security, as users often share personal information without understanding the associated risks.

Optimal use of AI

Major Al Sheihi stressed the optimal use of AI tools, which can significantly enhance research and learning, but with due caution.

“They should aid in research with emphasis on verifying facts from multiple sources. Engage with the idea, search for related concepts, and let it inspire your creativity and innovation. Those who depend excessively on applications like ChatGPT may risk losing their critical thinking abilities,” he said.

“If you want to develop a programme that is missing a certain component, you can consult ChatGPT for ideas—don’t get frustrated. By asking and exploring this concept, it can inspire creativity and innovation. However, those who overly depend on it will struggle to think independently. Risk management is essential in addressing various challenges,” he said.

“For instance, if the Internet is disrupted, how do you continue to provide services? Despite the availability of technology, traditional methods remain crucial in risk management, particularly when integrating Artificial Intelligence,” he warned.

Additionally, there are risks of exploitation through the creation of deepfake videos that combine audio and visual elements. This can lead to fraudulent activities where a celebrity or influencer’s likeness is misused to promote products or investments deceitfully, the official said.

Cybercrime reports are logged daily on the e-Crime platform, with over 100 transactions recorded in a single day, including requests for information and assistance, he noted.

“They involve hacking of WhatsApp and Instagram accounts, as well as account recovery support. Additionally, reports encompass requests for help in recovering accounts and information on accounts sharing illegal content on social media. If a crime is identified, the perpetrators will be apprehended and referred to public prosecution,” said Major Al Sheihi.

How AI can be misused: The Monopoly case

The Monopoly case is a classic example of how AI can be misused.

It involved electronic fraud against foreign companies using AI technology. The crime was executed outside the country, while the outcomes, including the transfer of funds and the apprehension of suspects, occurred in the UAE.

Major Al Sheihi explained how the operation included two cases—one in 2020 and another in early 2024—resulting in the arrest of 43 suspects belonging to 12 different nationalities. A record sum of $113 million was recovered.

The criminals employed sophisticated techniques, moving money from one account to another to cover their tracks before withdrawing it via intermediaries and depositing it into specialised money holding and transfer companies.

The operation began when a lawyer of a company in one Asian country logged a criminal complaint through Dubai Police’s anti-cybercrime platform, e-crime.ae, claiming that an international gang had hacked the email of the company’s CEO, accessed correspondences, impersonated him and instructed the accounts manager to transfer around $19 million to an account in a Dubai bank.

It was claimed that the amount was for the benefit of the company’s branch in the emirate.

Dubai Police’s Anti-Cybercrime and Anti-Money Laundering Departments immediately traced the money trail and began monitoring the gang members’ movements, luring them to the UAE without raising their suspicions.

The account to which the money was transferred belonged to a person who opened it in 2018 and had left the country since. The gang was re-routing the funds through several accounts before withdrawing and depositing them in cash vaults of specialised money holding and transport companies.

Dubai Police indicated that while the task force was monitoring the case, the gang hacked into the electronic communications of another company outside the country and seized around $17 million. They then made multiple transfers before depositing the money into cash vaults.

However, Dubai Police managed to track and arrest the suspects. They confirmed that the hackers would identify their victims precisely, studying their electronic activities and primarily targeting corporate executives, business people, and high-net-worth individuals.