UAE: MoHRE urges caution in the use of artificial intelligence tools

AI tools should not be fully relied upon when making strategic decisions


The Ministry of Human Resources and Emiratisation (MoHRE) has emphasized the risks associated with using artificial intelligence (AI) tools in the workplace, noting that although such tools may offer quick answers, the information they provide can sometimes be inaccurate or unreliable. Therefore, they should not be fully relied upon when making strategic decisions or sharing official information.

The Ministry explained that despite the significant benefits AI tools offer in accelerating processes and enhancing productivity, certain risks must be taken into consideration. In the latest issue of Labour Market magazine, it highlighted the importance of data protection, stressing the need to avoid inputting any work-related documents or sensitive personal information into AI tools, as this may lead to the leakage of confidential data.

Launch of the Smart Safety Monitor

In a related development, the Ministry recently launched its new project, the Smart Safety Monitor, an AI-powered tool designed to facilitate the monitoring and implementation of health and safety standards in workplaces. The initiative reflects the Ministry’s commitment to leveraging generative AI technologies to enhance workplace safety, modernize field inspection tools, and promote intelligent oversight.

According to the Ministry, the project aligns with its priorities and strategy to support the well-being and happiness of workers by ensuring a safe and healthy work environment. It also reinforces the UAE’s sustainable and humanitarian approach to the labour market, particularly in the area of occupational health and safety. The Smart Safety Monitor represents a significant step towards creating safer and more sustainable workplace environments.

The Ministry added that the project marks a qualitative shift in how safety requirements are managed across workplaces, utilizing generative AI and advanced computer-vision technologies to improve oversight efficiency and strengthen occupational health and safety standards.

Six key threats in the workplace

Six key threats every employee should know when using AI tools such as ChatGPT in the workplace:

  1. Privacy Violations and Leakage of Sensitive Data: This is one of the most significant threats, as entering confidential information into AI tools may result in unintentionally disclosing sensitive data to external parties or storing it in unsecured environments.

  2. Inaccurate or Misleading Information: AI-generated responses may contain errors, outdated information, or statements that are not supported by evidence. Relying on such information—especially in official communication or strategic decision-making—can lead to serious consequences.

  3. Intellectual Property Risks: Content generated by AI may inadvertently reproduce copyrighted material or create outputs similar to existing intellectual property, exposing organizations to potential legal liabilities.

  4. Bias in AI-Driven Decisions: AI systems may reflect biases present in the data used to train them. This can influence hiring decisions, performance evaluations, or other HR-related processes, resulting in unfair outcomes.

  5. Overdependence on AI: Excessive reliance on AI tools can reduce critical thinking and decision-making skills among employees, and may create operational gaps if AI systems fail or deliver incorrect outputs.

  6. Cybersecurity Threats: Malicious actors may exploit AI platforms through phishing prompts, harmful instructions, or data-injection attacks. Without proper security measures, AI tools can become an entry point for cyberattacks.

What are the safety guidelines for using AI in the workplace?

To ensure the fair use of AI systems:

  • The data used to train and feed AI systems must accurately reflect the reality of the affected groups.

  • Decision-making processes must be examined for potential bias.

  • Fairness must be ensured in any major decisions made based on AI systems.
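The bias checks described above can be made concrete. The sketch below is purely illustrative and assumes a hypothetical hiring dataset: it compares selection rates across groups using the common "four-fifths" rule of thumb, which is an assumption for illustration, not a MoHRE requirement.

```python
# Illustrative sketch only: a simple selection-rate parity check on a
# hypothetical hiring dataset. Group names, data, and the 0.8 threshold
# are assumptions for illustration.

def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 hire outcomes."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def parity_check(decisions, threshold=0.8):
    """Flag whether each group's selection rate reaches `threshold`
    times the highest group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

# Hypothetical outcomes: group_a hired 4 of 6, group_b hired 2 of 6.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1],
    "group_b": [1, 0, 0, 0, 1, 0],
}
print(parity_check(decisions))  # group_b falls below the 0.8 threshold
```

A check like this is only a starting point; a full bias review would also examine the training data and the decision process itself, as the guidelines above note.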

Abdullah Rashid Al Hammadi is an accomplished Emirati journalist with over 45 years of experience in both Arabic and English media. He currently serves as the Abu Dhabi Bureau Chief of Gulf News. Al Hammadi began his career in 1980 with Al Ittihad newspaper, where he rose through the ranks to hold key editorial positions, including Head of International News, Director of the Research Center, and Acting Managing Editor. A founding member of the UAE Journalists Association and a former board member, he is also affiliated with the General Federation of Arab Journalists and the International Federation of Journalists. Al Hammadi studied Information Systems Technology at the University of Virginia and completed journalism training with Reuters in Cairo and London. During his time in Washington, D.C., he reported for Al Ittihad and became a member of the National Press Club. From 2000 to 2008, he wrote the widely read Dababees column, known for its critical take on social issues. Throughout his career, Al Hammadi has conducted high-profile interviews with prominent leaders including UAE President His Highness Sheikh Mohamed bin Zayed Al Nahyan, HH Sheikh Mohammed bin Rashid Al Maktoum, and key Arab figures such as the late Yasser Arafat and former presidents of Yemen and Egypt. He has reported on major historical events such as the Iran-Iraq war, the liberation of Kuwait, the fall of the Berlin Wall, and the establishment of the Palestinian Authority. His work continues to shape and influence journalism in the UAE and the wider Arab world.
