AI tools should not be fully relied upon when making strategic decisions

The Ministry of Human Resources and Emiratisation (MoHRE) has emphasized the risks associated with using artificial intelligence (AI) tools in the workplace, noting that although such tools may offer quick answers, the information they provide can sometimes be inaccurate or unreliable. Therefore, they should not be fully relied upon when making strategic decisions or sharing official information.
The Ministry explained that despite the significant benefits AI tools offer in accelerating processes and enhancing productivity, certain risks must be taken into consideration. In the latest issue of Labour Market magazine, it highlighted the importance of data protection, stressing the need to avoid inputting any work-related documents or sensitive personal information into AI tools, as this may lead to the leakage of confidential data.
In a related development, the Ministry recently launched its new project, the Smart Safety Monitor, an AI-powered tool designed to facilitate the monitoring and implementation of health and safety standards in workplaces. The initiative reflects the Ministry’s commitment to leveraging generative AI technologies to enhance workplace safety, modernize field inspection tools, and promote intelligent oversight.
According to the Ministry, the project aligns with its priorities and strategy to support the well-being and happiness of workers by ensuring a safe and healthy work environment. It also reinforces the UAE’s sustainable and humanitarian approach to the labour market, particularly in the area of occupational health and safety. The Smart Safety Monitor represents a significant step towards creating safer and more sustainable workplace environments.
The Ministry added that the project marks a qualitative shift in how safety requirements are managed across workplaces, utilizing generative AI and advanced computer-vision technologies to improve oversight efficiency and strengthen occupational health and safety standards.
Six key threats every employee should know when using AI tools such as ChatGPT in the workplace:
Privacy Violations and Leakage of Sensitive Data: This is one of the most significant threats, as entering confidential information into AI tools may result in unintentionally disclosing sensitive data to external parties or storing it in unsecured environments.
Inaccurate or Misleading Information: AI-generated responses may contain errors, outdated information, or statements that are not supported by evidence. Relying on such information—especially in official communication or strategic decision-making—can lead to serious consequences.
Intellectual Property Risks: Content generated by AI may inadvertently reproduce copyrighted material or create outputs similar to existing intellectual property, exposing organizations to potential legal liabilities.
Bias in AI-Driven Decisions: AI systems may reflect biases present in the data used to train them. This can influence hiring decisions, performance evaluations, or other HR-related processes, resulting in unfair outcomes.
Overdependence on AI: Excessive reliance on AI tools can reduce critical thinking and decision-making skills among employees, and may create operational gaps if AI systems fail or deliver incorrect outputs.
Cybersecurity Threats: Malicious actors may exploit AI platforms through phishing prompts, harmful instructions, or data-injection attacks. Without proper security measures, AI tools can become an entry point for cyberattacks.
To ensure the fair use of AI systems:
The data used to train and feed AI systems must accurately reflect the reality of the affected groups.
Decision-making processes must be examined for potential bias.
Fairness must be ensured in any major decisions made based on AI systems.
© Al Nisr Publishing LLC 2025. All rights reserved.