Council says spreading misinformation or hate speech through AI is a punishable offence
The Council warned that employing AI to spread misinformation, incite hate speech, defame individuals, undermine reputations, or attack societal values and principles will be treated as a media offence. Such violations fall under the Media Violations Regulations and carry fines and administrative penalties.
The authority urged social media users, media institutions, and content creators to comply fully with existing laws and standards, while upholding the highest levels of professional and ethical responsibility.
The Board of Directors of the UAE Media Council held its third meeting of 2025, chaired by Abdullah bin Mohammed bin Butti Al Hamed, Chairman of the National Media Office and Chairman of the UAE Media Council. The meeting discussed policies, legislation, and regulatory initiatives to strengthen the media sector, particularly the framework for supporting local content.
Al Hamed said the new measures aim to keep pace with rapid global media transformations and reinforce the sector’s role in supporting the national economy. He added that they reflect the leadership’s vision to build a modern, integrated media ecosystem that encourages innovation, enhances global competitiveness, and strengthens the UAE’s regional and international standing.
He noted that the coming phase will see the launch of strategic initiatives and incentives to boost local content, ensuring the UAE’s stronger global position in the media industry.
The Council reviewed progress on the “Mu‘lin” permit system, revealing that more than 1,800 content creators have registered since its launch. The initiative seeks to regulate the digital advertising sector, ensure compliance with content standards, and protect consumers from misinformation.
Discussions also covered a regulatory framework for licensing digital platforms providing news or advertising services on social media. The framework aims to ensure responsible, balanced content that respects social values, safeguards audiences, and supports the growth of digital and news media as a driver of the national economy.
With the rise of deepfake technologies, AI has emerged as a powerful tool for spreading misinformation. Research shows that AI-driven disinformation is increasingly being used in elections and geopolitical conflicts.
For example, the Brookings Institution recently highlighted how deepfakes and AI-generated text are being deployed to mislead voters and manipulate political narratives. Such cases underline the urgent need for regulatory frameworks and counter-technologies.
However, experts remain divided on the true scale of AI’s influence. Some studies warn of its destabilising potential, while others argue its actual impact is limited, given that political outcomes depend on broader factors beyond media manipulation.
Cybersecurity specialists warn that AI can accelerate the spread of fabricated images and videos that are difficult to verify, making misinformation more persuasive. Yet they stress that humans remain the main agents of disinformation, as individuals choose to share misleading or false content, whether deliberately or unknowingly.
Experts agree that disseminating false information, whether intentional or not, poses serious risks: it manipulates facts, distorts reality, and can influence public behaviour and decision-making.