Parents will get email, text or WhatsApp alerts over repeated searches

Dubai: Meta Platforms, the parent company of Facebook and Instagram, has launched new alerts that will notify parents if their teenager repeatedly searches for suicide or self-harm content on Instagram.
Why is this happening, and why now, given that Meta has faced accusations of designing its platforms to be addictive since 2021?
Meta's decision comes as the social media giant faces mounting legal pressure from US lawmakers and dozens of states.
Meta and Google have been accused of 'engineering addiction', a lawyer for a woman suing the two companies told jurors in a Los Angeles court earlier this month.
Instagram head Adam Mosseri testified in the February 2026 California trial, defending the platform against allegations that its design harms youth mental health and causes addiction.
Mosseri stated that while some usage habits are 'problematic,' he does not consider social media to be clinically addictive in the same way as substance abuse.
The alerts will initially roll out in the US, UK, Australia and Canada, with other regions expected later this year. Meta has not revealed specific dates for when the feature will be rolled out in the UAE.
The company is embroiled in multiple high-profile lawsuits in the United States alleging that its platforms harm children through addictive features and unsafe design.
A coalition of 41 states and the District of Columbia has accused Meta of contributing to mental health problems among young users.
For parents in the UAE and around the world, the new feature is designed to flag potential warning signs early.
In the coming weeks, parents who use Instagram’s supervision tools will receive alerts if their teen repeatedly attempts to search for terms related to suicide or self-harm within a short period, the company said in a blog post late Thursday.
The notifications will be sent via email, text message, WhatsApp, and an in-app alert. Parents will also be directed to expert resources to help them approach sensitive conversations.
“We understand how sensitive these issues are, and how distressing it could be for a parent to receive an alert like this,” the statement read.
Meta also said it is building similar parental notifications for certain teen interactions with its AI tools.
“We’re launching these alerts on Instagram search first, but we know teens are increasingly turning to AI for support," it said.
"While our AI is already trained to respond safely to teens and provide resources on these topics as appropriate, we’re now building similar parental alerts for certain AI experiences. These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI. This is important work, and we’ll have more to share in the coming months," it explained.
Meta said most teens do not search for suicide or self-harm content and that such searches are already blocked, with users redirected to support services and helplines. The new alerts are meant to inform parents if repeated attempts suggest a teen may need support.
The first major news reports about Meta’s platforms being addictive and potentially harmful to children and teens broke in 2021.
A whistleblower, Frances Haugen, leaked internal documents (known as the Facebook Papers), which showed that the company’s own research found Instagram could worsen body image and mental health among teenage users — yet this was not made public at the time.
The initial coverage by The Wall Street Journal based on those leaks began in October 2021, signalling to global audiences that Meta was aware of these harms but had not disclosed them.
Subsequent reporting amplified those revelations, leading to widespread media scrutiny and later legal challenges.
Since then, additional internal studies and court filings throughout 2023–2025 have raised further allegations that Instagram and Facebook features were engineered to keep young users engaged at the expense of their well-being, fuelling US state lawsuits and ongoing trials.
Meta’s leadership, including CEO Mark Zuckerberg, has repeatedly rejected the idea that Instagram or Facebook were designed to be addictive to children and teens.
In court, Zuckerberg has said the company did not set out to maximise screen time as an objective for its teams, and that there is no definitive scientific proof that social media causes clinical addiction, according to ABC News.
Lawyers for Meta have suggested that 'personal life circumstances' and other factors can play a significant role in someone’s struggles.
Zuckerberg also testified that his company no longer uses screen-time maximisation as a key metric and has shifted its focus towards “utility to users” rather than pure engagement.
Mental health experts have been campaigning for the safe use of social media for children for years, with countries such as Australia, France, Portugal, and Malaysia banning its use for children under 16. Additionally, proposals to change laws are under discussion in countries such as the UK, Germany, Italy, and New Zealand.
Last month, Spain’s Prime Minister, Pedro Sanchez, announced at the World Government Summit in Dubai that his country plans to ban the use of social media by children under the age of 16.
In early 2026, authorities in the UAE issued urgent warnings about life-threatening social media challenges, such as the "Skull Breaker" and "Fire Challenge," which have led to severe injuries and fatalities among youth in the region.