New safety feature can notify a loved one if ChatGPT detects serious self-harm risk

Dubai: As artificial intelligence becomes more deeply woven into daily life, OpenAI is adding a new kind of safeguard — one that reaches beyond the screen and into a user’s real-world support network.
The company has introduced Trusted Contact, an optional safety feature in ChatGPT that allows adult users to nominate a friend, family member or caregiver who can be alerted if OpenAI detects serious signs of self-harm in conversations with the chatbot. The move marks one of the clearest examples yet of AI systems being designed not just to respond, but to intervene when warning signs appear.
Here’s how it works: users can add one trusted adult contact through their ChatGPT settings. If OpenAI’s automated systems flag a conversation involving potentially serious self-harm concerns, ChatGPT will first encourage the user to reach out directly. A trained human review team may then assess the case, and if a serious risk is confirmed, a brief alert can be sent to the designated contact by email, text or in-app notification — without sharing private chat details or transcripts.
The feature builds on a broader safety push inside OpenAI.
Last year, the company introduced parental controls and distress-detection systems designed to better identify when younger users may be at risk. OpenAI has also said it worked with more than 170 mental health experts to improve how ChatGPT responds in sensitive conversations, focusing on de-escalation, support and directing users toward professional help and crisis resources.
The timing is notable because AI companies are facing growing scrutiny over how conversational systems handle vulnerable users.
Recent reporting by The Verge and The Wall Street Journal has highlighted rising debate around AI responsibility, privacy and intervention — particularly in cases involving emotional distress, self-harm or violent ideation. Across the tech industry, companies including Meta have also expanded AI-based safety alerts for teens and vulnerable users on their platforms.
For OpenAI, Trusted Contact reflects a wider shift in how AI assistants are being designed.
The next generation of digital assistants may not simply answer questions or generate content — they may increasingly be expected to recognise crisis, respond carefully, and help connect users to people who matter most offline.
© Al Nisr Publishing LLC 2026. All rights reserved.