Age prediction to spot under-18 accounts and automatically apply stricter safety limits

OpenAI is rolling out a new safety feature in ChatGPT that aims to spot underage users — not by asking them to tick a box, but by predicting their age based on how they use the service.
According to TechCrunch, the company has introduced an “age prediction” system designed to identify accounts that likely belong to users under 18, and automatically apply stricter safeguards to limit exposure to sensitive content.
The feature works by analysing “behavioural and account-level signals,” TechCrunch reported, including details such as a user’s stated age, how long the account has existed, and patterns like the time of day the account is active.
If the model flags an account as belonging to someone under 18, ChatGPT will automatically shift that user into a more restricted experience, applying extra protections around topics such as sex, violence, and other material considered sensitive for minors.
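To make the mechanics concrete, here is a minimal illustrative sketch of a signal-based classifier of the kind TechCrunch describes. OpenAI has not published how its model actually works, so every signal name, weight, and threshold below is an assumption invented for illustration only, not the company's method.

```python
# Illustrative sketch only: OpenAI has not disclosed its age-prediction model.
# The signals, weights, and threshold below are hypothetical, chosen to show
# the general shape of a system that combines account-level and behavioural
# signals and defaults flagged accounts into a restricted experience.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    stated_age: int | None      # age the user entered at sign-up, if any
    account_age_days: int       # how long the account has existed
    late_night_activity: float  # share of sessions in late-night hours (0.0-1.0)


def likely_under_18(signals: AccountSignals, threshold: float = 0.6) -> bool:
    """Return True if the combined signals suggest an under-18 user."""
    score = 0.0
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 0.7                       # self-reported age is the strongest signal
    if signals.account_age_days < 30:
        score += 0.25                      # very new accounts carry little history
    score += 0.4 * signals.late_night_activity
    return score >= threshold


def apply_safeguards(signals: AccountSignals) -> str:
    # Flagged accounts default to the restricted experience; in the real
    # system, users could later verify their age to restore full access.
    return "restricted" if likely_under_18(signals) else "standard"


# Example: no stated age, a two-week-old account, heavy late-night use.
print(apply_safeguards(AccountSignals(stated_age=None,
                                      account_age_days=12,
                                      late_night_activity=0.9)))
# -> "restricted"
```

The key design point the sketch captures is the one OpenAI itself has described: when the signals are ambiguous, the system errs toward the safer, more restricted setting rather than assuming the user is an adult.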
The move comes as pressure builds globally on tech platforms to strengthen protections for children and teens — especially as AI tools become part of daily life in schools and homes.
In a separate report, Reuters said OpenAI is rolling out age prediction globally as it prepares to introduce an “adult mode” for verified users in early 2026. Reuters added that users incorrectly flagged as under 18 will be able to regain full access by verifying their identity, submitting a selfie to Persona, an identity verification service.
OpenAI has been building toward this approach for months. In an earlier policy update, the company described its long-term plan to tailor ChatGPT experiences depending on whether someone is over or under 18, including defaulting to safer protections when age is unclear.
OpenAI isn’t alone in using signals and automated tools to estimate a user’s age. Across social media, platforms have been ramping up similar systems, especially in markets with stricter online safety rules.
For example, The Guardian reported that TikTok has been strengthening age-verification technology across the EU using profile details, posted content, and behavioural signals to identify younger users.
For ChatGPT, the change reflects a growing shift in consumer AI: safety controls are moving away from static settings and into real-time systems that try to adapt to who is using the tool — and what they might be vulnerable to seeing.