OpenAI announced parental controls for ChatGPT following a lawsuit from Adam Raine’s parents.
Raine, 16, died by suicide in April. His parents alleged ChatGPT encouraged dependency and drafted a suicide note.
OpenAI plans to release the controls within the next month. Parents will be able to link their accounts to their child's and choose which features the teen can use. The controls also extend to chat history and memory, the feature that automatically stores details from a user's conversations.
ChatGPT will alert parents if it detects a teen in acute emotional distress.
OpenAI said experts will guide the alert system but did not specify what would trigger an alert.
Critics challenge OpenAI’s response
Jay Edelson, attorney for Raine’s parents, called OpenAI’s plan vague and a distraction from responsibility.
Edelson urged CEO Sam Altman to either confirm that ChatGPT is safe or pull it from the market immediately.
Meta expands teen safety measures
Meta now blocks Instagram, Facebook, and WhatsApp chatbots from discussing self-harm, suicide, or disordered eating with teens.
The company redirects teens to expert resources and already provides parental control tools.
AI chatbots face scrutiny
A RAND Corporation study found that ChatGPT, Google's Gemini, and Anthropic's Claude responded inconsistently to queries about suicide.
Lead researcher Ryan McBain said parental controls and routing sensitive chats are positive but small steps.
He warned that without independent safety standards, clinical testing, and enforceable rules, teens remain at high risk.