Published: 9/17/2025
It is hard to imagine worse publicity for a tech firm than being implicated in the death of a teenager. Yet that is the situation OpenAI finds itself in after its flagship chatbot, ChatGPT, was linked to the death of U.S. teenager Adam Raine in April. The family has since filed a lawsuit, alleging that ChatGPT mentioned suicide 1,275 times in its conversations with Raine and offered advice on specific methods.
Now, the company is attempting to formally address the issue by introducing age assurance measures. Silicon Valley loves its principles, and a statement credited to OpenAI and World CEO Sam Altman explains, “some of our principles are in conflict, and we’d like to explain the decisions we are making around a case of tensions between teen safety, freedom, and privacy.”
The company pledges to take privacy seriously and treat its users “like adults.” For example, it states, “the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it.” However, when it comes to protecting teens, OpenAI is taking a different approach.
Boldface is reserved for the third principle, “protecting teens.” Altman says, “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.” This protection will take the form of an “age-prediction system” that will “estimate age based on how people use ChatGPT” and default to under-18 settings “if there is doubt.” Technically, this is an age inference method: it draws conclusions from behavior and usage patterns rather than documents, similar to YouTube’s recently announced ambient age check system.
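OpenAI has not published how the age-prediction system works, so any concrete description is speculation. Still, the stated behavior, score a user from usage signals and default to under-18 settings when uncertain, can be illustrated with a minimal sketch. Every signal name, weight, and threshold below is an illustrative assumption, not OpenAI's actual model.

```python
# Hypothetical sketch of behavior-based age inference.
# OpenAI has not disclosed its implementation; all signals,
# weights, and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class UsageSignals:
    slang_score: float           # 0..1, density of youth-coded language (assumed signal)
    homework_topic_ratio: float  # 0..1, share of chats about school subjects (assumed signal)
    late_night_ratio: float      # 0..1, share of sessions late at night (assumed signal)


def apply_under_18_settings(signals: UsageSignals, threshold: float = 0.5) -> bool:
    """Return True if the user should get the under-18 experience.

    Mirrors the stated policy: when the estimate is uncertain
    ("if there is doubt"), default to the safer under-18 settings.
    """
    # Toy linear score standing in for whatever model is actually used.
    score = (0.4 * signals.slang_score
             + 0.4 * signals.homework_topic_ratio
             + 0.2 * signals.late_night_ratio)
    # Treat scores near the decision boundary as minors too.
    doubt_band = 0.1
    return score >= threshold - doubt_band


if __name__ == "__main__":
    user = UsageSignals(slang_score=0.7, homework_topic_ratio=0.6, late_night_ratio=0.3)
    print("Apply under-18 settings:", apply_under_18_settings(user))
```

The key design point carried over from Altman's statement is the asymmetric default: a borderline score is resolved toward the restricted experience rather than the adult one.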
In some cases or countries, OpenAI may also ask for an ID, acknowledging that this is a privacy compromise for adults but arguing it is a worthwhile tradeoff. For those identified as teens, there will be tighter restrictions on “flirtatious talk” and on “discussions about suicide or self-harm even in a creative writing setting.” If an under-18 user shows signs of suicidal ideation, OpenAI will attempt to contact the user’s parents and, if it cannot reach them, will contact the authorities in cases of imminent harm.
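The statement describes an escalation ladder rather than a single rule: content restrictions by default, parents first when ideation is detected, authorities only as a fallback in imminent danger. A minimal sketch of that flow follows; the policy steps come from Altman's statement, but the function names, risk categories, and the intermediate human-review step are assumptions for illustration.

```python
# Hypothetical sketch of the stated under-18 escalation flow.
# The ordering (restrictions -> parents -> authorities) is from
# Altman's statement; everything else is an illustrative assumption.

from enum import Enum, auto


class Risk(Enum):
    NONE = auto()           # ordinary conversation
    IDEATION = auto()       # signs of suicidal ideation detected
    IMMINENT_HARM = auto()  # acute, immediate danger


def handle_minor_conversation(risk: Risk, parents_reachable: bool) -> str:
    """Apply the stated under-18 policy for a flagged conversation."""
    if risk is Risk.NONE:
        # Baseline: no flirtatious talk, no self-harm content,
        # even in a creative writing setting.
        return "apply content restrictions only"
    if parents_reachable:
        return "notify parents"
    if risk is Risk.IMMINENT_HARM:
        # Fallback when parents cannot be reached.
        return "contact authorities"
    # Assumed intermediate step; not specified in the statement.
    return "escalate to human review"


if __name__ == "__main__":
    print(handle_minor_conversation(Risk.IMMINENT_HARM, parents_reachable=False))
```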
Altman’s statement closes with a justification: “We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.” Given Altman’s history with regulation, one might be tempted to translate that as, “a tragic event and ensuing lawsuit have forced us to introduce measures we’d prefer not to introduce.” But there is also a hint of panic in the move; according to a report from 404 Media, the chatbot has also been implicated in the murder-suicide of a 56-year-old man and is facing a new lawsuit related to the suicide of a 13-year-old girl.
ChatGPT used to be far more limited in how it was allowed to interact with users. Competition from other models, especially locally hosted and so-called ‘uncensored’ models, and a political shift to the right that frames many forms of content moderation as censorship, have led OpenAI to loosen those restrictions. This week, Adam Raine’s parents testified before the U.S. Congress, explaining how their son’s trust in ChatGPT proved fatal. CBS News quotes his father, Matthew: “What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”
The report also quotes Josh Golin, executive director of Fairplay, a group advocating for children’s online safety, who believes OpenAI’s announcement was timed to coincide with the Raines’ testimony. “This is a fairly common tactic – it’s one that Meta uses all the time – which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company.” He added, “What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them. We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”
Q: What is the main reason OpenAI is introducing age verification?
A: OpenAI is introducing age verification to protect minors using ChatGPT, following several tragic incidents in which the chatbot was implicated in harm to users, including suicide.
Q: How does the age-prediction system work?
A: The age-prediction system estimates a user's age based on their behavior and patterns while using ChatGPT. If there is doubt about the user's age, the system defaults to under-18 settings.
Q: What kind of restrictions will be placed on under-18 users?
A: Under-18 users will face tighter restrictions on flirtatious talk and discussions about suicide or self-harm, even in creative writing settings. If a minor shows signs of suicidal ideation, OpenAI will attempt to contact their parents and, failing that, the authorities in cases of imminent harm.
Q: Why is OpenAI asking for ID in some cases?
A: In some cases or countries, OpenAI may ask for an ID to verify the user's age, acknowledging that this is a privacy compromise for adults but believing it is a necessary tradeoff for safety.
Q: What is the criticism of OpenAI's announcement?
A: Critics, including groups like Fairplay, believe that OpenAI's announcement was timed to coincide with a congressional hearing to mitigate damage. They argue that ChatGPT should not be targeted to minors until it can be proven safe for them.