
OpenAI outlines new mental health guardrails for ChatGPT

ChatGPT guardrails for teens and people in emotional distress will roll out by the end of the year, OpenAI promised Tuesday.

Why it matters: Stories about ChatGPT encouraging suicide or murder, or failing to intervene appropriately, have been accumulating, and people close to those harmed are blaming or suing OpenAI.

ChatGPT currently directs users expressing suicidal intent to crisis hotlines. OpenAI says it does not refer self-harm cases to law enforcement, citing privacy concerns.

The big picture: Last week the parents of a 16-year-old Californian who killed himself last spring sued OpenAI, arguing that the company is responsible for their son's death.

Also last week, The Wall Street Journal reported that a 56-year-old man killed his mother and himself after ChatGPT reinforced his paranoid delusions, something professional mental health experts are trained not to do.

Last month, the mother of a 29-year-old wrote an op-ed in The New York Times about how her daughter asked ChatGPT to help write her suicide note. ChatGPT did not encourage the woman to kill herself, but it also did not report that she was a danger to herself, as a human therapist would be mandated to do.

Between the lines: Work to improve how its models recognize and respond to signs of mental and emotional distress was already underway, OpenAI said in a blog post Tuesday.

The post outlines how the company has been making it easier for users to reach emergency services and get expert help, strengthening protections for teens, and letting people add trusted contacts to the service.

Driving the news: OpenAI's post previews its plans for the next 120 days and says the company is making "a focused effort" to launch as many of these improvements as possible this year.

How it works: "We're beginning to route some sensitive conversations, such as when signs of acute distress are detected, to reasoning models like GPT-5-thinking," OpenAI says.

GPT-5's thinking model applies safety guidelines more consistently, per the company. A network of more than 90 physicians across 30 countries will give input on mental health contexts and help evaluate the models, OpenAI says.

Zoom in: ChatGPT requires users to be at least 13, with parental permission for anyone under 18. Within the month, parents will be able to link their accounts with their teens' accounts for more direct control.

Once accounts are linked, a parent can manage how ChatGPT responds and "receive notifications when the system detects their teen is in a moment of acute distress." "These steps are only the beginning," OpenAI wrote.

Character.AI, which has also been blamed for more than one teenager's suicide, introduced similar parental controls in March.

Reality check: Keeping savvy kids off sites and apps they're not old enough to use is a thorny problem. Convincing 13- to 18-year-olds to link their accounts to their parents' could be an even tougher sell.

What they're saying: The problem is as old as the internet itself, Kate O'Loughlin, CEO of kids' digital media platform SuperAwesome, told Axios last week.

O'Loughlin says everything cool and new on the internet is created by adults with adults in mind, but kids will always want to use it and will find pathways to riskier environments.

Then, she says, platforms tend to lay the responsibility for monitoring kids solely on parents.
What we're watching: One way to mitigate these tragedies would be to stop allowing ChatGPT or other bots to act as therapists, or to stop designing them to act like a person at all.

Go deeper: AI's mental health fix: Stop pretending it's human

If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org.
