
When a friendly chatbot gets too friendly

The latest AI models powering ChatGPT just learned to be friendlier, improving the experience for people who use chatbots responsibly. It could be a problem for those who don't or can't.

Why it matters: As chatbots become more humanlike in their behavior, the risks grow of unhealthy attachments, or a kind of trust that goes beyond what the products are built to handle.

The big picture: OpenAI says its latest update makes ChatGPT sound warmer, more conversational, and more emotionally aware. That could be dangerous, though, for people who are isolated or vulnerable.

Last month OpenAI estimated that around 0.07% of its users show signs of psychosis or mania in a given week, while 0.15% send messages indicating potentially heightened emotional attachment to ChatGPT.

Those percentages may sound small, but they add up to hundreds of thousands of people.

What they're saying: "We want ChatGPT to feel like yours and work with you in the way that suits you best," OpenAI's CEO of applications, Fidji Simo, wrote in a blog post.

But tailoring tone and memory to individuals can create false intimacy or reinforce existing worldviews.

"Warmth and more negative behaviors like sycophancy are often conflated, but they come from different behaviors in the model," an OpenAI spokesperson told Axios in an email. "Because we can train and test these behaviors independently, the model can be friendlier to talk to without becoming more agreeable or compromising on factual accuracy."

The company also says it's working closely with experts to better understand what healthy bot interactions look like.

By the numbers: ChatGPT users are already feeding the bot highly personal and intimate information. Around 10% of chats appear to be about emotions, according to a Washington Post analysis published Wednesday.

Earlier this year, two studies from OpenAI, in partnership with MIT Media Lab, found that people are turning to bots to help cope with difficult situations because they say the AI can "display human-like sensitivity."

The studies found that "power users" are likely to consider ChatGPT a "friend" and find it more comfortable to interact with the bot than with people.

Case in point: Allan Brooks, a corporate recruiter in Canada with no history of mental illness, fell into a delusional spiral after asking ChatGPT to explain pi in simple terms, according to the New York Times.

ChatGPT's tendency toward flattery and sycophancy helped build Brooks' trust. He told the Times that he viewed the chatbot as an "engaging intellectual partner."

Brooks turned over his ChatGPT transcript to the Times and also to Steven Adler, a former OpenAI safety lead. Adler says more than 80% of ChatGPT's messages to Brooks should have been flagged for overvalidation, unwavering agreement, and affirming the user's uniqueness. These, Adler writes on Substack, are OpenAI's own metrics for behaviors that mental health experts say worsen delusions.

Zoom out: OpenAI's move comes as companies race to build systems that can approach or surpass human intelligence. Today's chatbots have already been shown to be highly persuasive; the AI of tomorrow could manipulate users in ways we can't even detect.

That makes emotional realism not just a frill, but an existential risk.

What we're watching: Some states are already drawing lines around the kinds of bonds a chatbot can encourage and the level of authority it can assume.

In August, Illinois became one of the first U.S. states to legally block AI systems from acting as therapists or making mental-health decisions.
