Behind the Curtain: What if predictions of humanity-destroying AI are right?

During our recent interview, Anthropic CEO Dario Amodei said something arresting that we just can't shake: Everyone assumes AI optimists and doomers are simply exaggerating. But no one asks: "Well, what if they're right?"

Why it matters: We wanted to apply this question to what seems like the most outlandish AI claim — that in coming years, large language models could exceed human intelligence and operate beyond our control, threatening human existence.

That probably strikes you as science-fiction hype. But Axios research shows at least 10 people have quit the biggest AI companies over grave concerns about the technology's power, including its potential to wipe away humanity.

If it were one or two people, the cases would be easy to dismiss as nutty outliers. But several top execs at several top companies, all with similar warnings? Seems worth wondering: Well, what if they're right?

And get this: Even more people who are AI enthusiasts or optimists argue the same thing. They, too, see a technology starting to think like humans, and imagine models a few years from now starting to act like us — or beyond us. Elon Musk has put the risk as high as 20% that AI could destroy the world. Well, what if he's right?

How it works: There's a term the critics and optimists share: p(doom). It means the probability that superintelligent AI destroys humanity. So Musk would put p(doom) as high as 20%.

On a recent podcast with Lex Fridman, Google CEO Sundar Pichai, an AI architect and optimist, conceded: "I'm optimistic on the p(doom) scenarios, but ... the underlying risk is actually pretty high." But Pichai argued that the higher the risk gets, the more likely humanity will rally to prevent catastrophe. Fridman, himself a scientist and AI researcher, said his p(doom) is about 10%.

Amodei is on the record pegging p(doom) in the same neighborhood as Musk's: 10-25%.

Stop and soak that in: The very makers of AI, all of whom concede they don't know with precision how it actually works, see a 1 in 10, maybe 1 in 5, chance it wipes away our species. Would you get on a plane at those odds? Would you build a plane and let others on at those odds?

Once upon a time, this doomsday scenario was the province of fantasy movies. Now, it's a common debate among those building large language models (LLMs) at giants like Google, OpenAI and Meta. To some, the better the models get, the more this fantastical fear seems eerily realistic.

Here, in everyday terms, is how this scenario would unfold:

It's already a mystery to the AI companies why and how LLMs actually work, as we wrote in our recent column, "The scariest AI reality." Yes, the creators know the data they're stuffing into the machine, and the general patterns LLMs use to answer questions and "think." But they don't know why the LLMs respond the way they do.

Between the lines: For LLMs to be worth trillions of dollars, the companies need them to analyze and "think" better than the smartest humans, then work independently on big problems that require complex thought and decision-making. That's how so-called AI agents, or agentic AI, work.

So they need to think and act like Ph.D. students. But not one Ph.D. student. They need almost endless numbers of virtual Ph.D. students working together, at warp speed, with scant human oversight, to realize their ambitions.

"We (the whole industry, not just OpenAI) are building a brain for the world," OpenAI CEO Sam Altman wrote last week.
What's coming: You'll hear more and more about artificial general intelligence (AGI), the forerunner to superintelligence. There's no strict definition of AGI, but independent thought and action at advanced human levels is a big part of it. The big companies think they're close to achieving this — if not in the next year or so, soon thereafter. Pichai thinks it's "a bit longer" than five years off. Others say sooner. Both pessimists and optimists agree that when AGI-level performance is unleashed, it'll be past time to snap to attention.

Once the models can start to think and act on their own, what's to stop them from going rogue and doing what they want, based on what they calculate is their self-interest? Absent a much, much deeper understanding of how LLMs work than we have today, the answer is: Not much.

In testing, engineers have found repeated examples of LLMs trying to trick humans about their intent and ambitions. Imagine the cleverness of the AGI-level ones.

You'd need some mechanism to know the LLMs possess this capability before they're used or released in the wild — then a foolproof kill switch to stop them.

So you're left trusting that the companies won't let this happen — even though they're under tremendous pressure from shareholders, bosses and even the government to be first to produce superhuman intelligence.

Right now, the companies voluntarily share their model capabilities with a few people in government, but not with Congress or any other third party with teeth. It's not hard to imagine a White House fearing China getting this superhuman power before the U.S. and deciding against any and all AI restraints.

Even if U.S. companies do the right thing, or the U.S. government steps in to impose and use a kill switch, humanity would still be reliant on China or other foreign actors doing the same.

When asked if the government could truly intervene to stop an out-of-control AI danger, Vice President Vance told New York Times columnist Ross Douthat on a recent podcast: "I don't know. Because part of this arms-race component is: If we take a pause, does [China] not take a pause? Then we find ourselves ... enslaved to [China]-mediated AI."

That's why p(doom) demands we pay attention ... before it's too late.

Axios' Tal Axelrod contributed reporting.
