
I'm on a hunger strike outside DeepMind's office in London. Here's what I fear most about AI.

Michaël Trazzi is on a hunger strike outside DeepMind's London office to protest AI.
Trazzi fears AI models like GPT-5 and Gemini 2.5 Pro are nearing dangerous capabilities.
He's urging DeepMind CEO Demis Hassabis to halt new model releases.

This as-told-to essay is based on a conversation with Michaël Trazzi, a 29-year-old former AI safety researcher from Saint-Cloud, France. He's on his fourth day of a hunger strike outside DeepMind's offices in London. This conversation has been edited for length and clarity.

Back in 2019, AI systems weren't particularly dangerous. They weren't lying, deceiving, or capable of causing real harm on their own. Even today, I don't believe current models can directly inflict catastrophic damage.

What worries me is what comes next.

My relationship with AI has shifted over the years, from studying and building it to, now, speaking out about its risks.

That's why I'm sitting outside DeepMind's London headquarters on a hunger strike.

I studied computer science and artificial intelligence in Paris because I wanted to work on AI safety. After that, I worked as an AI safety researcher at Oxford's Future of Humanity Institute, which is now closed.

Over time, I transitioned into media: launching a YouTube channel, producing a podcast on AI safety, and more recently creating short films and a documentary on AI policy in the United States. I also make short-form content on TikTok and YouTube Shorts.

Artificial general intelligence, or AGI, is usually defined as AI that can perform any economically valuable task a human can. Models like GPT-5, Claude, Grok, and Gemini 2.5 Pro are already approaching that threshold.

Yet the real danger comes when AI can automate its own research and development. A system capable of building increasingly powerful successors without human oversight could spiral far beyond our control.
Imagine an AI that is vastly better than any human at engineering viruses or other weapons. That's the scenario that terrifies me.

I think it's more likely than not that we'll reach that stage before 2030. The widely discussed "AI 2027" scenario, in which the US and China race to build AGI, shows how quickly things could escalate.

So what am I asking for?

My demand is specific: I want DeepMind CEO Demis Hassabis to commit not to release another frontier model, provided the other leading AI companies agree to the same.

This isn't about one lab unilaterally halting progress. It's about coordination. If leaders at DeepMind, OpenAI, and Anthropic all commit, then real restraint becomes possible.

There's a mismatch between the gravity of what we're talking about and the actions people are taking. This hunger strike is about aligning actions with beliefs.

Read the original article on Business Insider
