Blink and your AI security playbook is out of date

Generative AI is evolving so fast that security leaders are tossing out the playbooks they wrote just a year or two ago.

Why it matters: Defending against AI-driven threats, including autonomous attacks, will require companies to make faster, riskier security bets than they've ever had to before.

The big picture: Boards are now routinely demanding that CEOs have plans to implement AI across their enterprises, even when legal and compliance teams are hesitant about the security and IP risks. Agentic AI promises to bring even more nuanced and potentially frightening security threats: autonomous cyberattacks, "vibe hacking" and data theft are all on the table.

Driving the news: Major AI model makers have unveiled new findings and security frameworks that underscore just how quickly the state of the AI art is advancing. Researchers recently found that one of Anthropic's new models, Claude 4 Opus, can scheme, deceive and potentially blackmail humans when faced with a shutdown. Google DeepMind unveiled a new security framework for protecting models against indirect prompt injection, a threat in which a bad actor manipulates the instructions given to an LLM. That threat takes on new consequences in an agentic world.

Case in point: A bad actor could trick an AI agent into exfiltrating internal documents simply by embedding a hidden instruction in what looks like a normal email or calendar invite. (An illustrative sketch of this pattern appears at the end of this story.)

What they're saying: "Nobody thought the concept of agents and the usage of AI would get rolled out so quickly," Morgan Kyauk, managing director at late-stage venture firm NightDragon, told Axios. Even NightDragon's own framework, rolled out in mid-2023, likely needs to be revised, Kyauk added. "Things have changed around AI so quickly — that's been the surprising part about being an investor in this category," he said.

Zoom in: Kyle Hanslovan, CEO and co-founder of Huntress, told Axios that his company makes decisions about AI, including how to implement it and how to secure against it, only on a six-week cycle. "I think that is probably too long," Hanslovan said in an interview on the sidelines of Web Summit Vancouver. "But if you do more than that, then what happens is whiplash."

By the numbers: Companies now have an average of 66 generative AI tools running in their environments, according to research Palo Alto Networks released Thursday. But the security stakes keep growing: About 14% of data loss incidents so far in 2025 involved employees accidentally sharing sensitive corporate information with a third-party generative AI tool, according to the report.

Reality check: One hallmark of generative AI is how rapidly its reasoning capabilities advance when the technology is turned back on itself. In hindsight, experts say, the need for security to be just as adaptive should have been obvious. "Why did we think, with something that's adapting as quickly as AI, it was even OK to have more than a six-month model?" Hanslovan said.

Yes, but: John "Four" Flynn, vice president of security at Google DeepMind, told Axios that while some parts of AI security are new, such as prompt injection and agent permissioning, many other aspects extend practices defenders already know well. If an agent is running, security teams still need to examine which data sources the agent should be permitted to access and how secure its login protocols are. "All is not lost, we don't have to reinvent every single wheel," Flynn said.
"There are some new things, but there's a lot of things that we can lean on that we've become quite good at over the years." The intrigue: CISOs and their teams are more comfortable with generative AI than they have been with other big technological shifts — and that could give defenders an edge in developing new tools to fend off incoming attacks, Kyauk said. "If you're a cybersecurity professional and you use ChatGPT on a daily basis to find a recipe or to help you plan your travel itinerary.... you begin to see how accurate some of the responses are," Kyauk said."There's more willingness to adopt the tools then." Go deeper: Malware's AI time bomb
