
Foreign disinformation enters its AI era — just as U.S. pulls back resources to fight it

The world of AI-generated, adversarial disinformation is growing rapidly, and it's already indistinguishable from run-of-the-mill social posts.

Why it matters: Foreign disinformation has become a highly disruptive force in the U.S., with Russia and other foreign powers seeking to influence elections and inflame discord. Generative AI is now making it far more effective and harder for average users to detect.

- "We are seeing now an ability to both develop and deliver at an efficiency, at a speed, at a scale that we've never seen before," Gen. Paul Nakasone, former head of the NSA and now director of the Vanderbilt Institute, told reporters at the DEF CON hacker conference earlier this month.
- That acceleration comes at a time when the U.S. government is pulling back on efforts to debunk and raise awareness about influence campaigns.

Driving the news: At least one China-based technology company, GoLaxy, appears to be using generative AI to build influence operations in Taiwan and Hong Kong, according to internal documents leaked to researchers at Vanderbilt University's Institute of National Security.

- The documents detail how GoLaxy appears to be tapping generative AI to mine social media profiles and create content that "feels authentic, adapts in real-time and avoids detection," the researchers wrote in a New York Times opinion piece.
- Using tools including DeepSeek's open-source reasoning model, GoLaxy created synthetic personas that can adapt their messaging to cater to certain audiences, and can also mimic real people.
- GoLaxy, which operates in close alignment with the Chinese government's interests, allegedly used these personas in the lead-up to the 2024 Taiwanese election and to rebut opposition to the 2020 national security law that ended Hong Kong's autonomy.

The intrigue: The documents also show that GoLaxy has created profiles for at least 117 members of Congress and over 2,000 American political figures and thought leaders.
- Axios could not independently verify the documents, which haven't been publicly released. GoLaxy has denied the claims.

Between the lines: Experts have long warned about the potential for large language models like ChatGPT to amplify and enhance malign foreign influence operations.

- Generative AI also helps foreign adversaries overcome language barriers that previously would have tipped people off to their schemes.
- "This is a whole new level of gray zone conflict," Brett Goldstein, a researcher at Vanderbilt and former director of the Defense Digital Service, told reporters at DEF CON. "We need to figure out how to get ahead of it."

Catch up quick: Russia and China have spent decades investing in their influence operations apparatuses.

- Russia has typically had more success with influence operations beyond its own borders. Its tactics include hiring companies to run bot farms and infiltrating legitimate media companies to plant fake stories.
- China has historically struggled to generate engagement for its propaganda arm outside of its own borders, in part because its operations have been less sophisticated.

Threat level: Generative AI is making it exponentially easier for China to create believable, engaging content, Nakasone warned.

The big picture: GoLaxy is likely just the tip of the iceberg. Foreign adversaries have been experimenting with ChatGPT and other chatbots since they became publicly available.

- A pro-Russian propaganda group has been experimenting with AI as it mimics legitimate Western news organizations, including ABC and Politico.
- And China, especially, works with a robust network of third-party contractors for offensive cyberattacks against the United States.
- "Generative AI is definitely bringing down that cost of entry enough that a lot more firms are able to provide these types of services," C. Shawn Eib, head of investigations at disinformation detection firm Alethea, told Axios.
Reality check: These advancements are coming as the Trump administration dismantles several key offices designed to counter foreign disinformation.

- CISA, the FBI and the State Department have each cut their respective offices that worked with the private sector to trade information about foreign-backed influence operations.

What to watch: Technological advancements and personnel investments in the private sector are the only way to combat the rise of believable, difficult-to-detect disinformation, researchers say.

- "This is what the private sector has got to help us with going into the future," Nakasone said. "We need a novel approach where we can still innovate and yet be ahead of the threat."
- Regulatory pressure from European countries could also push social media platforms to build their trust and safety teams back up to detect AI-generated disinformation, Eib said.

Go deeper: Russian disinformation floods AI chatbots, study finds
