
OpenAI's new Sora app and other AI video tools give scams a new edge, experts warn

New AI video apps are providing fertile ground for scammers looking to take their fraud and impersonation schemes to the next level.

Why it matters: AI-generated content is quickly blurring the lines between what's real and what's not, and scammers thrive on blurred realities.

Driving the news: OpenAI rolled out its new Sora iOS app last week, powered by the company's updated, second-generation video-creation model.

The app is unique in that it lets users upload photos of themselves and others to create AI-generated videos using their likenesses, though users need the consent of anyone who will be shown in a video.

OpenAI CEO Sam Altman said in an update Friday that the app will also give people "more granular control over generation of characters," including specifying the scenarios in which their character can be used.

People have been quick to show off fun ways to use the tool, with some posting videos of themselves in TV ads or being arrested.

The flip side: The number of reported impersonation scams has skyrocketed in the U.S. in recent years, and that was before AI tools came into the picture.

In 2024, Americans lost $2.95 billion to imposter scams in which fraudsters pretended to be a known person or organization, according to the Federal Trade Commission.

Between the lines: AI voice scams, which have a lower barrier to entry given how advanced the technology already is, have already taken off.

Earlier this year, scammers impersonated the voices of Secretary of State Marco Rubio, White House chief of staff Susie Wiles, and other senior officials in calls to government workers.

Last week, a mother in Buffalo, New York, said she received a scam call in which someone pretended to be holding her son hostage and used a likeness of his voice as proof he was there.

What they're saying: "This problem is ubiquitous," Matthew Moynahan, CEO of GetReal Security, which helps customers identify deepfakes and forgeries, told Axios. "It's like air, it's going to live everywhere."

Threat level: It's easy to download and share content created with Sora outside of OpenAI's platform, and it's possible to remove the watermark indicating it's AI-generated.

Scammers can use that capability to dupe unsuspecting people into sending money, clicking on malicious links, or making poor investment decisions, Rachel Tobac, CEO of SocialProof Security, told Axios.

"We have to inform everyday people that we now live in a world where AI video and audio is believable," she said.

Zoom in: Tobac laid out a few scenarios in which she could see Sora being abused:

A parent could receive a video as part of an extortion scam impersonating their child.

A threat actor hoping to keep people from voting could create a video of a long line outside a polling center, or fake interviews with poll workers saying the polls are closing early.

A nation-state could even create a fake but believable video of an attack on a major city to sow unrest and panic in the U.S.

The intrigue: Fraudsters were already impersonating company executives, and new AI video tools will only amplify those schemes, Rafe Pilling, director of threat intelligence at Sophos, told Axios.

"Things have improved leaps and bounds," Pilling said. "Ultimately, [these services] will get abused, no doubt."

The other side: Meanwhile, creating realistic deepfakes with Meta AI's new tools has proven difficult for one simple reason: they don't clone people's voices.

In Meta AI's new "Vibes" section, every video was just set to vague music and showed people and animals vibing to the tunes. Each one looked like the AI slop videos that have flooded users' Facebook and Instagram feeds for months.
Yes, but: Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters Monday that people are using ChatGPT three times more often to identify potential scams than adversaries are using it in their scam operations.

What to watch: The world is still only at the beginning of AI development, and experts have warned that video tools will only get better at duping everyone.

"This is the greatest unmanaged enterprise risk I have ever seen," Moynahan said. "This is an existential problem."

Go deeper: Scammers may benefit from ChatGPT's new image tool
