I was lucky enough to spend several days last week at the Aspen Institute’s Crosscurrent Summit on AI and National Security in San Francisco. My first conclusion: I highly recommend being in sunny (at least for the moment) San Francisco instead of wet, muddy New York in early March. The second took a little longer to form.
The conference was packed with former national security officials, cybersecurity executives, and AI leaders, and the conversation mostly went where you'd expect: the fight between Anthropic and the Pentagon, the role of AI in the Iran conflict, the arrival of autonomous weapons. But the panel that caught my attention was about something less dramatic — something almost old-fashioned, now powered by AI: scams.
At one point, Todd Hemmen, deputy assistant director of the Cyber Capabilities Branch of the FBI’s Cyber Division, described how North Korean agents are using AI-generated facial overlays to pass remote job interviews at Western technology companies, and then working multiple remote positions simultaneously, funneling salaries and any intelligence back to the regime in Pyongyang. They build resumes with AI, prepare for interviews with AI, and use AI to put on the “face of someone other than the person behind the camera,” Hemmen told the audience. Some of the most competent actors hold multiple full-time jobs at once, all with fake identities, all enabled by tools that didn’t exist two years ago.
That detail has been bouncing around in my head ever since, mostly because it made me wonder how these industrious operatives manage multiple jobs when I find one exhausting enough. But Hemmen's story captures something deeper about the moment we're in. The AI risks getting the most airtime right now are speculative and cinematic: killer robots, AI panopticons. But the AI threat that is already here is a foreign agent with a synthetic face on a Zoom call, collecting a paycheck from an unsuspecting company. And almost no one treats it with the same urgency.
How cybercrime got worse than ever
Cybercrime has been a problem since the days of dial-up, but the scale of what is happening now is staggering. The FBI reported that the United States suffered $16.6 billion in known cybercrime losses in 2024, an increase of 33 percent in a single year and more than double the figure from three years earlier. Americans over 60 lost nearly $5 billion. And those are just the losses that get reported; Alice Marwick, research director at Data & Society, told the Aspen Institute audience that only one in five victims reports a scam. The real figure is unknown, but it is certainly much worse.
Now generative AI is making all of this faster, cheaper, and more convincing. Phishing emails no longer arrive riddled with typos from supposed Nigerian princes; LLMs can produce fluent, region-specific language. AI image generators can create entire synthetic identities: dozens of photos of a person who doesn't exist, complete with vacation shots and designer handbags.
Voice cloning has enabled heists that five years ago were science fiction: in early 2024, a finance worker in the Hong Kong office of the British engineering firm Arup transferred $25 million after a deepfake video call in which the company's chief financial officer and several colleagues appeared on screen. It turned out they were all fake. CrowdStrike's 2026 Global Threat Report found that AI-enabled attacks increased 89 percent year over year, while the average time from initial breach to lateral movement across a network dropped to just 29 minutes. The fastest breakout observed: 27 seconds.
Will AI cyber offense defeat AI cyber defense?
Why is this problem so neglected? Partly because we have normalized it. Cybercrime has been growing for years, driven by the professionalization of criminal syndicates, cryptocurrencies, remote work, and the industrialization of fraud compounds in Southeast Asia. (My Vox colleague Josh Keating wrote a great story a couple of years ago about so-called pig butchering scams.)
We have absorbed record losses each year as a cost of doing business online. But the curve is steepening: Deloitte projects that AI fraud losses in the United States alone could reach $40 billion by 2027. “Just as legitimate businesses are integrating automation, so is organized crime,” Marwick said.
The fact that so much of this goes unspoken and unreported compounds the cost. Marwick's research focuses on romance scams: people targeted during periods of loneliness or transition, slowly having their savings drained by someone they believe loves them. She told the audience that victims often refuse to believe they are being scammed, even when confronted with direct evidence. AI makes emotional manipulation far more persuasive, and no spam filter will protect someone who voluntarily sends money.
Can defense keep up? Marwick made a hopeful comparison to spam, which nearly broke email in the 1990s before a combination of technical fixes, legislation, and social adaptation largely tamed it. Financial institutions are deploying AI to detect AI-enabled fraud. The FBI froze hundreds of millions of dollars in stolen funds last year.
But the consensus at the conference was largely bleak. “We’re entering this window of time where the offense is much more capable than the defense,” said Rob Joyce, former cybersecurity director at the National Security Agency. Marwick was more direct: “I would say that, in general, I am quite pessimistic.”
Me too. While writing this story, I received an email from a friend with what appeared to be a Paperless Post invitation. The language in the email seemed a little off, but when I clicked on the invitation, it took me to a page that looked a lot like Paperless Post, right down to the logo. Still suspicious, I emailed my friend asking if it was real. "Yes, it's legit," he responded.
That was proof enough for me, but I got distracted and never clicked through to the next step of the invitation. Which was lucky, because a few minutes later my friend emailed me and a few other people to tell us that, yes, he had been hacked.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!

