In the week before President Donald Trump’s war in Iran, the Pentagon was fighting a different battle: a dispute with the artificial intelligence company Anthropic over its flagship AI model, Claude.
That conflict came to a head on Friday, when Trump said the federal government would immediately stop using Anthropic’s AI tools. Yet according to a Wall Street Journal report, the Pentagon made use of those tools when it launched strikes against Iran on Saturday morning.
Were the experts surprised to see Claude on the front line?
“Not at all,” Paul Scharre, executive vice president of the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence, told Vox.
According to Scharre: “We’ve seen, for almost a decade now, the military using narrow AI systems, such as image classifiers, to identify objects in drone and video feeds. What’s newer are large language models like ChatGPT and Anthropic’s Claude, which the military is reportedly using in operations in Iran.”
Scharre spoke with Today, Explained co-host Sean Rameswaram about how AI and the military are increasingly intertwined and what that combination could mean for the future of warfare.
Below is an excerpt from their conversation, edited for length and clarity. There’s a lot more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.
People want to know how Claude or ChatGPT could be waging this war. Do we know?
We don’t know yet. We can make some guesses based on what the technology could do. AI is really excellent at processing large amounts of information, and the US military has attacked more than a thousand targets in Iran.
So they need ways to process information about those targets (satellite images of sites they have hit, for example), look for new potential targets, prioritize them, and use AI to do all of that at machine speed instead of human speed.
Do we know anything more about how the military may have used AI in, say, Venezuela, in the raid that brought Nicolás Maduro to Brooklyn, of all places? Because we recently learned that AI was also used there.
What we do know is that Anthropic’s AI tools have been integrated into the US military’s classified networks, where they can process classified information to analyze intelligence and help plan operations.
We’ve gotten these tantalizing details suggesting those tools were used in the Maduro raid, but we don’t know exactly how.
We’ve also seen AI, broadly speaking, used in other conflicts, in Ukraine and in Israel’s operations in Gaza, to do a range of different things. One of the ways AI is being used in Ukraine is to give autonomy to the drones themselves.
When I was in Ukraine, one of the things I saw Ukrainian drone operators and engineers demonstrate was a small box, about the size of a cigarette pack, that could be placed on a small drone. Once the human locks onto a target, the drone can carry out the attack on its own. And that has been used, to a limited extent.
We’re seeing AI start to work its way into all of these aspects of military operations: intelligence, planning, logistics, but also at the tactical edge, where drones complete attacks on their own.
What about Israel and Gaza?
There have been some reports about how the Israel Defense Forces have used AI in Gaza; not necessarily large language models, but machine learning systems that can synthesize and fuse large amounts of information (geolocation data, cell phone data and connections, social media data) and process it all very quickly to develop targeting packages, particularly in the early phases of Israel’s operations.
But it raises thorny questions about human involvement in these decisions. And one of the criticisms that came out was that humans were still approving these targets, but that the volume of attacks and the amount of information that needed to be processed was such that perhaps human oversight in some cases was more of a rubber stamp.
The question is: where is this going? Are we heading down a trajectory where humans are eventually left out of the loop, and we end up with fully autonomous weapons that make their own decisions about who to kill on the battlefield?
That’s the direction things are going. No one is unleashing swarms of killer robots today, but the trajectory points that way.
We saw reports that a school was bombed in Iran, where [175 people] were murdered, many of them girls and boys. Presumably it was a mistake made by a human.
Do we think that autonomous weapons will make the same kinds of mistakes, or will they be better at war than we are?
This question of whether autonomous weapons will be better than humans is one of the central issues in the debate around this technology. Advocates of autonomous weapons will say that people make mistakes all the time and that machines could do it better.
Part of that depends on how hard the militaries using this technology try to avoid mistakes. If a military doesn’t care about civilian casualties, then AI may simply allow it to attack targets faster and, in some cases, even to commit atrocities faster, if that’s what it is trying to do.
I think there’s really big potential here to use the technology to be more precise. And if you look at the long arc of precision-guided weapons over roughly the last century, we’re moving toward much greater precision.
If we look at the example of the US attacks on Iran right now, it’s worth contrasting them with the widespread aerial bombing campaigns against cities that we saw in World War II, for example, where entire cities in Europe and Asia were devastated because the bombs were so inaccurate that air forces dropped massive amounts of ordnance just to try to hit a single factory.
The possibility here is that AI could improve over time to allow the military to hit military targets and avoid civilian casualties. Now, if the data is wrong and they have the wrong target listed, they are going to hit the wrong target very accurately. And AI isn’t necessarily going to solve that.
On the other hand, I saw a report in New Scientist that was quite alarming. The headline was: “AIs can’t stop recommending nuclear strikes in war game simulations.”
They wrote about a study in which models from OpenAI, Anthropic, and Google chose to use nuclear weapons in simulated war games in 95 percent of cases, which I think is rather more often than we humans typically resort to nuclear weapons. Should that scare us?
It’s a little worrying. Fortunately, as far as I know, no one is connecting large language models to decisions about the use of nuclear weapons. But I think it points to some of the strange failure modes of AI systems.
They tend toward sycophancy. They tend to just agree with everything you say. Sometimes they take it to the point of absurdity, where the model will tell you, “that’s brilliant,” “that’s kind of cool,” and you think, “I don’t believe it.” And that is a real problem when we’re talking about intelligence analysis.
Do we think ChatGPT is saying that to Pete Hegseth right now?
I hope not, but it’s possible his people are telling him that.
It speaks to this broader “yes men” phenomenon with these tools, where it’s not just that they’re prone to hallucinations, which is a fancy way of saying that they sometimes make things up, but also that the models could be used in ways that reinforce existing human biases, reinforce biases in the data, or simply be trusted too readily.
There’s this sense that “the AI said this, so it must be right.” And people swear by it, and we really shouldn’t. We should be more skeptical.

