For as long as AI has existed, humans have had fears about AI and nuclear weapons, and movies are a great example of those fears. Skynet of the Terminator franchise becomes sentient and fires nuclear missiles at the United States. WOPR of WarGames nearly starts a nuclear war through a failure of communication. Kathryn Bigelow’s recent release, A House of Dynamite, asks whether AI is involved in a nuclear missile attack targeting Chicago.
AI is already in our nuclear enterprise, Vox’s Josh Keating tells Today, Explained co-host Noel King. “Computers have been a part of this from the beginning,” he says. “Some of the first digital computers ever developed were used during the construction of the atomic bomb in the Manhattan Project.” But we don’t know exactly where or how AI is involved.
So should we worry? Well, maybe, Keating says. But not because AI might turn against us.
Below is an excerpt from their conversation, edited for length and clarity. There’s a lot more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.
There’s a part of A House of Dynamite where they’re trying to figure out what happened and whether AI was involved. Do these movies have anything to do with these fears?
The interesting thing about movies, when it comes to nuclear war, is that this is a type of war that has never been fought. There are no veterans of the nuclear wars, apart from the two bombs we dropped on Japan, which is a very different scenario. I think movies have always played a huge role in debates about nuclear weapons. You can go back to the ’60s, when the Strategic Air Command actually produced its own response to Dr. Strangelove and Fail Safe. In the ’80s, the television movie The Day After was kind of a galvanizing force for the nuclear freeze movement. President [Ronald] Reagan was apparently very disturbed when he saw it, and it influenced his thinking on arms control with the Soviet Union.
On the specific topic I’m looking at, artificial intelligence and nuclear weapons, there have been a surprising number of movies with that as a plot. And it comes up a lot in political debates about this topic. I’ve had people advocate for integrating AI into the nuclear command system and say, “Look, this isn’t going to be Skynet.” General Anthony Cotton, the current commander of Strategic Command, the branch of the military responsible for nuclear weapons, advocates for greater use of artificial intelligence tools. He referred to the 1983 movie WarGames and said, “We’re going to have more AI, but there won’t be a WOPR in Strategic Command.”
Where I think [the movies] fall a little short is the fear that a superintelligent AI will take over our nuclear weapons and use them to wipe us out. For now, that’s a theoretical concern. The more real concern, I think, is that as AI enters more and more parts of the command and control system, do the humans in charge of nuclear decisions really understand how AIs work? And how will that affect the way they make these decisions, which could be, without exaggeration, some of the most important decisions ever made in human history?
Do humans working on nuclear weapons understand AI?
We don’t know exactly where AI is in the nuclear enterprise. But people would be surprised to learn how low-tech the nuclear command and control system really is. Until 2019, it used floppy disks for its communication systems. I’m not even talking about the little plastic ones that look like the save icon in Windows; I mean the old flexible ones from the ’80s. They want these systems to be safe from outside cyber interference, so they don’t want everything connected to the cloud.
But a multibillion-dollar nuclear modernization process is underway, and a big part of it is upgrading these systems. Several StratCom commanders, including a couple I spoke to, have said they think AI should be part of this. What everyone says is that AI should not be in charge of the decision about whether we launch nuclear weapons. But they think AI can analyze massive amounts of information, and do it much faster than people. And if you’ve seen A House of Dynamite, one thing the movie shows very well is how quickly the president and senior advisers would have to make absolutely extraordinary and difficult decisions.
What are the big arguments against bringing AI and nuclear weapons together?
Even the best AI models available today are still error-prone. Another concern is outside interference with these systems: a hack or cyberattack, or foreign governments devising ways to feed inaccurate information into the model. There have been reports that Russian propaganda networks are actively trying to sow disinformation in the training data used by Western consumer AI chatbots. And another is how people interact with these systems. There’s a phenomenon many researchers point to called automation bias, which is simply that people tend to trust the information computer systems give them.
There are plenty of examples in history of technology bringing us to the brink of nuclear disaster, with humans intervening to prevent escalation. There was a case in 1979 in which Zbigniew Brzezinski, the US national security advisor, was awakened by a phone call in the middle of the night informing him that hundreds of missiles had just been launched from Soviet submarines off the Oregon coast. Just before he was about to call President Jimmy Carter to tell him the United States was under attack, another call came saying [the first] had been a false alarm. A few years later, there was a very famous case in the Soviet Union. A computer system informed Colonel Stanislav Petrov, who was working in its missile detection infrastructure, that a US nuclear launch had occurred. According to protocol, he was supposed to inform his superiors, who could have ordered immediate retaliation. But it turned out the system had misinterpreted sunlight reflecting off clouds as a missile launch. So it’s very good that Petrov decided to wait a few minutes before calling his superiors.
Listening to those examples, what I take away, if I think about it really simplistically, is that human beings pull us back from the brink when technology fails.
That’s true. And there’s some really interesting recent research on AI models in military crisis scenarios, which actually tend to be more aggressive than human decision makers. We don’t know exactly why. If you look at why we haven’t fought a nuclear war (why, 80 years after Hiroshima, no one has dropped another atomic bomb, why there has never been a nuclear exchange on the battlefield), I think part of it is how terrifying it is: humans understand the destructive potential of these weapons and what escalation can lead to, and that certain steps can have unintended consequences. Fear is a big part of it.
From my perspective, we want to make sure fear is built into the system: that the entities capable of being frightened by the destructive potential of nuclear weapons are the ones making the key decisions about whether or not to use them.
Watching A House of Dynamite, one could easily come away thinking that maybe we should take AI out of this completely. It sounds like what you’re saying is: AI is part of the nuclear infrastructure for us and for other nations, and it’s likely to remain so.
One thing an advocate for more automation told me was: “If you don’t think humans can build trustworthy AI, then humans shouldn’t have anything to do with nuclear weapons.” But the thing is, I think that’s a statement people who believe we should completely eliminate all nuclear weapons would also agree with.
I came in worried that AI was going to take over and seize the nuclear weapons, but now I realize I’m quite worried about what we humans are going to do with nuclear weapons. It’s not that AI is going to kill people with nuclear weapons; it’s that AI could make it more likely that people kill each other with nuclear weapons. To some extent, AI is the least of our worries. I think the movie shows well just how absurd the scenario in which we’d have to decide whether to use them really is.

