
The US military was able to “hit a whopping 1,000 targets in the first 24 hours of its attack on Iran” thanks in part to its use of artificial intelligence, according to The Washington Post. The Army has used Claude, Anthropic’s artificial intelligence tool, in combination with Palantir’s Maven system, to select and prioritize targets in real time in support of combat operations in Iran and Venezuela.
While Claude is only a few years old, the US military’s ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and qualified personnel. The United States can use AI in warfare today only because of decades of investment and experience.
In my experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as an intelligence officer in the US Navy, I find that digital systems are only as good as the organizations that use them. Some organizations are able to exploit the potential of advanced technologies, while others can compensate for technological weaknesses.
Myth and reality in military AI
Science fiction accounts of military AI are often misleading. Popular ideas about killer robots and drone swarms tend to exaggerate the autonomy of artificial intelligence systems and underestimate the role of humans. Success or failure in war usually depends not on the machines but on the people who use them.
In the real world, military AI refers to a huge collection of different systems and tasks. The two main categories are automated weapons and decision support systems. Automated weapons systems have some ability to select or engage targets themselves. These weapons are most often the subject of science fiction and the focus of considerable debate.
Decision support systems, on the other hand, are now at the core of most modern militaries. These are software applications that provide intelligence and planning information to human personnel. Many military applications of AI, including in the current and recent wars in the Middle East, are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration, and cybersecurity.
Claude is an example of a decision support system, not a weapon. Claude is integrated into the Maven Smart System, which is widely used by military, intelligence and law enforcement organizations. Maven uses artificial intelligence algorithms to identify potential targets from satellite imagery and other intelligence data, and Claude helps military planners sort through that information and determine objectives and priorities.
The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision support systems. These AI applications provide analytical and planning support, but ultimately humans make the decisions.
The long history of military AI
Weapons with some degree of autonomy have been used in war for more than a century. Naval mines of the 19th century exploded on contact. German V-1 flying bombs of World War II were gyroscopically guided. Guided torpedoes and heat-seeking missiles alter their trajectories to intercept maneuvering targets. Many air defense systems, such as Israel’s Iron Dome and the United States’ Patriot system, have long offered fully automatic modes.
Robotic drones became common in the wars of the 21st century. Unmanned systems now perform a variety of “boring, dirty and dangerous” tasks on land, at sea, in the air and in orbit. Remotely piloted vehicles such as the American MQ-9 Reaper and the Israeli Hermes 900, which can loiter autonomously for many hours, provide platforms for reconnaissance and strikes. Fighters in the Russia-Ukraine war have pioneered the use of first-person view (FPV) drones as kamikaze munitions. Some drones rely on AI to acquire targets because electronic jamming prevents remote control by human operators.
But systems that automate reconnaissance and attack are only the most visible part of the automation revolution. The ability to see farther and strike faster dramatically increases the information processing load on military organizations. This is where decision support systems come in. If automated weapons extend the eyes and arms of a military, decision support systems augment its brain.
Cold War-era command and control systems anticipated modern decision support systems such as Israel’s Tzayad battle management system. Automation research projects such as the United States’ Semi-Automatic Ground Environment (SAGE) in the 1950s produced important innovations in computer memory and interfaces. In the US war in Vietnam, Igloo White funneled sensor data into a centralized computer to coordinate US airstrikes against North Vietnamese supply lines. The US Defense Advanced Research Projects Agency’s Strategic Computing program in the 1980s spurred advances in semiconductors and expert systems. Indeed, defense funding originally enabled the rise of AI.
Organizations enable automated warfare
Automated weapons and decision support systems depend on complementary organizational innovation. From the electronic battlefield of Vietnam to the AirLand Battle doctrine of the late Cold War and later concepts of network-centric warfare, the US military has continually developed new operational ideas and organizational concepts.
Particularly noteworthy is the emergence of a new style of special operations during America’s global war on terrorism. AI-based decision support systems became invaluable for finding terrorist operatives, planning raids to kill or capture them, and analyzing the intelligence gathered in the process. Systems like Maven became essential to this style of counterterrorism.
The impressive American war-making on display in Venezuela and Iran is the fruit of decades of trial and error. The US military has refined complex processes to gather intelligence from many sources, analyze target systems, evaluate options for attacking them, coordinate joint operations, and assess bomb damage. AI can be used throughout the entire targeting cycle only because countless people work to keep it running.
In military targeting, AI raises significant concerns about automation bias: the tendency of people to give excessive weight to automated outputs. But these concerns are not new. Igloo White was often fooled by North Vietnamese decoys. A state-of-the-art American Aegis cruiser, the USS Vincennes, accidentally shot down an Iranian airliner in 1988. Intelligence errors led American stealth bombers to mistakenly attack the Chinese embassy in Belgrade, Serbia, in 1999.
Many Iraqi and Afghan civilians died as a result of analytical errors and cultural biases within the US military. More recently, evidence suggests that a Tomahawk cruise missile hit a girls’ school adjacent to an Iranian naval base, killing about 175 people, mostly students. This attack may have been the result of an American intelligence failure.
Automated prediction needs human judgment
The successes and failures of decision support systems in war owe more to organizational factors than to technology. AI can help organizations improve efficiency, but it can also amplify organizational biases. While it may be tempting to blame Lavender for the heavy civilian death toll in the Gaza Strip, lax Israeli rules of engagement probably matter more than automation bias.
As the name implies, decision support systems support human decision making; AI does not replace people. Human personnel still play important roles in designing, managing, interpreting, validating, evaluating, repairing and protecting these systems and their data flows. The commander remains in command.
In economic terms, AI improves prediction, that is, the generation of new information from existing data. But prediction is only one part of decision making. Ultimately, people make the judgments that matter: what to predict and how to act on the predictions. People have preferences, values and commitments about real-world outcomes; AI systems intrinsically do not.
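To make the distinction concrete, here is a minimal sketch in Python. It is purely illustrative and not drawn from any real military system; the function, probabilities and utility values are all hypothetical. A model supplies a probability (the prediction), while human-assigned utilities encode the values that turn that probability into a decision (the judgment).

```python
# Illustrative sketch only: not any real military or commercial system.
# A model supplies a probability (prediction); human-chosen utilities
# supply the values (judgment) that turn that probability into a decision.

def decide(p_event: float, utilities: dict[str, float]) -> str:
    """Choose the action with the higher expected utility.

    p_event   -- the model's predicted probability that the event occurs
    utilities -- human-assigned payoffs for each action/outcome pair
    """
    eu_act = (p_event * utilities["act_if_event"]
              + (1 - p_event) * utilities["act_if_no_event"])
    eu_wait = (p_event * utilities["wait_if_event"]
               + (1 - p_event) * utilities["wait_if_no_event"])
    return "act" if eu_act > eu_wait else "wait"

prediction = 0.7  # hypothetical model output

# A cautious decision maker penalizes acting on a false alarm heavily ...
cautious = {"act_if_event": 1.0, "act_if_no_event": -10.0,
            "wait_if_event": -1.0, "wait_if_no_event": 0.0}
# ... while an aggressive one penalizes missing a real event instead.
aggressive = {"act_if_event": 1.0, "act_if_no_event": -1.0,
              "wait_if_event": -5.0, "wait_if_no_event": 0.0}

print(decide(prediction, cautious))    # -> wait
print(decide(prediction, aggressive))  # -> act
```

The same prediction produces different decisions under different value assignments, which is precisely the sense in which judgment remains a human responsibility.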
In my opinion, this means that the growing military use of AI is actually making humans more important, not less, in warfare.
Jon R. Lindsay is an associate professor of cybersecurity and privacy and international affairs at the Georgia Institute of Technology.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

