In 1964, science fiction writer Arthur C. Clarke predicted that computers would eventually surpass their creators. “Current electronic brains are complete idiots, but this will not be true in the next generation,” he told the BBC. “They will start to think and eventually completely surpass their creators.”
Daniel Roher opens his new documentary The AI Doc: Or How I Became an Apocal-Optimist (2026) with this cheerful prophecy. And in the film that follows, he tries to make sense of a technology that, by his own admission, he doesn’t understand, one that is rapidly changing the world. Explaining that he pictures AI as a “magic box floating in space,” he enlists experts to give him a crash course in what exactly AI is.
Roher’s real concern, however, is not so much how AI works (although some of his subjects try to explain it to him) but whether it could displace us, as Clarke’s prediction suggests.
While making the film, Roher learns that his wife, Caroline, is pregnant with their first child. He follows her pregnancy and the birth of his son in parallel with the arrival of AI. It’s a smart choice, rooted in a fear all parents share: What kind of world are we creating for our children? And behind that question, another hums in anxious silence: What happens after our descendants replace us? This twin existential angst fuels his efforts to listen to the fatalists, the techno-optimists, and the “apocal-optimists” in between, whose ranks he eventually joins.
As its broad title suggests, The AI Doc wants to shape and lead the narrative around AI. Roher is certainly prepared to do that: he just won an Oscar for his documentary Navalny, and the film was released in nearly 800 theaters, which counts as a wide release for a nonfiction title. The final product is indicative of how much public attitudes around AI are shifting. Roher hopes to reach people of my grandmother’s generation, who conflate AI with smartphones and spell-check, as well as people who don’t seem to care whether a video was generated by AI.
But I think this documentary came too late to direct the conversation, something the film itself acknowledges. For all its transformative potential, AI is not yet unique among emerging technologies—it has neither been catastrophic nor ushered in a golden age of prosperity—but Roher and many of those interviewed tend to treat it as a radical break from everything that has come before. As a result, they tend to fixate on the binary extremes of perdition or salvation. It’s an approach that reinforces our own helplessness in the face of AI-driven change, while muddying our understanding of what we might still be able to do as we seek to adapt, mitigate damage, and shape the world that AI might otherwise truly begin to remake.
Roher, contemplating his son’s future, chooses to hear the bad news first. Tristan Harris, co-founder of the Center for Humane Technology, doesn’t mince words: “I know people who work in AI risks who don’t expect their kids to make it to high school.”
Many of the film’s other interviewees are equally gloomy. Geoffrey Hinton, the “godfather of AI,” for example, claims that as AI becomes smarter, it will get better at manipulating humanity. But no one is more pessimistic than Eliezer Yudkowsky, the well-known artificial intelligence researcher and co-author of the controversial book If Anyone Builds It, Everyone Dies. As the title suggests, Yudkowsky believes that superintelligent AI would wipe out humanity, a position he defends and elaborates for Roher.
Turning his back on these storm clouds and taking the advice of his wife, Caroline, who tells him he needs to find hope for the future, Roher tunes into the chorus of AI optimists. He is told, in various ways, that AI has more potential benefits than drawbacks; that technology has improved the world in every way; that this will be the tool that helps us solve all our biggest problems. Not to mention that AI will bring the best health care on the planet to the poorest people on Earth, extend our lifespans by decades, and let us live in a post-scarcity utopia free of monotony. Oh, and we will become an interplanetary species, all thanks to AI.
Initially, these promises reassure Roher, perhaps because he seems easily swayed by whoever spoke to him most recently. It is Harris who finally convinces him that we cannot separate the promise of AI from the danger it presents. The resulting conclusions will be obvious to anyone who has thought about these issues for more than a moment or two: If AI automates away our work, for example, how will people make a living?
It doesn’t help that many of the most powerful players reflect on these questions superficially, if at all. OpenAI CEO Sam Altman tells Roher that he is concerned about how authoritarian governments will use AI, a statement that is followed in the film by a cut to footage of Altman posing with authoritarian leaders. Other tech CEOs deflect the filmmaker’s questions with PR-friendly jokes, and Roher too often goes easy on them, never digging deeper when they admit that even they aren’t sure everything will work out. That these are the leaders of AI companies racing one another to make the technology ever more advanced does not inspire confidence.
(Some of the techno-pessimists interviewed for the documentary have expressed great dissatisfaction with the final result.)
“Why can’t we just stop?” Roher asks these tech CEOs. He’s told that a moratorium is a pipe dream: many groups around the world are building advanced AI, all with different motivations. Legislation lags far behind the pace of technological progress. Even if the US and EU could pass laws that would stop or slow things down, says Anthropic CEO Dario Amodei, we would have to convince the Chinese government to do the same.
If we don’t create it, the thinking goes, our enemies will. Best to get ahead of them.
This is, of course, the logic of nuclear deterrence: if we don’t mitigate the risk of ending the world through mutually assured destruction, there’s nothing stopping someone else from pushing the button first.
An apocalypse in every generation
The atomic comparison is apt, if only because Roher sees what is at stake in equally stark terms. “Will my son live in a utopia, or will we become extinct in 10 years?” he wonders at one point. It is the film’s central question. But the more likely scenario, that AI will neither drive humanity extinct nor end all disease and drudgery, is never considered. Every generation faces the specter of its own annihilation, and yet the failed ends of days keep piling up, no matter how close the Doomsday Clock creeps toward midnight.
The point, then, is not that AI won’t be bad for us, but that by framing the question in strictly utopian or dystopian terms, we miss the muddled reality that lies between hell on earth and heaven in the stars. Although The AI Doc tries to chart an “apocal-optimistic” course between the two extremes, it misjudges what is actually at stake. AI does not so much create new risks as act as a force multiplier for existing ones, such as the threat of nuclear war and the development and use of biological weapons. The main existential risks of AI are man-made and man-driven. And that means, as Caroline says in the film’s final narration, “We decide how this goes.” She’s right, but her husband never quite seems to grasp how right she is.
Like many big-issue documentaries, Roher’s film is long on problems and short on solutions. It does offer some: calls for international cooperation, transparency, legal liability for companies when things go wrong, testing before release, and rules adaptable enough to match the speed of progress. But just as the film is a strictly introductory AI course, one that will likely irritate anyone who has already moved on to AI 102, these recommendations are only a starting point. For Roher, they offer reasons for hope. For the rest of us, they mark the beginning of an opportunity to meaningfully direct the course of our future.

