There’s big news in the pursuit of artificial general intelligence, or AI with human-level intelligence across the board. OpenAI, which describes its mission as “ensuring AGI benefits all humanity,” yesterday finalized its long-in-the-making corporate restructuring plan. The move could completely change the way we approach AI risks, especially biological ones.
First, a quick refresher: OpenAI was founded as a nonprofit in 2015 but added a for-profit arm four years later. The nonprofit is now called the OpenAI Foundation, and the for-profit subsidiary is now a public benefit corporation called OpenAI Group. (Unlike conventional corporate structures, PBCs are legally required to balance mission and profit.) The foundation will continue to control OpenAI Group and hold a 26 percent stake, valued at around $130 billion at the close of the recapitalization. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
“We believe that the world’s most powerful technology should be developed in a way that reflects the world’s collective interests,” OpenAI wrote in a blog post.
One of the foundation’s first steps (alongside OpenAI’s big new deal with Microsoft) is to allocate $25 billion to accelerate health research and support “practical technical solutions for AI resilience,” which means maximizing the benefits of AI while minimizing its risks.
Maximizing the benefits while minimizing the risks is the essential challenge in the development of advanced AI, and no field embodies that tension better than the life sciences. AI in biology and medicine can strengthen disease detection, improve outbreak response, and speed the discovery of new treatments and vaccines. But many experts think one of the biggest risks of advanced AI is its potential to help create dangerous biological agents, lowering the barrier to entry for launching deadly biological weapons attacks.
And OpenAI is well aware that its tools could be misused to help create biological weapons.
The frontier AI company has put safeguards in place for its ChatGPT agent, but we’re in the early days of what AI’s biological capabilities might make possible. That’s why another recent piece of news (OpenAI’s Startup Fund, along with Lux Capital and Founders Fund, provided $30 million in seed funding for Valthos, a New York-based biodefense startup) may prove almost as important as the company’s complex corporate restructuring.
Valthos aims to build the next-generation “tech stack” for biodefense, and quickly. “As AI advances, life itself becomes programmable,” the company wrote in an introductory blog post after coming out of stealth last Friday. “The world is moving closer to near universal access to powerful dual-use biotechnologies capable of eliminating or creating diseases.”
You may be wondering whether the best course of action is to halt these tools entirely, given their catastrophic, destructive potential. But that’s unrealistic at a time when AI advances (and investments) are barreling ahead at ever-increasing speed. At the end of the day, the essential bet here is whether the AI we develop can defuse the risks caused by… the AI we develop. It’s a question that only grows more important as OpenAI and others move toward AGI.
Can AI protect us from the risks it brings?
Valthos imagines a future in which any biological threat to humanity can be “immediately identified and neutralized, whether the origin is external or within our own bodies. We build artificial intelligence systems to rapidly characterize biological sequences and update medications in real time.”
This could allow us to respond more quickly to outbreaks, potentially stopping epidemics from becoming pandemics. We could repurpose existing therapies and design new drugs in record time, helping people with diseases that are currently difficult to treat effectively.
We’re not even close to AGI in biology (or anything else), but we don’t have to be for AI’s biological capabilities to pose significant risks, such as the intentional creation of new pathogens deadlier than anything in nature, which could be released deliberately or accidentally. Efforts like Valthos’ are a step in the right direction, but AI companies still have to follow through.
“I am very optimistic about the potential upside and the benefits that society can derive from AI’s biological capabilities,” said Jaime Yassif, vice president for global biological policy and programs at the Nuclear Threat Initiative. “But at the same time, it’s essential that we develop and deploy these tools responsibly.”
(Disclosure: I used to work at NTI.)
But Yassif stresses that there is still much work to be done to improve the predictive power of AI tools for biology.
And AI can’t deliver its benefits in isolation, at least not yet: there has to be continued investment in the rest of the system driving change. AI is part of a broader ecosystem of biotech innovation. Researchers still have to do plenty of lab work, run clinical trials, and evaluate the safety and effectiveness of new therapies and vaccines. They also have to get those medical countermeasures to the populations that need them most, which is very difficult and fraught with bureaucratic and funding problems.
Bad actors, on the other hand, can operate in the here and now, and could harm millions of people far faster than AI’s benefits can be realized, particularly if there are no smart ways to intervene. That’s why it’s so important that safeguards against the misuse of beneficial tools can a) be implemented in the first place and b) keep pace with rapid technological advances.
SaferAI, which rates the risk management practices of cutting-edge AI companies, ranks OpenAI’s framework second best, behind only Anthropic’s. But everyone has more work to do. “It’s not just about who’s on top,” Yassif said. “I think everyone should do more.”
As OpenAI and others move closer to smarter-than-human AI, the question of how to maximize the benefits and minimize the risks of AI in biology has never been more important. We need greater investment in AI biodefense and biosecurity across the board as the tools to redesign life itself grow more sophisticated. So I hope that using AI to address its own risks is a gamble that pays off.

