Big changes are coming to OpenAI. On Wednesday, the company announced it was shutting down its AI video creation app, Sora, just a couple of months after it launched. And in October, OpenAI completed a massive restructuring that shook the foundation on which the organization was built.
OpenAI, the company behind ChatGPT among other AI products, was originally founded exclusively as a nonprofit organization. It now has a for-profit branch. According to OpenAI CEO Sam Altman, the nonprofit will continue to guide the for-profit side’s work to ensure that artificial intelligence works for the “benefit of all humanity.” On top of that, the OpenAI Foundation will (theoretically) be in charge of $180 billion, making it one of the largest charities in the world.
Catherine Bracy, founder of the nonprofit Tech Equity, believes this restructuring is a blatant attempt to free up the for-profit wing to act like any other AI company. She says OpenAI’s for-profit wing will act only for the benefit of its investors, and that the OpenAI Foundation is simply a glorified, toothless corporate social responsibility arm. We reached out to OpenAI for comment and did not receive a response.
Bracy spoke with Today, Explained host Sean Rameswaram about the legality of OpenAI’s new structure and her concerns about how this could all pan out. Below is an excerpt from their conversation, edited for length and clarity.
There’s a lot more in the full episode, so give Today, Explained a listen wherever you get your podcasts, including Apple Podcasts, Pandora, and Spotify.
(Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
Did you use to chat with Sam Altman?
We worked together in the past and then lost touch for a few years. Then, when I was writing a book on venture capital, I became really interested in the nonprofit model of OpenAI. Sam had been very explicit that the reason they founded OpenAI as a nonprofit was to put the technology at arm’s length from investors, because they knew that investors would exploit it in a way that would make this technology, which they thought was very dangerous, actually live up to that potential for danger.
So I wanted to talk to him about the decision-making process behind this. And he was very forthcoming about that being the explicit reason OpenAI was founded as a nonprofit. They put a lot of thought, skill, and energy into creating this [nonprofit] governance structure that would protect the technology from the whims of investors and the [profit-generating] imperatives that investors impose on technology companies.
And a few months later, I saw it all fall apart.
And when you found out that OpenAI was restructuring and was going to try to be both a mission-driven nonprofit and a money-driven for-profit, what was your reaction?
Disappointment. I would say that was my initial reaction. And then the secondary response was: Well, what can we do about it? Many of us joined a coalition that began asking questions about nonprofit accountability and the California attorney general’s responsibility to enforce the law governing nonprofits. And things grew from there.
Tell me more about that. What does nonprofit law say as it relates to, for example, OpenAI?
I run a nonprofit organization. Under the tax code, that means my organization does not need to pay taxes, but in exchange for that exemption, we must operate in service of a public-serving mission. Our mission is to ensure that the technology industry creates opportunity for everyone. OpenAI’s nonprofit mission is to ensure that AI is developed for the benefit of all humanity. And legally, Sam Altman must prioritize OpenAI’s mission above all else.
So when they decided they were going to split the nonprofit from the for-profit, they discovered that they legally couldn’t do it without divesting the intellectual property the nonprofit owned, including all of the intellectual property underlying the ChatGPT model, as well as the equity interest the nonprofit held in the for-profit company.
I think they looked at that price tag and said: That is not a price we are willing to pay. And so instead of splitting up the nonprofit and the for-profit, they decided to continue down this path of nonprofit ownership, which in my opinion is completely unsustainable and irreconcilable.
Basically, every day that OpenAI exists, they are breaking the law.
And really what they’re doing is challenging the attorney general to hold them accountable for it. I think they believe they are too big to be held accountable, and that the AG [of California] will assume he won’t win a case. And that’s what they’ve done. They’ve loaded up on lawyers and are betting that the attorney general won’t pursue this in any way that’s really meaningful.
OK. So if I follow you, even though OpenAI has split into a for-profit branch and a nonprofit branch, their nonprofit mission still takes precedence over everything they do. And because of that, they are violating California law, because there is no way the nonprofit’s interests are going to be primary in their business.
Right. I think, as the kids would say, they’re playing in our faces. While they make deals with the Department of Defense to develop autonomous weapons and surveillance systems for American citizens, and while they fight in court against parents whose children died by suicide after conversations with their chatbots, they expect us to take their word that the nonprofit mission is being prioritized over the company’s profit motive.
We all know that OpenAI’s top priority is to “win” the AI race. It’s about beating the competition in the market and building the biggest AI company they can. To the extent that the nonprofit’s mission ever comes into tension with that, the company will always prioritize profits over mission.
A law is only as good as its enforcement. And if there is one rule in Silicon Valley, it is to ask for forgiveness, not permission. I think they said: You know, this is worth it. There is enough money at stake for us to simply break the law and do the public relations work and the lobbying work and whatever else we need to do to ensure these laws are never enforced against us.
And when you talk about public relations work and lobbying work, do you mean their saying that they will eventually donate this $180 billion?
Well, here’s the thing. They announced this week a list of priorities the foundation would invest in, and they listed Alzheimer’s research as one of them. My mother is currently dying of Alzheimer’s. I have one copy of the gene that puts me at extreme risk of developing Alzheimer’s when I am older. So I pray every day that AI will help us find a solution to Alzheimer’s quickly enough that I can benefit from it, that my family can benefit from it.
But let me ask you a question. What happens, do you think, if research funded by the OpenAI Foundation finds that Anthropic models are actually better at drug discovery or scientific breakthroughs than ChatGPT or any of the other OpenAI models? What does it mean for the independence of scientific research that all this research is funded by an entity that has an irreconcilable conflict of interest?
“We don’t have to take these companies’ word for it that they know best how to manage this technology. We should have more imagination about what is possible.”
We would not accept the science on nicotine that the tobacco companies were funding. We do not accept the science on alcohol addiction that alcohol companies fund. We don’t accept the science on sugary drinks from the soft drink industry. And we should not accept this scientific research being funded by an entity that has a vested financial interest in the outcome.
And that’s why it’s so important that the OpenAI Foundation in fact be independent, that it has an independent board of directors, that it can deploy its resources independently, that the research it funds is independent.
Do you think that maybe we’re still better off with OpenAI saying it wants to donate billions to a better society than with, say, Anthropic or Google, which may have made some promises to donate money, but not that much?
Well, Google has a corporate foundation. It’s called Google.org. And I expect that in this structure, with the tension and conflict of interest that the OpenAI Foundation has, it will function much more like Google.org, which is essentially an arm of the marketing department, a corporate social responsibility program that gives money to inoffensive groups but will never do anything to undermine Google’s priorities.
I think if you read between the lines of OpenAI’s press release, the work they say they want to continue doing with community funding has to do with convincing people of the importance, value, and benefit of using AI. I mean, that’s a market-creating opportunity for them. That is not what will ensure AI is developed for the benefit of all humanity. So, no, I don’t think they’re going to operate any differently than the corporate social responsibility arms of other companies. That’s essentially what they’ve built here.
This is the fight of our time. AI is not inevitable. The way it plays out is not inevitable. And we don’t have to take these companies’ word for it that they know best how to manage this technology. We should have more imagination about what is possible. If anything, this should give us more energy and motivation to fix what’s wrong with our democracy, rather than simply sitting back and letting billionaires control our future.
Do you ever talk to Sam Altman anymore?
He doesn’t return my calls.
Well, thanks for talking to us.