I spent last weekend writing about OpenAI's nonprofit board — now at the heart of Sam Altman's ouster
Here's what I discovered about OpenAI's unusual nonprofit structure, which wields outsized decision-making power over the for-profit side of the company
As I started packing yesterday afternoon to travel for a week’s PTO, I (along with the world) experienced the ultimate tech news mic drop: The OpenAI board fired Sam Altman and replaced him with chief technology officer Mira Murati, who will serve as interim CEO while the company searches for a full-time replacement.
A blog post on the OpenAI website said: “[Altman] was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
After hours of the rumor mill grinding away on possibilities ranging from shocking to salacious, and after news emerged that OpenAI president Greg Brockman had also been removed as chair of the board of directors and then quit, it turns out that the truth may be tied to something more mundane. It may all wind back to OpenAI’s unusual structure, a nonprofit in charge of a for-profit entity, and, as a result, the outsized role of its nonprofit board of directors.
In a personal twist, I actually spent last weekend frantically searching for experts on nonprofit board governance to comment on OpenAI’s board, in order to write the following story for VentureBeat, which was published last Monday: OpenAI’s six-member board will decide ‘when we’ve attained AGI’
At the time, I felt a little silly nerding out over minute details of board governance. But last Saturday morning, November 11, I came upon a thread on X written by OpenAI developer advocate Logan Kilpatrick. Kilpatrick was responding to a comment by Microsoft president Brad Smith, who at a recent panel with Meta chief scientist Yann LeCun tried to frame OpenAI as more trustworthy because of its “nonprofit” status — even though the Wall Street Journal recently reported that OpenAI is seeking a new valuation of up to $90 billion in a sale of existing shares.
But I was immediately drawn to one of Kilpatrick’s posts within the thread:
That got me thinking — yikes. Basically, according to OpenAI, the six members of its nonprofit board of directors will determine when the company has “attained AGI” — which it defines as “a highly autonomous system that outperforms humans at most economically valuable work.” Thanks to a for-profit arm that is “legally bound to pursue the Nonprofit’s mission,” once the board decides AGI, or artificial general intelligence, has been reached, such a system will be “excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”
But should all this power really sit in the hands of a half-dozen people? Keep in mind, the very definition of artificial general intelligence is far from agreed upon, so what does it mean to have this group decide whether or not AGI has been reached, for OpenAI and, therefore, the world? And what will the timing and context of that possible future decision mean for its biggest investor, Microsoft?
That’s when I reached out to a couple of lawyers to find out whether this decision-making power was unusual — the answer was yes, but likely perfectly legal, and probably even required.
One person I spoke to was Anthony Casey, a professor of law and economics at the University of Chicago Law School. I asked him: “Is it unusual to have a board of directors make the kind of determination about artificial general intelligence (AGI)? I know OpenAI has a unique nonprofit structure, but what would that mean not to have any further outside decision-makers on such a decision? Would there be any situation where it would not be considered appropriate?”
Casey immediately expressed fascination with the topic. He agreed that having “the board decide something as operationally specific like that is unusual.” But he pointed out that he did not think there was any legal impediment: “Boards are supposed to run corporations. They are allowed to delegate authority to officers. But it should be fine to specifically identify certain issues that must be made at the Board level. Indeed, if an issue is important enough, Corporate law generally imposes a duty on the directors to exercise oversight on that issue. This has been a big issue in Delaware for-profit corporate litigation lately. A series of cases have said that Directors would be failing their fiduciary duties by not exercising oversight on mission critical issues. AGI must be a mission critical issue here. And I don’t think the non-profit nature changes anything.”
But then he added: “To digress a bit, I am skeptical about some of the other claims they make. In particular they say this:
First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
He continued: “I would need to know more about the Operating Agreement of the capped-profit LLC. The default rule would be that OpenAI GP LLC owes a fiduciary duty to all equity holders of the capped-profit company, including Microsoft. In exercising those duties, the default rule would not allow them to ignore Microsoft’s desire for profit simply because they have a mission of benefiting humanity. Indeed, any time the interests of the for-profit clashed with the interests of the non-profit, then OpenAI GP LLC would have a conflict of interest that made it subject to lawsuit from Microsoft.
“Now everything I just said is the ‘default rule.’ Because the capped-profit LLC is an LLC (as opposed to a corporation), it can contractually change the rules and opt out of some fiduciary duties. So the key would be what the LLC Operating Agreement says and what state the LLC is registered in. Perhaps they have drafted the agreements to allow exactly what they say. But I would imagine that Microsoft didn’t just roll over and agree to waive all fiduciary duties.”
Finally, he closed by adding: “One final thought: Whatever they think they are doing, I think if OpenAI is a huge success this structure and relationship with Microsoft will have to lead to some big dispute if OpenAI is sincere about its non-profit mission.”
With all of this in mind, to me it makes sense that OpenAI’s nonprofit board would be at the heart of Altman’s ouster and Brockman’s removal from the board. The Information has reported that there were disagreements brewing about AI safety at OpenAI — and the article said that when board member Ilya Sutskever, OpenAI’s chief scientist, took questions during an all-hands meeting yesterday, “at least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a ‘coup’ or ‘hostile takeover,’ according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.”
The article continues: “‘You can call it this way,’ Sutskever said about the coup allegation. ‘And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.’”
The article I published also examined the makeup of OpenAI’s six-person nonprofit board and its members’ ties to the Effective Altruism movement, which among other things is very focused on AI safety, particularly from the standpoint of making sure that artificial general intelligence (AGI) does not destroy humanity. There were three employees on the nonprofit board: CEO Sam Altman, president Greg Brockman, and chief scientist Sutskever. There were also three non-employees: Adam D’Angelo, Tasha McCauley, and Helen Toner.
D’Angelo, who is CEO of Quora, as well as tech entrepreneur McCauley and Toner, who is director of strategy for the Center for Security and Emerging Technology at Georgetown University, have all been tied to the Effective Altruism movement, which came under fire earlier this year for its ties to Sam Bankman-Fried and FTX, as well as its ‘dangerous’ take on AI safety. And OpenAI has long had its own ties to EA: for example, in March 2017, OpenAI received a grant of $30 million from Open Philanthropy, which is funded by Effective Altruists.
And Jan Leike, who leads OpenAI’s superalignment team, reportedly identifies with the EA movement — and this summer OpenAI established a team, co-led by Sutskever and Leike, to “work on technical solutions to prevent its AI systems from running rogue.” In a blog post, OpenAI said it would dedicate a fifth of its computing resources to solving threats from “superintelligence,” which Sutskever and Leike wrote “could lead to the disempowerment of humanity or even human extinction.”
I want to be clear: before publication, I reached out to OpenAI about the Effective Altruism connections of its board, and a spokesperson responded, saying “None of our board members are effective altruists,” adding that “non-employee board members are not effective altruists; their interactions with the EA community are focused on topics related to AI safety or to offer the perspective of someone not closely involved in the group.”
Still, if we put this all together, and if Ilya Sutskever and the three non-employee board members agreed that Altman was moving too fast and making too many deals on the for-profit side of OpenAI, it would make perfect sense that they would push him out. After all, the mission of the OpenAI nonprofit is “to ensure that artificial general intelligence benefits all humanity,” and the nonprofit board is required to oversee that mission.
Phew…it’s a lot to take in. I was happy to hop on an Emergency Podcast yesterday to chat about some of what was going on! Sadly, I won’t be able to take this story further over the next week, but I’m also excited to take a week off…in France! In addition to a few days in Paris with my husband, I’ll be traveling to Nice to take part in the SophI.A Summit’s International AI Conference. I’m thrilled to be meeting Simone Vannuccini, professor of economics of AI and innovation at the Université Côte d'Azur, who kindly invited me to come. But I’ll definitely keep track of the news while I’m away; there’s too much going on to miss out!
Thank you for giving us a deeper insight regarding these issues. I felt the need to investigate further and you saved me some invaluable time!
Great piece! And what timing, too.
There's some reporting that suggests that Ilya himself had been demoted a month ago, and that this was a power play to retaliate:
https://slate.com/technology/2023/11/sam-altman-fired-openai-mira-murati.html
"About a month ago, “Sutskever’s responsibilities at the company were reduced,” thanks to an oppositional alliance between Altman and Brockman."
The AI safety nonsense might just be cover for standard backstabbing and politics.