The day before OpenAI CEO Sam Altman was fired, I attended a secretive AI security summit
I hung out with doomers, heard sobering predictions, and chatted with government officials. In hindsight, it helped me understand the OpenAI drama.
Thanks to all of you who signed up for my AI Extra over the past few days — just to re-introduce myself, my name is Sharon Goldman 👋, I am a senior writer at VentureBeat reporting on all things AI. This Substack is a place where I post extra takes that tend to be more noodl-ey and casual and personal.
So…😊…
For two days last week, right before OpenAI CEO Sam Altman was fired by the company’s nonprofit board, I attended a secretive, invite-only event in the Utah mountains called the AI Security Summit, where over 100 AI executives and researchers, venture capitalists and government officials gathered to discuss how AI could be used by bad actors, the future of generative AI, and the geopolitics of artificial intelligence.
In hindsight, I realize that the discussions I heard and the conversations I had helped me understand the current OpenAI drama, which began with Altman’s ouster (purportedly driven by the board’s deep focus on AI safety) and led to what has amounted to an OpenAI staff mutiny, in which nearly all employees have signed a letter saying they will quit if Altman and former president Greg Brockman are not reinstated and the board does not resign.
The event was hosted by the San Francisco-based Scale AI, which was founded in 2016 as a data-labeling business that provided training data to top companies. It has since grown into a business valued at $7 billion that helps organizations like OpenAI, Cohere, SAP and Toyota fine-tune their data and unlock LLM performance.
Fun fact: Both of the event’s hosts, Scale AI CEO Alexandr Wang and former GitHub CEO and Scale investor Nat Friedman, were approached by the OpenAI board to take on the role of interim CEO of OpenAI. Both declined.
I’m still not entirely sure why some press (there were about four of us) were invited to the summit. Coverage of the event was not particularly encouraged: everyone, including reporters, had to agree to abide by the Chatham House Rule, under which participants “are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.”
But ultimately, I learned a great deal. During a panel I moderated about mitigating AI security risks, I heard about the new artificial intelligence security center at the National Security Agency (NSA, part of the US Department of Defense), the AI roadmap from CISA (the US Cybersecurity and Infrastructure Security Agency), and how deeply intertwined government and industry partnerships already are when it comes to securing today’s AI models and applications.
After a panel about strategic AI competition between the US and China, I learned that the future of AI seems to keep a lot of people up at night. The audience seemed to walk away with the equivalent of a grimacing face emoji 😬 as they heard how concerned high-level US defense officials seemed to be about the world’s most sophisticated AI models getting into the wrong hands.
But I also learned a lot just from chatting with people over meals and drinks. I was honest about things that have puzzled me all year long — the Effective Altruists vs. the Techno-Optimists, the open source vs. closed source debate, how companies like OpenAI and Anthropic can talk about AI possibly destroying humanity while still selling products that help enterprises write better emails.
And that’s where, in hindsight, I can begin to understand the OpenAI drama. Scale’s AI Security Summit was mostly populated by serious-minded AI insiders who, if not doomers, certainly reminded me of Jack Nicholson’s character, Col. Jessup, in A Few Good Men: You know — “You can’t handle the truth!” and “You need me on that wall!”
And it’s true: I was happy to know there are AI folks out there taking AI security efforts very seriously. I was happy to get to chat with a couple of attendees who are clearly part of the Effective Altruism (EA) movement, which has been in the spotlight since it was revealed that the non-employee OpenAI board members have EA ties. I appreciated having a chance to more fully understand their opinions about AI’s long-term risks, which in some ways made a lot of sense to me. That context helps me understand where, perhaps, the OpenAI nonprofit board was coming from over this past wild weekend. If the board perceived some issue with Sam Altman as antithetical to its mission to develop safe AGI that benefits humanity, maybe that is what sent them spinning.
I also chatted with a couple of tech CISOs (chief information security officers), and I was impressed by how seriously and earnestly they take their work. I learned about how they are working to secure LLM model weights and all the simple yet essential methods they use to make sure that the most important AI does not get into the wrong hands. I also heard about how they are using AI tools to defend their organizations against the always-evolving abilities of adversaries.
But on the other hand, I noticed that the AI Security Summit had very little focus on open source, and little presence from the techno-optimist or commercial AI camps. The idea of open, decentralized development of AI was not on the Summit menu — even though a week later, the closed, proprietary efforts of OpenAI have lost some of their luster, now that we know how quickly a small, powerful board can turn a company upside down and ultimately threaten AI safety. As I said in a story I wrote for VentureBeat yesterday, OpenAI proved that, for now, it remains humans that err and go rogue. The AI is fine, it seems — but the humans have a lot of work to do.
It’s amazing to me that, apparently, everyone at the AI Security Summit, including the most important leaders and even the attendees from OpenAI, was just as surprised by the drama that unfolded the very next day. It showed me that no one in the world of AI knows it all, and the complexities of technology and industry mean that all parties will ultimately — hopefully — have to work together rather than retreat into tribal warfare. I’m grateful to know that there are smart people working on all sides of the debates around the future security of AI, since we now know that one company, and one board, with one outlook, should not hold too much power.
Seems like AI is suffering from a human-alignment problem 👀
Relevant: From Henry Farrell's substack: https://www.programmablemutter.com/p/look-at-scientology-to-understand (on the relation with EA as well)