This former OpenAI researcher thinks it’s time to start gaming out AI’s future
Here's what happened in an AI 2027 simulation, and why Steven Adler thinks we all should try it
I have long struggled with how to cover the ‘doomer’ side of the AI debate — particularly those who genuinely, fervently believe that there is a real chance a powerful AI could go off the rails and kill all of humanity, and that the kind of ‘superintelligence’ that could give AI systems the ability to go rogue is coming far sooner than we might think.
Part of it is the fact that there are so many experts and researchers I respect who do not believe we are that close to building an AI system that vastly exceeds human cognitive abilities across virtually all areas, including creativity, problem-solving, decision-making, and social manipulation. The majority (I think) don’t believe we need to freak out about a potential AI that might not be perfectly aligned with human goals and could take actions that are catastrophically harmful. I am also partly suspicious of the massive funding ecosystem — venture capital, philanthropists, AI safety nonprofits, and even governments — pouring resources into both existential risk research and companies that position themselves as uniquely capable of ‘safely’ developing powerful AI.
Still, the more I use AI tools and speak to the researchers developing them, the more I’m glad that there are people on that “wall,” so to speak — like a geeky version of Jack Nicholson playing Col. Nathan Jessep in A Few Good Men — tackling potential existential AI risks, whatever they might be. Just as I’m happy that there are policy experts working on AI policy; privacy experts working on AI privacy; bias experts working on AI bias; and security experts working on AI security.
I must say, it’s hard not to feel compassion for researchers working hard on what they see as the long-term risks of AI systems — everything from loss of human control over AI to the possibility that even if AI systems themselves aren’t misaligned, humans might deliberately misuse them to cause global-scale harm (to build a bioweapon, say). Those I have met or spoken with have sometimes seemed extremely anxious and stressed about it, to the point that I imagine it is not great for their health.
But Steven Adler, a former safety lead at OpenAI (he left in 2024) who now works as an independent researcher and publishes his own Substack, seems to be managing his anxiety well enough. We had chatted a couple of times over the past few months and I have enjoyed his posts. I was particularly taken with his latest, in which he shares his recent experience participating in a five-hour discussion-based simulation, or “tabletop exercise,” with 11 others, which he said was similar to the wargame-style exercises used in the military and cybersecurity. Together, the group explored how world events might unfold if “superintelligence,” or AI systems that surpass human intelligence, emerges in the next few years. I decided to speak with Adler about his experience for Fortune’s Eye on AI newsletter this week (it publishes every Tuesday and Thursday and is delivered free to your inbox!).
The simulation was organized by the AI Futures Project, a nonprofit AI forecasting group led by Daniel Kokotajlo, Adler’s former OpenAI teammate and friend. Fun fact: I met Kokotajlo at an event in Washington, DC last year where I was specifically hoping to speak to folks focused on AI and existential risk. I’m always eager to learn about our impending doom! 😬 But seriously, Kokotajlo was kind enough to speak with me and I have kept in touch since then.
His organization drew attention in April for “AI 2027,” a forecast-based scenario mapping out how superhuman AI could emerge by 2027—and what that might mean. According to the scenario, by then AI systems could be using 1,000 times more compute than GPT‑4 and rapidly accelerating their own development by training other AIs. But this self-improvement could easily outpace our ability to keep them aligned with human values, raising the risk that seemingly helpful AIs might ultimately pursue their own goals.
Each participant had their own character, whom they tried to represent realistically in conversations, negotiations, and strategizing, he explained. Those characters included members of the US federal government (each branch, as well as the President and their Chief of Staff), the Chinese government/AI companies, the Taiwanese government, NATO, the leading Western AI company, the trailing Western AI companies, the corporate AI safety teams, the broader AI safety ecosystem (e.g., METR, Apollo Research), the public/press, and the AI systems themselves.
Adler was tapped to play what he called “maybe the most interesting role”—a rogue artificial intelligence. During each 30-minute round of the five-hour simulation, which represented the passage of a few months in the forecast, Adler’s AI got progressively more capable—including at training even more powerful AI systems.
You can read more in my Fortune piece — but suffice it to say, Adler encouraged others to express interest in running the simulation for their own organizations (there is a form to fill out), while admitting that forecasts and predictions are hard. “I understand why people would feel skeptical, it’s always hard to know what will actually happen in the future,” he said. “At the same time, from my point of view, this is the clear state of the art in people who’ve sat down and for months done tons of underlying research and interviews with experts and just all sorts of testing and modeling to try to figure out what worlds are realistic.”
Ultimately, I’ve come to the conclusion that I’m glad Adler — and others — are on this particular wall. But I’m even happier that in his Substack post, Adler emphasized that he and his fellow researchers don’t think the AI 2027 scenario will definitely happen. They want to be prepared, and they hope human societies rise to the challenge. I say, bring on the game.
I recall reading about AI safety as one of the most important and underfunded problems, as per an 80,000 Hours career breakdown. I understood the urgency of the situation, but this simulation, AI 2027, already presents quite stark scenarios. I hadn’t considered that we’d face not just AI-human alignment challenges, but also risks from AIs competing with, or potentially colluding with, each other. Such a significant risk to work on (yet it remains underfunded!)
The doom debate is fundamentally flawed and often distracts from the actual dynamics unfolding now. Roughly 74,000 years ago, the Toba supervolcano likely reduced humanity to just a few thousand individuals. That wasn’t technically “doom”—by definition, the species survived—but for countless humans alive at the time, it was catastrophic. Yet conversations about doom often treat survival as the sole measure of success, overlooking the immense suffering and destruction that can occur long before extinction.
A more grounded focus is necessary: the hollowing out of the workforce through AI. Entire roles across customer service, analytics, HR, media, and logistics are already being replaced by high-performing AI orchestrators. This is happening in practice, not in theory, and the ripple effects are real.
Governments rely on tax revenue from employed populations. Bots don’t pay taxes, buy homes, support communities, or volunteer at schools—so when jobs vanish without redistribution mechanisms, entire towns, sectors, and countries approach critical thresholds. No supernatural event is required—just the combination of unregulated capitalism and unchecked technological deployment.
This isn’t solely about alignment. It’s about competence—risk-aware policies, deployment strategies, and governance systems that identify societal trade-offs before damage occurs. Many developers, investors, and executives operate within professional silos. Their priorities often center on profit, speed, and breakthroughs—not public trust, community cohesion, or systemic stability.
That doesn’t imply global annihilation. It implies misery on a massive scale. Collapse doesn’t need to be total to be devastating. A situation can be profoundly horrific without triggering human extinction.
So the question shifts:
Not “Will AI doom us?”
But: “How much damage—and to how many people—is considered acceptable under policy inertia?”
At some point, the reasonable response might look like an overreaction. And if that tipping point has already been crossed, the failure won’t be extinction—it will be a society broken by neglect.