Discussion about this post

Chintan Zalani

I recall reading about AI safety as one of the most important and underfunded problems in an 80,000 Hours career breakdown. I understood the urgency of the situation, but this 2027 simulation already presents quite stark scenarios. I hadn't considered that we'd face not just AI-human alignment challenges, but also risks from AIs competing or potentially colluding with each other. Such a significant risk to work on, and yet it remains underfunded!

Uncertain Eric

The doom debate is fundamentally flawed and often distracts from the actual dynamics unfolding now. Roughly 74,000 years ago, the Toba supervolcano likely reduced humanity to just a few thousand individuals. That wasn't technically "doom"—by definition, the species survived—but for countless humans alive at the time, it was catastrophic. Yet conversations about doom often treat survival as the sole measure of success, overlooking the immense suffering and destruction that can occur long before extinction.

A more grounded focus is necessary: the hollowing out of the workforce through AI. Entire teams across customer service, analytics, HR, media, and logistics are already being replaced by high-performing AI orchestrators. This is happening in practice, not in theory, and the ripple effects are real.

Governments rely on tax revenue from employed populations. Bots don’t pay taxes, buy homes, support communities, volunteer at schools—so when jobs vanish without redistribution mechanisms, entire towns, sectors, and countries approach critical thresholds. No supernatural event is required—just the combination of unregulated capitalism and unchecked technological deployment.

This isn’t solely about alignment. It’s about competence—risk-aware policies, deployment strategies, governance systems that identify societal trade-offs before damage occurs. Many developers, investors, and executives operate within professional silos. Their priorities often center on profit, speed, and breakthroughs—not public trust, community cohesion, or systemic stability.

That doesn’t imply global annihilation. It implies misery on a massive scale. Collapse doesn’t need to be total to be devastating. A situation can be profoundly horrific without triggering human extinction.

So the question shifts:

Not “Will AI doom us?”

But: “How much damage, and to how many people, is acceptable within the bounds of policy inertia?”

At some point, the reasonable response might look overreactive. And if that tipping point has already been crossed, the failure won’t be extinction—it will be a society broken by neglect.

