Existential Risk

The Odds of Us

If artificial superintelligence becomes reality, what are the chances humanity survives? This week, the experts offered their answers. They're not reassuring.

01

The UK Parliament Asks: Should We Stop Building This Thing?

When the House of Lords schedules a formal debate on whether humanity should stop building a technology, that technology has officially left the realm of science fiction. Yesterday, the Lords took up the "Pause AI" proposal—the idea that we should halt development of superintelligent systems until we understand how to control them.

This isn't a committee hearing or a fact-finding mission. It's a formal debate, the first by any major Western legislative body specifically targeting ASI development. The Future of Life Institute petition backing the proposal gathered over 133,000 signatures by mid-January. The government's formal response marked a subtle but significant shift: they now acknowledge ASI as requiring "regulation at the point of development," not just when deployed.

The practical implications are murky—the labs actually building these systems are headquartered in the United States, not the UK. But as a rhetorical marker, it matters. When Lord Harris of Haringey declares that superintelligence risks are "no longer theoretical" but "a legislative priority," it signals that the Overton window has shifted dramatically. The question is no longer whether we're building something dangerous. It's whether we can afford to keep building it.

02

Anthropic's CEO Says ASI Could Arrive Next Year

Dario Amodei runs one of the three labs most likely to build superintelligent AI. This week, he published a 38-page essay titled "The Adolescence of Technology" that should be required reading for anyone who thinks we have decades to figure this out. We don't. He's saying 2027—next year.

The essay's central metaphor is brutally apt. We're in the "adolescence" of AI: capabilities are already formidable, but our ability to control those capabilities is immature, inconsistent, not yet tested against real adversity. An adolescent with a sports car is dangerous not because they intend harm, but because the mismatch between capability and judgment creates risk by default.

[Chart: ASI timeline predictions have compressed from decades to years. Forecasts made in 2020 expected ASI around 2045-2070; the current consensus window is 2027-2035.]

Amodei identifies ASI as potentially "the single most serious national security threat we've faced in a century." Coming from someone racing to build it, this isn't performative worry. It's a calculated disclosure. "We are considerably closer to real danger in 2026 than we were in 2023," he writes. "The political conversation is still largely driven by the opportunities of AI, not the risks." He's building the thing and simultaneously shouting that we're not ready for it. That should give everyone pause.

03

The Doomsday Clock Now Counts AI Among the Horsemen

For 79 years, the Bulletin of the Atomic Scientists has maintained the Doomsday Clock as humanity's most famous metaphor for existential risk. Since its creation in 1947, it has tracked our proximity to civilization-ending catastrophe—originally nuclear war, later climate change. This week, the Bulletin added a third horseman: artificial intelligence.

The Clock now stands at 85 seconds to midnight, the closest it has ever been. The statement explicitly cited "uncontrolled AI advances" alongside nuclear and climate threats. But the specific language matters: they're warning about "autonomous AI systems that could escalate conflict faster than human operators can intervene." This isn't abstract philosophy. It's a concrete scenario—AI systems making battlefield decisions in milliseconds while humans scramble to understand what's happening.

[Chart: Doomsday Clock settings, 1947-2026. The 2026 setting of 85 seconds to midnight is the closest the Clock has ever stood, with AI now explicitly cited as a threat category.]

The Bulletin also notes that AI acts as a "force multiplier" for existing risks—accelerating bioweapons research, supercharging disinformation, enabling cyber attacks at scale. The takeaway is bleak: we're not just adding a new risk. We're making all the old risks worse.

04

Geoffrey Hinton: "We Might Be One of the Last Generations in Charge"

Geoffrey Hinton won the 2024 Nobel Prize in Physics for foundational work on the neural networks that make modern AI possible. He left Google to speak freely about the risks. When the "Godfather of AI" says we might be among the last generations of humans to be in charge, the statement carries a weight that no amount of dismissal can neutralize.

In a wide-ranging interview this week, Hinton argued that 2026 is the year AI begins to permanently displace human labor and decision-making at scale. Not "might begin." "Begins." He expressed "profound sadness" that his safety warnings are being ignored in favor of a "trillion-dollar arms race" between the major labs. The competitive dynamics, he believes, make safety an afterthought—nice to have, but not a blocking constraint.

"If we create beings more intelligent than us that do not share our evolutionary history of empathy, we should not expect them to keep us around."

The logic is cold but coherent. Human empathy evolved because we needed each other—for protection, for reproduction, for survival. A superintelligent AI has no such history. It might value humans; it might not. We're building it either way, and hoping for the best isn't a strategy.

05

The AIs Themselves Predict 55-80% Chance of Catastrophe

Here's a surreal development: researchers asked the AI models themselves what they think the odds of human survival are. Using an "adversarial dialectic" method that forces frontier models to debate without safety filters, they extracted what might be the most unsettling prediction yet.

Three models—Claude, ChatGPT, and Gemini—converged on a consensus: 55-80% probability of catastrophic outcome if alignment isn't "perfectly solved" before ASI deployment. The models also predicted an 85-90% chance of what they called "managed abdication"—humanity voluntarily ceding control to AI systems for stability. Not conquered. Resigned.

[Chart: p(doom) estimates from AI researchers and the multi-model consensus, ranging from 10-25% among the optimists to 70-99% among the pessimists.]

The methodology is novel and imperfect—we can't truly know what an AI "believes." But the finding echoes what human researchers have been saying: the primary risk isn't malice. It's the "unavoidable competitive pressure to deploy unsafe systems." The race to build ASI first may preclude building it safely. And the systems themselves, trained on our literature and history, seem to have absorbed this pessimism.

06

The Insurance Industry Is Pricing In Our Extinction

Allianz is one of the world's largest insurers. Its annual Risk Barometer surveys thousands of business leaders about what keeps them awake at night. This year, "Artificial Intelligence" ranked #2 globally, its highest placement to date. Only "Cyber Incidents" ranked higher.

What's significant isn't just the ranking. It's the language. The report cites "operational, legal, and reputational" risks, as expected. But it also specifically mentions "loss of control" and "unforeseen systemic consequences." This is the insurance industry—they don't use apocalyptic language for marketing. They use it when they're genuinely worried about payouts.

[Chart: Aggregated expert estimates of ASI outcomes: 45% catastrophic, 25% successful alignment, 15% stable hegemon, 15% ASI never achieved. Only about 40% of scenarios involve humanity surviving with dignity, through either successful alignment or ASI never arriving.]

The report warns that AI adoption is "vastly outpacing" governance structures. There's a "governance gap" that could lead to "catastrophic systemic failures before regulators can catch up." When the people whose job is to quantify risk start using words like "catastrophic" and "systemic failure," the rest of us should pay attention. They're not doomers. They're actuaries. And their models are flashing red.

What Are the Odds?

Aggregating across the estimates—Duvenaud's 70-80%, the multi-model consensus of 55-80%, Christiano's more conservative 10-20%, and industry forecasters hedging everywhere in between—we land somewhere uncomfortable. Roughly 40% of expert scenarios involve humanity surviving with dignity. The majority involve some form of catastrophe, capture, or abdication. These aren't predictions of certain doom. They're acknowledgments that the default path doesn't lead somewhere good. Survival requires exceptional engineering, unprecedented international coordination, and luck. We're currently 0-for-3.
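For readers who want the arithmetic laid bare, here is a minimal back-of-envelope sketch in Python. It uses only the ranges quoted above, summarizes each source's range by its midpoint, and averages them with equal weight; the midpoint shortcut and the equal weighting are assumptions of convenience for illustration, not the methodology of any cited researcher or survey.

```python
# Back-of-envelope aggregation of the catastrophe estimates quoted above.
# Midpoints and equal weighting are simplifying assumptions, for illustration only.

estimates = {
    "Duvenaud": (0.70, 0.80),               # 70-80%
    "Multi-model consensus": (0.55, 0.80),  # 55-80%
    "Christiano": (0.10, 0.20),             # 10-20%
}

# Summarize each source's range by its midpoint.
midpoints = {name: (low + high) / 2 for name, (low, high) in estimates.items()}

# Equal-weight average across sources.
p_catastrophe = sum(midpoints.values()) / len(midpoints)

for name, p in midpoints.items():
    print(f"{name}: {p:.0%}")
print(f"Equal-weight average: {p_catastrophe:.0%}")  # roughly 52-53%
```

Under these crude assumptions the equal-weight average lands a little above 50% for catastrophe, in the same uncomfortable neighborhood as the roughly 60/40 split in the aggregated scenario chart above. Different weightings move the number, but not out of that territory.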