AGI Timelines

The Five-Year Mirage

AGI has been five years away for a decade. This week, the world's sharpest minds and deepest pockets are placing their bets—and they still can't agree on the finish line, let alone when we'll cross it.

01

The Crowd Says 2031. The Crowd Has Been Wrong Before.

Here's the thing about crowdsourced intelligence: it's excellent at integrating public information and terrible at pricing genuine surprises. Metaculus—the prediction platform beloved by rationalists and AI researchers alike—updated its flagship AGI question this week, and the community median sits at 2031. That's a gentle pull-in from the 2032–33 range that held through most of last year, almost certainly a reaction to the barrage of capable model releases hitting in February.

The more interesting signal is in the tails. The 25th percentile has crept down to 2027—meaning one in four forecasters expects AGI within roughly two years. The "weakly general AI" question (a lower bar that requires broad competence without human-level performance at everything) hovers around that same 2027 mark at the median. Meanwhile, the right tail still stretches comfortably past 2035 for the skeptics who think the current paradigm needs a fundamental rethink.
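For readers unfamiliar with percentile forecasts, here's a minimal sketch of how a community median and the tails summarize a distribution of individual predictions. The forecast years below are invented for illustration; they are not actual Metaculus data:

```python
# Illustrative only: how percentile summaries of a forecast
# distribution work. The forecast years are invented for the
# example, NOT actual Metaculus data.
import statistics

# Hypothetical individual forecasts for the year AGI arrives
forecasts = [2026, 2027, 2027, 2028, 2029, 2030, 2031,
             2031, 2032, 2034, 2036, 2040, 2045]

# The community median is the 50th percentile of these forecasts
median = statistics.median(forecasts)

# quantiles(n=4) returns the quartile cut points: the 25th,
# 50th, and 75th percentiles of the distribution
q25, q50, q75 = statistics.quantiles(forecasts, n=4)

print(f"median: {median}")        # the headline number
print(f"25th percentile: {q25}")  # the optimistic tail
print(f"75th percentile: {q75}")  # the skeptical tail
```

The median is the headline number, but the spread between the 25th and 75th percentiles is what tells you how divided the community actually is.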

The Great Pull-In: the Metaculus community median AGI forecast has shifted from 2060 in 2020 to 2031 in February 2026, a 29-year pull-in with major inflection points at ChatGPT's launch (late 2022) and the GPT-4 scaling boom (2023).

What should actually make you sit up: since 2020, the median has moved 29 years closer. That's not a gradual refinement—it's the forecast equivalent of the ground accelerating beneath your feet. Whether that means we're actually approaching AGI or just collectively hallucinating progress is, of course, the trillion-dollar question.

02

DeepSeek Drops v4 on Lunar New Year—and the Moat Gets Thinner

DeepSeek has a flair for the dramatic. Releasing v4 on Lunar New Year wasn't accidental—it was a statement. The Chinese lab's latest open-weights model lands with massive improvements in mathematical reasoning and coding, areas where only months ago the proprietary leaders held clear advantages.

The pace alone tells a story: v3 shipped in December, v4 in February. That's not a product cycle; it's a sprint. And it's happening with open weights, meaning any research lab, startup, or determined hobbyist on the planet can fine-tune and deploy it. The implications for the AGI timeline debate are straightforward: if open-source can track the frontier with a two-month lag, the "intelligence" part of AGI isn't the bottleneck. Reliability, agency, and world-modeling might be—but raw capability? That's commoditizing in real time.

This also scrambles the economics of the AGI race. OpenAI's newly released GPT-5.3-Codex—a specialized agentic coding model that resolves GitHub issues 45% more reliably than its predecessor—debuted the same week. And Anthropic's Claude Opus 4.6 set fresh SWE-bench and MATH highs. Three labs, three frontier models, one fortnight. The gap between "state of the art" and "what you can run locally" has never been narrower.

03

Yann LeCun: “Predicting the Next Word Is a Dead End”

If the AI optimists were writing a movie, Yann LeCun would be the grizzled scientist in Act Two who delivers the speech nobody wants to hear. Speaking at the India AI Impact Summit in Delhi, Meta's Chief AI Scientist doubled down with the confidence of a man who's been right before and intends to be right again: large language models are “statistical machines” that lack world models and common sense. Scaling them further will not produce AGI. Full stop.

His argument isn't fringe. LeCun contends that LLMs have no persistent understanding of the physical world, no causal reasoning, and no ability to plan beyond token-level prediction. They're pattern-completion engines dressed up in impressive benchmarks. His proposed alternative—Joint Embedding Predictive Architecture (JEPA)—would build internal world models that predict outcomes in abstract representation space rather than pixel or word space. The problem? JEPA is years behind LLMs in practical deployment.

What makes LeCun dangerous to the "AGI by 2027" crowd isn't just his credentials (Turing Award, inventor of ConvNets). It's that his critique is structural. He isn't saying we need more compute or better data. He's saying the entire paradigm needs replacement. If he's right, the Metaculus median of 2031 is optimistic. If he's wrong, he's the most prominent speed bump between here and superintelligence.

Where the experts stand: AGI timeline predictions as of February 2026 range from Altman at 2025–2026 to Marcus at 2040–2075. The gap between the most optimistic (Altman, Amodei, Musk) and most skeptical (LeCun, Marcus) spans nearly half a century.
04

Hassabis Puts It at 2031–2034. That's the Conservative Bet.

DeepMind CEO Demis Hassabis is the rare figure in this debate who commands respect from both camps. The man who built AlphaGo and AlphaFold doesn't traffic in hype cycles. So when he tells the India AI Impact Summit that AGI is “five to eight years” away, that deserves a different weight than the same words from a fundraising deck.

His qualifier matters more than his number. Hassabis warned of “jagged intelligence”—the phenomenon where current models are genius-level at some tasks and bewilderingly incompetent at others. A model that aces the bar exam but can't reliably count objects in an image isn't generally intelligent; it's spectacularly uneven. Solving that unevenness, Hassabis suggests, is the real challenge, and it may require fundamentally new approaches to training and architecture.

Position Hassabis on the spectrum: he's more conservative than Altman or Musk, roughly aligned with the Metaculus median, and dramatically more optimistic than LeCun. In practice, his 2031–2034 range functions as the AI establishment's center of gravity—the "sensible" forecast that acknowledges both the staggering progress and the unsolved problems. Whether "sensible" is the right posture during an exponential curve is another question entirely.

05

Sam Altman Says AGI Is “Basically Here.” Depends Who’s Asking.

You know the AGI debate has entered its postmodern phase when the CEO of the world's most prominent AI lab declares victory using the word “spiritual.” Sam Altman told interviewers this week that OpenAI has “basically built AGI, or very close to it,” before walking it back to a “spiritual milestone” rather than a technical one.

The response was immediate and bifurcated. Researchers pointed out that GPT-5 still hallucinates, can't maintain consistent reasoning across long contexts without scaffolding, and has no embodied world understanding. Critics called it goalpost-shifting: define AGI loosely enough and you can claim it before your next fundraise. Supporters argued that the collective capability of AI systems—coding agents, research assistants, scientific tools—already exceeds what most people imagined AGI would look like in 2020.

Here's what Altman's comment actually reveals: the definition wars are now the main event. If you define AGI as “a system that can do any cognitive task a human can do, with human-level reliability”—we're nowhere close. If you define it as “AI systems that collectively match or exceed human-level performance across most economically valuable tasks”—we're arguably in the foothills. The timeline question can't be answered until the definition question is settled, and nobody is settling it.

Also worth noting: Google's Gemini 3 “Deep Think” upgrade shipped the same day, pushing inference-time compute to match OpenAI's o-series reasoning. The frontier is converging. Altman's claim of victory may be premature, but the terrain he's claiming is very real.

06

Anthropic Raises $30 Billion Because Nobody Builds AGI on a Budget

Money talks, and Anthropic's Series G is screaming. Thirty billion dollars at a $380 billion valuation, led by GIC and Coatue, with continued backing from Amazon and Google. The funds are explicitly earmarked for “next-generation infrastructure”—compute for training runs that will cost tens of billions of dollars each.

The AGI arms race: frontier AI lab funding rounds have escalated from roughly $1B in 2019 to $30B in February 2026, with Anthropic's Series G representing a 3x jump in just 12 months.

The numbers have a clarity that prediction markets and philosophical debates lack. If Anthropic's investors—some of the most sophisticated capital allocators on the planet—are writing checks this size, they're not betting on incremental chatbot improvements. They're pricing in a world where the next generation of models requires $10–20B training runs, and where whoever builds the most capable system first captures asymmetric value. This is AGI-or-bust capital deployment.

And Anthropic isn't alone. xAI raised $12B in 2025. OpenAI took $6.6B. The total capital committed to frontier AI development in the last 18 months exceeds the GDP of most countries. That doesn't tell you when AGI arrives, but it tells you that the people with the most to lose genuinely believe the scaling laws haven't topped out. As one Anthropic executive put it: “We are entering a new phase of capital intensity required to build safe, reliable, and steerable AI systems.” That's the polite way of saying: the next model costs a nation-state's budget, and they think it's worth it.

The Map Is Not the Territory

Every expert prediction about AGI is, at bottom, a confession of uncertainty wearing the clothes of precision. The honest answer to "when will AGI arrive?" is that the question itself needs better engineering. Watch the benchmarks, sure—but watch the definitions harder. The goalposts will tell you more than the scoreboard. The one thing we can say with confidence: nobody building these systems is acting like they have decades to spare.