Artificial Superintelligence

The Gods Are Impatient

From nuclear power deals to trillion-dollar valuations, the race to build minds greater than our own has never moved faster—or felt more uncertain. Here's what January 2026 tells us about when superintelligence might actually arrive.

01

Davos Pours Cold Water on the Superintelligence Hype

For three years running, Davos has been an AI victory lap—founders announcing billion-dollar raises while snow-dusted executives nodded along about "transformative potential." This year was different. The central theme wasn't possibility but skepticism: investors asking hard questions about when these models will actually make money, not just headlines.

The conversation has shifted from "generative hype" to what organizers called "the hard reality of implementation." Reuters reported that investors are becoming "more discerning," actively punishing companies that spend on GPUs without showing clear ROI. Translation: the capital markets are losing patience with promises of superintelligence that arrive "any day now."

This matters because ASI requires money—lots of it. Meta's 6.6 gigawatt nuclear deal (see below) doesn't fund itself. If the "AI Bubble" narrative gains traction, the trillion-dollar clusters required for recursive self-improvement may never get built. The optimistic timeline assumes infinite capital. Davos just reminded us that capital has limits.

[Chart: AI lab valuations, January 2026]
The capital flowing into frontier AI labs suggests investors still believe, but Davos skepticism could slow the spigot.
02

Anthropic's $350B Valuation Proves "Safety" Sells

Anthropic is now worth $350 billion. Let that sink in. A company founded in 2021 by researchers who left OpenAI over safety concerns has achieved a valuation that exceeds that of Boeing, Goldman Sachs, or Starbucks individually. Their 2026 revenue is projected at $18 billion, with $55 billion expected in 2027.

What's driving this? Enterprise trust. While OpenAI chases consumer products like earbuds and viral chatbots, Anthropic has quietly become the default choice for regulated industries. Their Claude Cowork autonomous desktop agent and Claude for Healthcare are winning contracts that require airtight safety guarantees.

The ASI implications are significant: Anthropic now has the war chest to stay in the scaling race without cutting corners on alignment research. Critics have long argued that safety-focused labs would lose the race to "move fast and break things" competitors. The market is betting otherwise. Whether that bet pays off—whether safety-first approaches can actually reach superintelligence—remains the central question of our decade.

03

NVIDIA Swallows Groq, Now Controls the Entire Stack

NVIDIA just paid $20 billion for Groq, the inference chip startup famous for absurdly fast language model responses. On paper, this is about latency—Groq's LPU technology enables real-time voice interactions that feel instantaneous. In practice, this is about control.

NVIDIA now owns both ends of the AI compute pipeline: training (via the Hopper and Blackwell GPUs that power nearly every major lab) and inference (via Groq's specialized chips that power deployment). When a single company controls the physical substrate of intelligence, questions about ASI timelines become questions about NVIDIA's roadmap.

This consolidation has geopolitical implications too. China's DeepSeek just proved efficient architectures can rival US giants without cutting-edge chips. But efficient training means nothing if you can't deploy efficiently. By owning inference hardware, NVIDIA maintains leverage even over competitors who escape their training GPU dominance. The path to superintelligence now runs through Santa Clara—whether the world likes it or not.

04

DeepSeek-V4 Shatters American Compute Supremacy

The assumption has always been that superintelligence would be built in American data centers, by American companies, on American chips. DeepSeek-V4 just complicated that narrative. The Chinese lab released an open-weight Mixture-of-Experts model that surpasses GPT-4.5 Turbo on HumanEval and MATH benchmarks—and runs on consumer hardware.

"Consumer hardware" here means dual RTX 4090s. Not a cluster of H100s. Not a billion-dollar data center. Two gaming GPUs you can buy at Best Buy. DeepSeek's "Silent Reasoning" architecture achieves this through radical efficiency, with inference costs estimated at just 40% of OpenAI's equivalent. This is what analysts call "the first fully sovereign reasoning model."
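
DeepSeek hasn't published a full spec, but back-of-envelope arithmetic shows why a sparse Mixture-of-Experts model could plausibly serve from two 24 GB cards. The numbers below (total and active parameter counts, 4-bit quantization) are illustrative assumptions, not V4's actual configuration:

```python
# Back-of-envelope: can a sparse MoE model serve from two RTX 4090s?
# All model numbers are illustrative assumptions, not DeepSeek-V4 specs.

gpu_vram_gb = 24                   # per RTX 4090
total_vram_gb = 2 * gpu_vram_gb    # 48 GB across both cards

total_params_b = 90                # assumed total parameters (billions)
active_params_b = 10               # assumed parameters routed per token
bytes_per_param = 0.5              # 4-bit quantization ~ 0.5 bytes/weight

weights_gb = total_params_b * bytes_per_param   # 45 GB: fits in 48 GB
active_gb = active_params_b * bytes_per_param   # ~5 GB read per token

print(f"Quantized weights: {weights_gb:.0f} GB vs {total_vram_gb} GB of VRAM")
print(f"Weights touched per token: {active_gb:.0f} GB")
```

The second number is the one that matters for speed: sparse routing means each token touches only a fraction of the weights, which is where an inference-cost advantage would come from.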

[Chart: Expert ASI timeline predictions]
Expert predictions vary wildly, from Elon Musk's 2026-2030 window to Gary Marcus's suggestion that true ASI may be 50+ years away.

Meanwhile, Google maintains "loud silence" about Gemini 4. Industry leaks suggest it could be a 100-trillion-parameter model, 10x larger than anything publicly known. If true, Google is preparing to brute-force the scaling laws while DeepSeek proves you don't have to. The race to ASI just became a three-way competition between American scale, Chinese efficiency, and whoever figures out which approach actually matters.

05

GPT-5.2 Codex: The Recursive Self-Improvement Threshold?

OpenAI's GPT-5.2 Codex is now generally available, and it represents something more than an incremental improvement. This is a dedicated "Thinking" model architecture optimized specifically for autonomous coding—the task that AI researchers have long identified as the critical threshold for recursive self-improvement.

Here's the theory: once an AI can reliably improve its own codebase, it can improve the AI that improves its codebase, creating a positive feedback loop that accelerates toward superintelligence. "Superhuman Coder" status isn't just a benchmark—it's potentially the trigger for takeoff.
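
Here's that loop as a toy model: each generation's coding ability determines how much it can improve the next. The growth rule and constants are purely illustrative, not a forecast of any real system:

```python
# Toy model of the recursive self-improvement loop (illustrative only;
# the growth rule and constants are arbitrary, not a forecast).

def capability_trajectory(c0: float, k: float, steps: int) -> list[float]:
    """Each generation improves the next in proportion to its own
    ability: c[t+1] = c[t] * (1 + k * c[t])."""
    caps = [c0]
    for _ in range(steps):
        c = caps[-1]
        caps.append(c * (1 + k * c))
    return caps

# A system starting below a reference coding ability (c = 1.0) compounds
# slowly; one starting just above it takes off within a few generations.
for start in (0.5, 1.05):
    trajectory = capability_trajectory(c0=start, k=0.1, steps=10)
    print(f"c0={start}: " + " -> ".join(f"{c:.2f}" for c in trajectory))
```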

[Chart: AGI/ASI milestones, 2020-2035]
The road from GPT-3 to potential ASI: achieved milestones vs. projected breakthroughs.

Reports indicate Sam Altman issued an internal "code red" to refocus the company on competing with Google's Gemini, suggesting the race dynamics are intensifying. OpenAI is also preparing for an IPO at a $1 trillion valuation and a consumer device launch ("Sweetpea" AI earbuds). Whether they're building superintelligence or consumer electronics depends on which Sam Altman quote you read.

06

Meta's 6.6 Gigawatt Bet: Energy Is the Bottleneck

Meta just signed one of the largest corporate energy procurement contracts in history: 6.6 gigawatts of nuclear power from Vistra, TerraPower, and Oklo. This isn't for running Facebook or Instagram. This is explicitly for the Prometheus AI Supercluster, scheduled to come online later in 2026.

[Chart: Training compute and power requirements over time]
From GPT-3 to Meta's Prometheus: the exponential cost of building intelligence.

To put 6.6 GW in perspective: that's roughly the output of six nuclear power plants, or enough to power 5 million homes. Meta is building infrastructure at civilizational scale to train models that don't exist yet. Mark Zuckerberg is betting that the path to superintelligence isn't just about algorithms—it's about energy.
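
Both comparisons hold up on the back of an envelope, assuming a typical ~1.1 GW reactor and an average US home drawing about 1.2 kW (our assumptions, not figures from the deal):

```python
# Sanity-checking the 6.6 GW comparisons (reactor output and household
# draw are rough assumptions, not figures from the Meta contract).

contract_gw = 6.6
reactor_gw = 1.1      # typical large nuclear reactor output
home_kw = 1.2         # rough average continuous draw of a US home

reactors = contract_gw / reactor_gw      # ~6 reactor-equivalents
homes = contract_gw * 1e6 / home_kw      # convert GW -> kW, then divide

print(f"~{reactors:.0f} reactors, ~{homes / 1e6:.1f} million homes")
```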

This reframes the ASI timeline question. It's not just "when will we figure out the architecture?" but "when will we have enough power to run it?" If Meta's Prometheus cluster comes online by late 2026, they'll have the physical capacity to train models orders of magnitude larger than GPT-4. Whether that scale produces superintelligence—or just more expensive autocomplete—is the trillion-dollar question.
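
How much compute does 6.6 GW actually buy? A rough estimate, using assumed efficiency figures for H100-class accelerators and a commonly cited ~2e25 FLOP estimate for GPT-4's training run (none of these numbers are Meta disclosures), puts a single 90-day run nearly three orders of magnitude above GPT-4:

```python
# Rough upper bound on training compute from 6.6 GW of power.
# Every efficiency figure here is an assumption, not a Meta disclosure.

power_w = 6.6e9                # contracted power
gpu_fraction = 0.5             # assumed share of power reaching accelerators
flops_per_watt = 5e11          # assumed achieved FLOP/s per watt (H100-class)
run_seconds = 90 * 24 * 3600   # a 90-day training run

total_flops = power_w * gpu_fraction * flops_per_watt * run_seconds
gpt4_flops = 2e25              # commonly cited external estimate for GPT-4

print(f"~{total_flops:.1e} FLOPs, about {total_flops / gpt4_flops:.0f}x GPT-4")
```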

The Only Honest Answer

When will ASI arrive? Expert predictions range from 2027 to "never." The optimists point to recursive self-improvement in coding models. The skeptics point to Davos bubble talk and fundamental barriers we haven't identified yet. The honest answer is that nobody knows—and anyone claiming certainty is selling something. What we can say: the infrastructure is being built, the capital is flowing, and the race has never moved faster. Whether we're two years away or twenty, this is the most consequential technology bet in human history.