Vibe Coding × Open Source

The Vibes Are Rotting

Vibe coding went from Karpathy's casual tweet to Collins Dictionary's Word of the Year. It spawned a $36 billion industry. And it might be killing the open source ecosystem that made it possible.

01

The Paper That Said the Quiet Part Out Loud

When researcher Miklós Koren titled his paper "Vibe Coding Kills Open Source," he wasn't reaching for clickbait — he was building an economic model that makes the threat uncomfortably precise. The argument: vibe coding raises individual productivity by lowering the cost of using existing code, but it simultaneously drains the engagement channels through which open source maintainers earn their living.

The mechanism is elegant and devastating. When a developer uses Cursor or Claude to scaffold a project, they never visit the Stack Overflow thread that would have answered their question. They never open the documentation page that funds the maintainer through ad revenue. They never file the bug report that alerts the project to a real issue. Access to ChatGPT alone reduced Stack Overflow activity by 25% within six months. Tailwind CSS's creator reports docs traffic down 40% and revenue down nearly 80%, even as npm downloads keep climbing.

The paper's most uncomfortable prediction: if vibe coding comes to mediate final-user consumption of open source (not just developer-to-developer tooling), the equilibrium features lower entry barriers, lower average quality, and a hollowed-out middle where the serious maintainers who held everything together simply leave. The demand-diversion channel isn't a bug in the system; it's the system working exactly as vibe coding's incentives dictate.
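
The diversion mechanism is simple enough to sketch. The toy Python below is not Koren's actual model, just an illustration under one loud assumption: maintainer income scales with the engagement that still reaches the project, while raw usage (downloads) is untouched by the diversion.

```python
# Toy sketch of the demand-diversion mechanism (illustrative only, NOT
# the model from Koren's paper). Assumption: maintainer income scales
# with direct engagement (docs visits, sponsor conversions), while raw
# usage (downloads) is unaffected.

def maintainer_income(usage: float, diversion: float, rate: float = 0.01) -> float:
    """Income from whatever engagement still reaches the project."""
    engagement = usage * (1.0 - diversion)
    return engagement * rate

usage = 1_000_000  # monthly downloads, held constant throughout
for d in (0.00, 0.25, 0.40, 0.80):
    print(f"diversion {d:4.0%}: ${maintainer_income(usage, d):>6,.0f}/month")
```

The 25% and 40% diversion rates are the Stack Overflow and Tailwind traffic figures from above. Note that Tailwind's actual revenue fell faster than its traffic, so the linear assumption here is, if anything, optimistic: downloads never move, and income still collapses.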

02

curl Pulls the Plug After Drowning in AI Slop

After 87 confirmed vulnerabilities and over $100,000 paid to researchers since 2019, curl creator Daniel Stenberg killed the project's bug bounty program on January 31, 2026. The reason isn't funding, strategy, or a change in security philosophy. It's that AI turned the submission pipeline into an open sewer.

The numbers are stark: in previous years, over 15% of submissions were confirmed vulnerabilities. By 2025, that rate had cratered below 5%. In the first weeks of 2026 alone, seven reports arrived within a 16-hour window; some were actual bugs, but none were security vulnerabilities. Twenty submissions in a single week, with a hit rate approaching zero. Stenberg didn't mince words: "The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live."

curl isn't alone. Django updated its security docs to explicitly reject AI-generated reports. Node.js imposed minimum HackerOne Signal scores. libxml2 maintainer Nick Wellnhofer ended embargoed vulnerability reports entirely. The pattern is clear: the financial incentive structure of bug bounties, designed to attract security researchers, now rewards LLM prompt-and-submit operations at industrial scale. Stenberg tried banning AI submissions in May 2025. It didn't work. The only remaining move was to shut the door.

The asymmetry problem: it takes seconds to generate a plausible-looking vulnerability report with an LLM, but hours for a maintainer to prove it's hallucinated. When you have three hours per week to contribute to open source, one false report wipes out your entire capacity.
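
A back-of-the-envelope calculation makes the asymmetry concrete. The per-report hour costs below are assumptions for illustration, not curl's published figures; only the 20-submissions-a-week volume comes from curl's reported experience.

```python
# Triage math for a part-time maintainer. Hour costs are assumed for
# illustration; the 20-submissions/week figure is from curl's reports.

WEEKLY_HOURS = 3.0   # volunteer capacity per week
COST_SLOP = 3.0      # hours to debunk one hallucinated report (assumed)
COST_REAL = 1.0      # hours to confirm one real report (assumed)

def weeks_burned(n_slop: int, n_real: int) -> float:
    """Weeks of volunteer capacity consumed by one week's submissions."""
    return (n_slop * COST_SLOP + n_real * COST_REAL) / WEEKLY_HOURS

# A curl-scale week: 20 submissions, hit rate near zero.
print(f"{weeks_burned(n_slop=19, n_real=1):.1f} weeks of capacity")  # 19.3
```

Under these assumptions, one week of slop consumes nearly five months of a three-hour-a-week maintainer's time. The spam doesn't need to be good; it only needs to be cheap.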

03

$6.6 Billion Says the Vibes Will Keep Flowing

While maintainers drown in AI slop, the companies building the slop machines are printing money. Lovable — the Stockholm-based "vibe coding" platform formerly known as GPT Engineer — closed a $330 million Series B at a $6.6 billion valuation, led by CapitalG and Menlo Ventures with participation from NVIDIA's venture arm, Salesforce Ventures, and Databricks Ventures.

The valuation more than tripled in five months. Revenue hit $200 million ARR just twelve months after launch, with Klarna, Uber, and Zendesk among its customers. And Lovable isn't even the biggest player: Cursor raised $2.3 billion in November at a staggering $29.3 billion valuation, with $1 billion in annualized revenue.

[Chart: vibe coding startup valuations through 2025. Cursor's value nearly tripled in five months; Lovable's nearly quadrupled. Source: CNBC, TechCrunch.]

Here's the tension that makes this story uncomfortable: the capital markets are betting tens of billions that vibe coding is the future of software. That future requires open source as raw material; every prompt-to-app tool runs on open source frameworks, libraries, and infrastructure. But none of that $36+ billion in combined valuation flows back to the maintainers whose work makes it possible. The extraction has never been more explicit, or better funded.

04

The Code Machines Are Fast. They're Also 1.7x Buggier.

CodeRabbit's "State of AI vs Human Code Generation" report analyzed 470 real-world open source pull requests and found what experienced developers suspected but couldn't prove: AI-generated code introduces significantly more defects across every major category of software quality. Not just some categories. All of them.

[Chart: AI-generated code produces more issues than human-written code across all quality categories: logic, maintainability, security, and performance. Source: CodeRabbit Report, Dec 2025.]

Logic and correctness errors: 1.75x more. Code quality and maintainability: 1.64x more. Security findings: 1.57x more. Performance issues: 1.42x more. But the security numbers are where it gets genuinely alarming. AI code was 2.74x more likely to introduce XSS vulnerabilities, 1.91x more likely to create insecure object references, and 1.82x more likely to implement insecure deserialization.

[Chart: AI-generated code is nearly 3x more likely to introduce XSS vulnerabilities than human-written code. Source: CodeRabbit Report.]
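
Turning those multipliers into absolute counts requires a baseline defect rate, which the report expresses only relatively. The human baselines below are placeholder assumptions; substitute your own codebase's numbers.

```python
# Converting CodeRabbit's relative multipliers into absolute defect
# counts per 1,000 PRs. Baseline (human) rates are assumed placeholders;
# only the multipliers come from the report.

BASELINE = {  # assumed human-written defects per PR
    "logic/correctness": 0.40,
    "maintainability":   0.30,
    "security":          0.10,
    "performance":       0.10,
}
MULTIPLIER = {  # from the CodeRabbit report
    "logic/correctness": 1.75,
    "maintainability":   1.64,
    "security":          1.57,
    "performance":       1.42,
}

PRS = 1_000
for cat, base in BASELINE.items():
    human = base * PRS
    ai = human * MULTIPLIER[cat]
    print(f"{cat:<18} human {human:4.0f} | AI {ai:4.0f} | extra {ai - human:+4.0f}")
```

Under these invented baselines, a team merging 1,000 AI-assisted PRs absorbs roughly 300 extra logic defects and 57 extra security findings it wouldn't have seen from human authors.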

This isn't an argument against AI coding tools — it's an argument for understanding what they actually produce. Sonar frames it as an "Engineering Productivity Paradox": the sheer volume of AI-generated code creates bottlenecks in review and verification that offset the speed gains. Cortex's 2026 benchmark found PRs per author up 20% while incidents per PR rose 23.5% and change failure rates climbed 30%. Google's 2025 DORA report linked a 90% increase in AI adoption to a 9% climb in bug rates, 91% more code review time, and 154% larger pull requests. Speed without quality isn't velocity — it's technical debt at scale.
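
Those Cortex percentages compound, which is easy to miss when they're quoted separately:

```python
# Compounding the Cortex 2026 benchmark figures: more PRs per author
# times more incidents per PR equals far more incidents overall.

pr_throughput = 1.20        # PRs per author: +20%
incidents_per_pr = 1.235    # incidents per PR: +23.5%

total = pr_throughput * incidents_per_pr
print(f"total incidents: +{total - 1:.1%}")  # -> +48.2%
```

A 20% throughput gain bought a 48% increase in total incidents. That's the paradox in two lines of arithmetic.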

05

Cursor Hits $29 Billion and Nobody Blinks

Cursor closed a $2.3 billion Series D at a $29.3 billion valuation — nearly tripling its $9.9 billion mark from just five months earlier. The round was led by Thrive, with a16z, Accel, Coatue, NVIDIA, and Google all participating. Revenue crossed $1 billion annualized. Enterprise adoption grew 100x in 2025.

For context: OpenAI reportedly approached Cursor's parent company Anysphere about an acquisition earlier in 2025. It didn't go anywhere. Anthropic disclosed that Claude Code alone generates over $500 million in run-rate revenue. Cognition's AI developer platform Devin raised $400 million at a $10.2 billion valuation. The AI coding market isn't hot — it's incandescent.

What makes the Cursor story particularly relevant to open source: the tool's power comes from understanding codebases — which overwhelmingly means open source codebases. Every Cursor user who scaffolds a project from Next.js, Express, or FastAPI is consuming open source infrastructure through an AI intermediary. The maintainers who built those frameworks get zero signal about what's working, what's breaking, or who's using their code. The mediation layer has become opaque, and $29 billion worth of capital says that's by design.

06

You Think You're Faster. The Clock Says Otherwise.

The METR study is the one nobody in the AI coding industry wants to talk about. A rigorous randomized controlled trial — 16 experienced open source developers, 246 tasks, screen recordings, self-reported timings — found that when developers were allowed to use AI tools (primarily Cursor Pro with Claude 3.5/3.7 Sonnet), they took 19% longer to complete issues.

[Chart: the AI productivity paradox. Developers expected a 24% speedup and perceived a 20% speedup, but measured 19% slower. Source: METR RCT Study, Jul 2025.]

The perception gap is almost more interesting than the result itself. Developers expected AI to speed them up by 24%. Even after experiencing the measured slowdown, they still believed AI had made them 20% faster. The vibes were excellent. The clock disagreed.

The mechanism matters for understanding the open source impact. Screen recordings showed more idle time during AI-assisted coding — not just "waiting for the model" time but straight-up no activity. Developers spent 9% of their time reviewing and cleaning AI outputs, plus 4% waiting for generations. The AI behaved, in their words, "like an inexperienced team member struggling with backwards compatibility or proposing edits in the wrong locations." These were developers with an average of five years and 1,500 commits of experience on their projects. They were already fast. The AI introduced overhead without improving outcomes.
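
Putting the measured fractions into a concrete time budget shows how strange the result is. The two-hour solo baseline below is an arbitrary assumption; the 19%, 9%, and 4% figures are from the study, and only the ratios matter.

```python
# Time decomposition consistent with the METR numbers. The solo
# baseline is an arbitrary assumption; the ratios are what matter.

baseline = 2.00                 # hours to finish the task without AI
with_ai = baseline * 1.19       # observed 19% slowdown

review = 0.09 * with_ai         # reviewing and cleaning AI output
waiting = 0.04 * with_ai        # waiting on generations
everything_else = with_ai - review - waiting

print(f"solo: {baseline:.2f}h   with AI: {with_ai:.2f}h")
print(f"  reviewing AI output:  {review:.2f}h")
print(f"  waiting on the model: {waiting:.2f}h")
print(f"  everything else:      {everything_else:.2f}h")  # 2.07h > solo
```

Even after subtracting the AI-specific overhead, the remaining work took longer than doing the whole task solo: on this accounting, the tool didn't just add review time, it appears to have slowed the core work itself.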

METR frames this as a snapshot of early-2025 capabilities. Models have improved since. But the study's deeper implication holds: when experienced developers on familiar codebases see a slowdown, the productivity narrative that justifies the entire vibe coding industry deserves far more scrutiny than it currently receives.

The Commons Has a Predator Problem

The irony is almost too perfect: vibe coding tools are built on open source, trained on open source, and deployed against open source. They extract value from maintainers' unpaid labor, divert engagement from the channels that fund maintenance, and flood contribution pipelines with noise that burns out the humans keeping critical infrastructure alive. The $36 billion in VC money isn't flowing to the commons — it's flowing to the companies that figured out how to mine the commons more efficiently. The vibes are immaculate. The foundations are cracking. Something has to give.