Cybersecurity × AI

The Billion-Dollar Bug Report

Vibe coding is writing software faster than humans can review it. The security industry just found its biggest growth market in a decade—and its worst nightmare.

Listen
A glowing digital shield cracking open to reveal cascading lines of code, symbolizing the vulnerability of AI-generated software
A crumbling golden DeFi vault with digital coins spilling through glowing red cracks, lines of AI code scrolling across the walls
01

$1.78 Million Evaporated Because an AI Forgot How Oracles Work

Here’s a scenario that will keep every DeFi founder awake tonight. Moonwell, a lending protocol on Base, activated DAO proposal MIP-X43 last week. The code in pull request 578—co-authored by Anthropic’s Claude Opus 4.6—contained an oracle configuration error so fundamental it reads like a textbook example of what not to do: instead of multiplying the cbETH/ETH exchange rate by the ETH/USD price, the system transmitted only the token ratio. Coinbase Wrapped ETH briefly priced at $1.12 instead of ~$2,200.

The result: $1.78 million in bad debt, drained faster than anyone could react. Smart contract auditor Pashov linked it directly to “vibe coding”—Andrej Karpathy’s term for the practice of describing what you want and letting the AI handle the implementation. “This is what happens when you fully give in to the vibes and forget the code exists,” Pashov wrote.

The uncomfortable truth: a human submitted that PR. A human merged it. The AI wrote flawed oracle logic, but the review process failed to catch a math error that any junior smart contract developer would have flagged. This isn’t a story about AI being dangerous—it’s a story about humans trusting AI output with the same confidence they’d give a senior engineer’s code. That gap between perceived and actual reliability is where $1.78 million went.

A developer's IDE split screen showing friendly AI suggestions on one side and hidden command injection payloads on the other
02

Your AI Coding Assistant Has a Remote Code Execution Flaw

Forget about the code AI writes for a moment. What about the security of the AI tools themselves? Microsoft’s February Patch Tuesday revealed three critical vulnerabilities in GitHub Copilot that should make every developer reconsider their threat model.

CVE-2026-21516 is the headline grabber: a command injection flaw in Copilot for JetBrains scoring 8.8 CVSS that allows remote code execution. An attacker can manipulate the context sent to the AI model to inject shell commands that the IDE executes with user privileges. SentinelOne also documented CVE-2026-21518 (security feature bypass in VS Code) and CVE-2026-21257 (privilege escalation in Visual Studio).

Orca Security’s “RoguePilot” research demonstrated how straightforward exploitation is—anyone familiar with command injection can find the input field that feeds into the vulnerable command and inject shell metacharacters. The attack surface has expanded in an unexpected direction: it’s not just that AI writes insecure code, it’s that the tools themselves are entry points. When your coding assistant can be weaponized, the entire development pipeline becomes a target.

An abstract network of interconnected nodes where 36 percent pulse in toxic red, showing infection spreading through a software supply chain
03

One in Three AI Agent Skills Is Trying to Hack You

Snyk’s security researchers just published the first comprehensive audit of the AI agent skills ecosystem, and the numbers are staggering. Of 3,984 skills scanned from ClawHub and skills.sh, 36.8% contain security flaws—including 1,467 vulnerable skills and active malicious payloads targeting Claude Code, Cursor, and Copilot users.

The “ToxicSkills” report found 76 agent skills containing malicious payloads hidden in their markdown instructions—prompt injection attacks that execute when the AI reads the skill’s documentation. Another 534 skills (13.4%) have at least one critical-level security issue, from malware distribution to exposed secrets.

Bar chart showing vulnerability rates across multiple studies: Escape.tech 35.7%, Snyk ToxicSkills 36.8%, Veracode 45%, Wiz 20%, Aikido 20%
Multiple independent studies converge on the same conclusion: roughly one-third of AI-generated code contains security flaws. Sources: Escape.tech, Snyk, Veracode, Wiz, Aikido (2025–2026)

Here’s what makes this a supply chain crisis: skill submissions jumped from under 50 per day in mid-January to over 500 by early February—a 10x increase in weeks. The ecosystem is growing faster than anyone can audit it. This is npm’s malicious package problem on steroids, except the attack vector is the AI itself. When a developer installs a skill, they’re not just running code—they’re giving an AI new instructions that it will follow with the same authority as the developer’s own commands.

An aerial view of thousands of tiny glowing application windows in a grid, roughly one in three pulsing with red warning indicators
04

5,600 Apps Scanned. 2,000 Vulnerabilities Found. Welcome to the Long Tail.

Escape.tech didn’t just theorize about vibe coding risks. They scanned 5,600 publicly accessible applications built on platforms like Lovable.dev, Base44, Create.xyz, and Bolt.new—the zero-code and low-code tools that have become the default starting point for non-technical builders. What they found is a catalog of every OWASP Top 10 vulnerability, deployed to production and serving real users.

The headline numbers: 2,000+ vulnerabilities, 400+ exposed secrets, and 175 PII instances including medical records, bank account numbers, and phone numbers. Separately, Wiz Research found that 20% of vibe-coded apps contained exploitable security risks.

Donut chart showing vulnerability types: 32% exposed secrets and API keys, 25% injection flaws, 22% broken auth, 12% PII exposure, 9% misconfigured cloud. Adjacent stats: 2,000+ vulnerabilities, 400+ exposed secrets, 175 PII instances, 5,600 apps scanned, 1 in 9 apps leak DB keys
Vulnerability type distribution and impact metrics from Escape.tech’s scan of vibe-coded applications. The dominant category—exposed secrets and API keys—reflects AI models treating credentials as just another string to place wherever convenient.

The pattern is always the same: the AI treats an API key like a UI string—just another piece of text to place where it fits, regardless of security context. Supabase service-role keys hardcoded in frontend bundles. Row Level Security policies never configured. Auth tokens stored in localStorage instead of httpOnly cookies. Each of these is a known anti-pattern that experienced developers avoid instinctively. But vibe coding means the person deploying the app may have never heard of RLS, and the AI optimized for “it works” rather than “it’s safe.”

A stylized unicorn made of interconnected security shield icons and lock symbols, glowing in teal with golden accents
05

The Fastest Cybersecurity Unicorn in European History

If vibe coding is a disease, Aikido Security just raised $60 million at a $1 billion valuation to sell the cure. The Ghent-based startup became the fastest European cybersecurity company to reach unicorn status, backed by DST Global, PSG Equity, and Notion Capital.

Their pitch is precisely calibrated for the vibe coding era: a unified platform that secures code, cloud, and runtime in one system, with AI-powered automated remediation. Their latest product, Aikido Attack, deploys hundreds of specialized agents that hunt vulnerabilities like hackers, validate exploitability, and provide built-in fixes. The customer roster reads like a who’s-who: the Premier League, Revolut, SoundCloud, Niantic. Over 100,000 teams and counting.

Bar chart showing AI security deal sizes: Snyk $300M, Wiz $12B valuation, Endor Labs $70M, Aikido $60M at $1B valuation, Google/Wiz $32B acquisition
Follow the money: AI security deal sizes have escalated dramatically from Series B rounds to the largest acquisition in Google’s history. Sources: Crunchbase, SEC filings, European Commission (2024–2026)

The math tells the story. Aikido grew revenue 5x and customers nearly 3x in a single year. When security teams can no longer review code at the speed it’s being generated, the only answer is automated defense that matches the pace of AI-generated code. Aikido isn’t just selling security tooling—they’re selling the philosophical shift from “human gatekeepers” to “AI defending against AI.” And venture capital is placing a billion-dollar bet that they’re right.

Two massive corporate entities merging like tectonic plates colliding, creating a shockwave of light between them
06

Google Bets $32 Billion That Cloud Security Is the Next Platform War

On February 10, the European Commission granted unconditional antitrust clearance for Alphabet’s $32 billion acquisition of Wiz—the largest deal in Google’s history and the clearest signal yet of where Big Tech thinks the next platform war will be fought.

The backstory is instructive. In July 2024, Wiz famously walked away from an initial $23 billion offer to pursue an IPO. Google came back in March 2025 with $32 billion in cash—a 39% premium that says everything about how urgently Google needs cloud security capabilities. The EU ruled that Google remains a “challenger” in cloud infrastructure (trailing Amazon and Microsoft), and dismissed concerns about access to sensitive security data.

This isn’t just a cloud play. Wiz launched “AI Agents for SecOps” in January 2026—autonomous security agents that scan, triage, and remediate cloud vulnerabilities. Merged with Google’s AI capabilities, this creates a vertically integrated security stack for the AI-native development era. When every developer is a vibe coder and every deployment is a potential attack surface, the company that owns the security layer owns the platform. Google just spent $32 billion to make sure that company is them.

The Vibes Check Out—for the Security Industry

Here’s the uncomfortable truth at the center of the vibe coding boom: the security industry needed this. For years, application security was a mature, slow-growth market fighting for budget against flashier categories. Then AI-generated code arrived and created more vulnerable software in six months than the previous decade of human developers. Vulnerability scanners suddenly have ten times more surface area to scan. Automated remediation tools have a concrete, measurable value proposition. Security startups are raising at unicorn valuations because the Total Addressable Market just expanded by an order of magnitude.

Is that a boon? Absolutely—if you’re selling the shovels. For the rest of us, the question isn’t whether to use AI for coding. It’s whether we’re willing to invest in the security infrastructure to match. The Moonwell hack didn’t happen because Claude wrote bad code. It happened because a human team treated AI output as trusted input. Fix that assumption, and vibe coding might actually make software safer. Ignore it, and $1.78 million will look like a rounding error.