Software Engineering × AI

The Bottleneck Moved

93% of developers adopted AI coding tools. Productivity barely budged. The real transformation isn't about writing code faster—it's about what happens after the code is written.

Neural network tendrils extending from human hands into a vast digital horizon, where biological meets algorithmic in deep teal light
A digital shield deflecting streams of corrupted code, glowing teal against architectural blueprints
01

Your AI Wrote the Code. Who's Checking for Landmines?

Anthropic just shipped something that tells you everything about where software engineering is headed: an AI security tool designed specifically to catch vulnerabilities introduced by AI code generation. Let that sink in. We've reached the stage where AI needs to check AI's homework.

Claude Code Security integrates directly into the coding workflow and "reasons through code like a security researcher," scanning for the specific class of bugs that LLMs love to produce—the ones that compile cleanly, pass tests, and quietly open your database to the world. It even suggests automated patches.
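The class of bug described here can be made concrete. A hedged, illustrative sketch (not drawn from Anthropic's tool; the function names are hypothetical): an AI-suggested query built by string interpolation compiles, passes every happy-path test, and is still injectable, next to the parameterized form a security pass would steer it toward.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Typical AI-suggested pattern: string interpolation builds the SQL.
    # Works for every normal username, so the unit tests stay green.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, closing the hole.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# Happy path: both functions agree, and a shallow test suite passes.
assert find_user_unsafe(conn, "alice") == find_user_safe(conn, "alice")

# Hostile input: the interpolated version returns every row.
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # leaks all users: [(1,), (2,)]
print(find_user_safe(conn, payload))    # returns []
```

The point is that nothing in compilation or a happy-path CI run distinguishes the two functions; only review that reasons about adversarial input does.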

This isn't paranoia. CodeRabbit's analysis this week found that AI-authored code carries 2.74x more security issues than human-written code. When a quarter of your production codebase is machine-generated (and climbing), that multiplier starts to look existential.

The signal: Security review is no longer a phase at the end of the pipeline. It's becoming an always-on layer that runs continuously alongside code generation—because the generation never stops.

An open-source code tree weighed down by cascading machine-generated pull requests, branches cracking under the weight
02

Open Source Is Drowning in Well-Intentioned AI Slop

Here's a problem nobody saw coming when we celebrated the democratization of code generation: open-source maintainers are getting buried under AI-generated pull requests that look right but aren't.

The pattern is insidious. An AI generates a PR that compiles, passes CI, reads like idiomatic code. A maintainer reviews it, maybe catches one issue, merges with minor edits. Six months later, a subtle logic error surfaces in production. Who owns that bug? The contributor who typed a prompt? The model that hallucinated the edge case? The maintainer who didn't have four hours to deeply audit what appeared to be a clean contribution?
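A hypothetical miniature of that pattern, assuming nothing about any specific project: a contributed `median` helper that is idiomatic, short, and correct for every odd-length input a reviewer is likely to try by hand, with the even-length case as the latent bug.

```python
import statistics

def median_pr(values):
    # The version from the hypothetical PR: reads like clean Python
    # and is correct whenever len(values) is odd.
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def median_fixed(values):
    # Correct for both parities: even-length inputs need the mean
    # of the two middle elements.
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# CI that only exercises odd-length inputs stays green:
assert median_pr([3, 1, 2]) == 2

# The bug that surfaces six months later:
print(median_pr([1, 2, 3, 4]))      # 3
print(median_fixed([1, 2, 3, 4]))   # 2.5
assert median_fixed([1, 2, 3, 4]) == statistics.median([1, 2, 3, 4])
```

Spotting this takes a reviewer who asks "what about even lengths?", which is exactly the kind of question that doesn't scale to a 500-line machine-generated diff.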

InfoWorld calls it "the asymmetry between the speed of AI code generation and the time required for human review." That asymmetry is crushing volunteer-run projects. It costs nothing to generate a 500-line PR. It costs hours to properly review one. And once merged, the maintainer inherits the support burden for years.

This is an economic attack on open source disguised as contribution. The intent is good. The net effect is toxic. Some major projects are now requiring contributors to certify that code wasn't AI-generated—a band-aid on a systemic problem.

A senior engineer surrounded by towering stacks of AI-generated code review documents, face lit by a teal monitor glow
03

The Bottleneck Was Never Code Generation

The numbers are in, and they're brutal. Senior engineers using AI tools are shipping 2.5x more code—but PR review times have ballooned 91% and PR sizes have inflated 154%. We didn't eliminate the bottleneck. We moved it.

Horizontal bar chart showing AI code quality metrics: PR review time +91%, PR size +154%, total issues +70%, security issues +174%, AI code in production +22%
The AI code quality tax on senior engineers. Source: CodeRabbit / SoftwareSeni (Feb 2026)

Here's what actually happened: AI made code generation trivially fast, so junior developers (and, let's be honest, senior ones too) started shipping more of it. But code review—the thing that actually catches architectural mistakes, subtle bugs, and security holes—still requires a human brain that understands the codebase. There's no --skip-review flag for production software.

The result is a crisis of throughput. Your most experienced engineers—the ones who should be designing systems and mentoring juniors—are now spending their days reading through mountains of machine-generated code, much of it solving problems that didn't need solving. CodeRabbit's data shows AI-generated code has 1.7x more issues overall and 2.74x more security vulnerabilities than hand-written code.

The uncomfortable truth: AI made the cheap part of software engineering cheaper. The expensive part just got more expensive.

A translucent glass brain with neural pathways dimming as bright AI circuits overlay and replace organic patterns
04

Cognitive Debt: The Bug You Can't See in the Diff

Technical debt has a new cousin, and it's worse. Researchers are calling it "cognitive debt"—what accrues when developers accept AI-generated code without building the mental models of why it works.

We all know the pattern. The AI suggests a solution. It compiles. Tests pass. You move on. But here's the thing: you never actually reasoned through the problem. Six weeks later, when that code breaks at 2 AM, you're debugging something you never understood in the first place. Your memory traces of the logic are "faded"—because they were never formed.

The same research found that experienced developers sometimes work slower with AI assistance. Not because the tools are bad, but because the cognitive shortcut of accepting generated code means they skip the deep reasoning that usually happens during implementation. When that reasoning doesn't happen, post-incident triage and debugging times explode.

The paradox: AI tools optimize for speed of creation. But understanding—the thing that makes you effective when things go wrong—only comes from the slow, deliberate act of thinking through the problem yourself. Speed and comprehension are in tension, and we're systematically choosing speed.

This doesn't mean you should stop using AI tools. It means you should treat every Accept button as a question: "Do I understand this well enough to debug it at 2 AM?" If the answer is no, take the five minutes to trace the logic. Future-you will be grateful.
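Those five minutes can be as simple as feeding the suggestion the inputs you'd actually see at 2 AM. A sketch of the habit, using a hypothetical accepted snippet:

```python
def chunk(items, size):
    # An accepted AI suggestion: split a list into fixed-size chunks.
    return [items[i:i + size] for i in range(0, len(items), size)]

# Five minutes of tracing: probe the boundaries before moving on.
print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]] — ragged tail handled
print(chunk([], 3))               # [] — empty input handled

# Tracing also surfaces the case the tests never wrote down:
# chunk([1], 0) calls range(0, 1, 0), which raises
# "ValueError: range() arg 3 must not be zero".
```

The tracing didn't just confirm the code works; it told you where it breaks, which is the knowledge you'll need when it does.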

A productivity graph as a mountain landscape hitting a perfectly flat plateau, a tiny developer figure standing at the edge looking ahead
05

93% Adoption. 10% Gain. The Plateau Nobody Expected.

This is the chart that should humble every AI hype cycle participant: 92.6% of developers now use AI coding assistants at least monthly. Three-quarters use them weekly. AI-authored code in production hit 26.9%, up from 22% last quarter. And productivity gains? Flatlined at roughly 10%—about 4 hours saved per week—since mid-2025.

Dual-axis chart showing AI tool adoption climbing to 93% while productivity gains plateau at 10% since Q2 2025
The adoption paradox: near-universal adoption, modest gains. Source: ShiftMag Developer Survey (Feb 2026)

The initial surge was real. AI autocomplete genuinely helped developers skip boilerplate, scaffold projects faster, and cut "time to the 10th PR" in half for new hires. But once every team had GitHub Copilot or Cursor or Claude Code, the marginal returns collapsed.

Why? Because the easy wins—autocomplete, boilerplate generation, test scaffolding—have natural ceilings. The hard parts of software engineering (architecture, debugging distributed systems, navigating ambiguous requirements) are exactly the parts that current AI tools handle poorly. The next wave of productivity gains will require fundamentally different tools: ones that reason about systems, not just individual functions.

The smart money is now moving toward AI for maintenance and onboarding rather than raw generation. If the tool can help a new hire understand a legacy codebase in days instead of months, that's worth more than any autocomplete.

Two diverging roads splitting from one, the left descending into fog with generic code symbols, the right ascending into gleaming city lights with AI motifs
06

The K-Shaped Market: Your Salary Depends on Which Fork You Took

The data is now unambiguous: the developer labor market has split into two trajectories, and they're moving in opposite directions. Median salaries for general software development roles dropped 9% year-over-year in both the US and UK. Meanwhile, AI/ML engineering roles climbed 4.1%, and AI infrastructure specialists are seeing double-digit increases.

Bar chart showing salary changes by role: general dev -9%, frontend -7%, backend -5%, DevOps -2%, AI/ML +4.1%, AI infra +12%
The K-shaped developer market: generalists fall, specialists rise. Source: WhatJobs (Feb 2026)

Companies are openly using the "AI narrative" to suppress wages for undifferentiated roles. The argument is straightforward, if cynical: if AI can write CRUD endpoints, why pay a premium for someone whose primary skill is writing CRUD endpoints? Whether or not the premise is true (see: the productivity plateau above), the perception is real enough to move compensation committees.

The more concerning trend is what WhatJobs flags beneath the salary data: junior hiring is declining significantly. If companies stop investing in growing new developers because AI handles the entry-level work, where do senior engineers come from in five years? You can't accelerate a career path that doesn't exist.

The advice writes itself, even if it's frustrating: specialize. The era of the well-compensated generalist who "knows a bit of everything" is narrowing fast. Systems thinking, security expertise, AI tool orchestration, and deep domain knowledge are what command a premium. "I can code" is table stakes.

The Rewrite That Wasn't

Here's the irony of 2026: generative AI promised to rewrite software engineering from the ground up. Instead, it exposed what was always true—that writing code was never the hard part. Understanding systems, reviewing work, maintaining quality, and developing expertise were always the bottleneck. AI didn't change the game. It revealed the game.

The developers who thrive in this environment won't be the ones who generate the most code. They'll be the ones who understand the most about what happens after the code is written—and why that's always been what mattered.