The Future of Software Engineering

The Last Software Engineer

This week, the industry stopped pretending AI coding tools are just fancy autocomplete. The question isn't whether agents will write code. It's whether anyone will be left to review it.

01

The Company That Wants to Fire You (Politely)

StrongDM unveiled something this week that made a lot of engineers choke on their cold brew: a "Software Factory" platform with a coding agent called Attractor designed explicitly for "non-interactive" code generation. Translation: no human review required.

Let that sit for a moment. We've spent the last three years collectively agreeing that AI coding tools are "copilots"—helpful assistants that augment human judgment. StrongDM just skipped that entire metaphor and went straight to "factory." The platform includes CXDB, an "AI Context Store" that creates immutable audit logs of every tool output and agent decision, which is their answer to the obvious question: if no human reviews the code, who's accountable when it breaks?
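StrongDM hasn't published CXDB's internals, but the load-bearing idea, an append-only log where every record commits to the one before it, is easy to sketch. Here's a minimal hash-chained audit log in Python; the field names and structure are illustrative guesses, not CXDB's actual schema:

```python
import hashlib
import json
import time

def append_record(log, actor, action, payload):
    """Append a tamper-evident record; each entry commits to the one before it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "ts": time.time(),
        "actor": actor,        # e.g. "attractor-agent" (hypothetical)
        "action": action,      # e.g. "tool_output" or "decision"
        "payload": payload,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify(log):
    """Recompute the chain; editing any record breaks every hash after it."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, "attractor-agent", "decision", {"chose": "bump lodash to 4.17.21"})
append_record(log, "attractor-agent", "tool_output", {"tests": "412 passed"})
assert verify(log)
```

The point of the chain is that accountability shifts from "who approved this?" to "can we reconstruct exactly what the agent did?" Edit any record after the fact and every hash downstream stops verifying.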

As Simon Willison noted in his characteristically measured analysis, the "human-out-of-the-loop" philosophy isn't just a product positioning choice—it's a bet on where enterprise software maintenance is heading. StrongDM isn't targeting greenfield development or creative system design. They're targeting the 80% of engineering work that's migrations, dependency updates, and boilerplate. The boring stuff. The stuff that, if we're honest, most engineers don't love doing anyway.

The uncomfortable question: what happens to the junior engineers who learn by doing that boring stuff?

02

The Great Schism: Do You Manage Assistants or Delegate to Agents?

A thread on Y Combinator's Hacker News crystallized a split many have felt brewing: the AI coding tool industry has divided into two irreconcilable philosophies. OpenAI is building "Collaborators"—agents that pause, ask, and wait for you to steer them. Anthropic is building "Operators"—agents designed to take the wheel entirely and come back when they're done.

[Chart: The AI Developer Tool Autonomy Spectrum, positioning tools from human-in-the-loop to human-out-of-the-loop.]
The autonomy spectrum has widened dramatically in 2026. Traditional autocomplete (Copilot) now sits at one extreme, with fully autonomous agents (Attractor) at the other.

Early user reports are revealing. OpenAI's Codex approach is reportedly safer but slower—engineers describe it as "pair programming with someone who actually listens." Anthropic's Opus 4.6 runs faster but is "harder to audit"—it produces working code in bulk, but understanding why it made specific choices requires forensic reconstruction. Both models identified zero-day vulnerabilities in standard benchmarks during launch week, which is either reassuring or terrifying depending on your threat model.
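Stripped to control flow, the two philosophies are just two loops. Here's a caricature in Python (neither vendor's actual implementation; plan, execute, approve, and is_done are stand-ins for the real machinery):

```python
def collaborator_loop(task, plan, execute, approve):
    """OpenAI-style Collaborator: surface each step and wait for a human call."""
    for step in plan(task):
        if approve(step):    # blocks on a human; slower, but auditable as it runs
            execute(step)

def operator_loop(task, plan, execute, is_done):
    """Anthropic-style Operator: take the wheel, come back when finished."""
    while not is_done(task):
        for step in plan(task):
            execute(step)    # no approval gate; faster, audited after the fact
    return task
```

Everything engineers report about the trade-off falls out of one line: whether there's an approval gate inside the loop or not.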

This isn't just a product design debate. It's a question about what engineering is in 2026. If you spend your day reviewing agent output rather than writing code, are you still a software engineer? Or are you a quality inspector on a code assembly line? The job title might not change, but the lived experience of the work already has.

03

Two AI Giants Launch on the Same Day. This Is Not a Coincidence.

On February 5th, both OpenAI and Anthropic dropped major releases within hours of each other. OpenAI launched GPT-5.3-Codex with a dedicated macOS desktop app featuring "Skills" (reusable agent capabilities) and "Automations" for scheduling recurring engineering tasks. Anthropic countered with Claude Opus 4.6, boasting a 1-million-token context window and a near-perfect 4.8/5 score in independent code generation benchmarks.

[Chart: Context window sizes, from GPT-4 at 8K tokens to Claude Opus 4.6 at 1 million.]
Context windows have expanded 125x in under three years. Opus 4.6's 1M window can hold an entire medium-sized codebase in memory—no RAG pipeline required.

The OpenAI desktop app is particularly notable. This isn't a browser tab you switch to—it's a persistent process on your machine that can read your filesystem, watch your terminal, and proactively suggest actions. The team described it as "instrumental in creating itself," which is the kind of recursive flex that either impresses or unnerves you. Anthropic's angle is different: raw reasoning depth. A 1M-token context window means an agent can hold an entire medium-sized repository in working memory simultaneously—no retrieval-augmented generation needed, no chunking artifacts, no lost context.
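To gut-check the "entire repository in one pass" claim, a back-of-envelope estimate works fine. The sketch below assumes the common rough heuristic of about four characters per token, which varies by language and tokenizer:

```python
from pathlib import Path

CHARS_PER_TOKEN = 4          # rough heuristic; real tokenizers vary by language
CONTEXT_WINDOW = 1_000_000   # Opus 4.6's advertised window

def estimate_repo_tokens(root, exts=(".py", ".ts", ".go", ".java", ".md")):
    """Walk a source tree and estimate whether it fits in one context window."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in exts
    )
    tokens = total_chars // CHARS_PER_TOKEN
    return tokens, tokens <= CONTEXT_WINDOW

tokens, fits = estimate_repo_tokens(".")
print(f"~{tokens:,} estimated tokens; fits in a 1M window: {fits}")
```

By that heuristic, 1M tokens is roughly 4 MB of source text, which really does cover a lot of production codebases.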

What matters for working engineers: the moat around "understanding large codebases" just evaporated. The competitive advantage used to be institutional knowledge—knowing where the bodies are buried in a legacy system. When an AI can ingest the entire system in one pass, that advantage narrows considerably.

04

Google Just Gave AI Agents a Library Card

Here's a problem every developer using AI tools has encountered: you ask the model to use a library, and it confidently generates code for an API that was deprecated two versions ago. Google launched the Developer Knowledge API this week—a programmatic interface that feeds AI agents live, up-to-date documentation in machine-optimized Markdown.

The real innovation is the included Model Context Protocol (MCP) server, which lets agents securely query private internal documentation too. This is infrastructure for the agent-first world: if coding agents are going to write production code, they need access to the same reference materials human developers use. Not cached training data from 2024—live docs, updated in real time.
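Google's announcement doesn't spell out every detail, so the sketch below is assumption-heavy: the launch command (developer-knowledge-mcp) and the tool name (lookup_docs) are hypothetical, but the JSON-RPC handshake is standard MCP over stdio:

```python
import json
import subprocess

def rpc(proc, msg):
    """Write one newline-delimited JSON-RPC message; read a reply if it has an id."""
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline()) if "id" in msg else None

# Assumption: the server launches as a stdio MCP process under this name.
proc = subprocess.Popen(["developer-knowledge-mcp"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# Standard MCP handshake: initialize, then the initialized notification.
rpc(proc, {"jsonrpc": "2.0", "id": 1, "method": "initialize",
           "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                      "clientInfo": {"name": "docs-client", "version": "0.1"}}})
rpc(proc, {"jsonrpc": "2.0", "method": "notifications/initialized"})

# "lookup_docs" is a hypothetical tool name; a real server publishes its
# actual tools via tools/list.
reply = rpc(proc, {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                   "params": {"name": "lookup_docs",
                              "arguments": {"query": "firebase auth sign-in"}}})
print(reply)
```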

This matters because it addresses the single biggest complaint about AI-generated code: it hallucinates library APIs. By giving agents a reliable ground truth, Google is building the plumbing that makes autonomous code generation actually viable at scale. It's not glamorous work. It's the kind of infrastructure play that, in retrospect, will look inevitable.

05

Apple Opens the Gate: Xcode 26.3 Lets You Choose Your AI Brain

In a move that would have been unthinkable two years ago, Xcode 26.3 RC ships with native integrations for both OpenAI's Codex and Anthropic's Claude Agent. Developers can now select their preferred "backend brain" for predictive code completion, refactoring, and automated test generation—right in the IDE settings.

This is Apple formally conceding that the AI coding war won't be won at the IDE level. The IDE becomes a platform; the intelligence becomes pluggable. For iOS and macOS developers, this means the tools they already use daily are now first-class citizens in the agent ecosystem. No more copy-pasting between Cursor and Xcode. No more context-switching tax.
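Apple hasn't published the plugin interface, but the "pluggable brain" pattern itself is familiar. Here's a minimal sketch of what model-agnostic means in practice; every name below is invented for illustration:

```python
from typing import Protocol

class CodingBackend(Protocol):
    """Everything the IDE needs from a 'brain'; the backend is swappable."""
    def complete(self, source: str, cursor: int) -> str: ...
    def generate_tests(self, source: str) -> str: ...

class CodexBackend:
    def complete(self, source, cursor): return "...codex completion..."
    def generate_tests(self, source): return "...codex tests..."

class ClaudeAgentBackend:
    def complete(self, source, cursor): return "...claude completion..."
    def generate_tests(self, source): return "...claude tests..."

BACKENDS = {"codex": CodexBackend, "claude": ClaudeAgentBackend}

def load_backend(settings: dict) -> CodingBackend:
    """The IDE reads one setting; everything downstream codes to the interface."""
    return BACKENDS[settings["ai_backend"]]()

assistant = load_backend({"ai_backend": "claude"})
```

The design choice that matters: the IDE codes against the interface, so adding a third backend is a dictionary entry, not a rewrite.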

The broader signal: the "walled garden" era of developer tools is ending. If Apple—the company that famously controls every pixel of its ecosystem—is making its IDE model-agnostic, the rest of the industry has no excuse. Expect JetBrains, VS Code, and every other major IDE to follow within months. The IDE wars of 2024 are becoming the agent-platform wars of 2026.

06

8,800 Jobs Gone in One Week. The Reallocation Has Begun.

The first week of February brought a familiar but intensifying pattern: Amazon cut 2,200 jobs (plus another 800 in the Bay Area specifically), Meta laid off 1,500 from Reality Labs, Ericsson dropped 1,600, ASML shed 1,700, and Autodesk cut 1,000. That adds up to 8,800 jobs in a single week.

[Chart: Early February 2026 tech layoffs by company, totaling 8,800 jobs.]
Early February 2026 saw 8,800 tech layoffs in a single week. Companies cited "shifting investments to AI and wearables" as the primary driver.

The stated reason across the board: "shifting investments to AI and wearables." But read between the lines and a clearer story emerges. These aren't random cuts—they're targeted at divisions that either don't involve AI or whose work AI is beginning to automate. Amazon's cuts hit operations and middle management. Meta's hit the hardware side of Reality Labs. Autodesk's hit traditional software engineering roles.

For software engineers specifically, this creates a paradoxical job market. Demand for AI/ML engineers is at an all-time high. Demand for "traditional" software engineers—the ones who build CRUD apps, maintain internal tools, or write integration code—is cratering. The profession isn't dying. It's bifurcating. And the gap between the two halves is widening fast. If you're a CS student graduating in May, the question isn't "will I get a job?" It's "which half of the profession am I training for?"

The Craft Remains. The Job Description Doesn't.

Here's what I keep coming back to: every technology shift that automated the mechanical parts of a craft ultimately made the creative parts more valuable. The printing press didn't kill writing—it killed scribes. Photography didn't kill visual art—it killed portrait miniaturists. What we're watching right now is the software engineering equivalent: the mechanical act of translating ideas into code is being automated. The judgment of what to build, why to build it, and whether it's correct—that's harder to automate than anyone in a demo video will tell you. The last software engineer won't be someone who types the fastest. They'll be the person who knows which questions to ask.