
Elisha Sterngold · 9 min read

[Image: The old way vs. the new way of shipping features with AI agents]

AI Agents Don't Just Change How You Code — They Demand a New Kind of Company

The Quiet Disappointment of "AI Adoption"

Almost every company I speak with has "adopted AI." Engineers are using Claude Code, Cursor, Copilot, Codex. Designers are in Figma with AI plugins. Marketing teams have entire workflows built around generative tools. On paper, the revolution has arrived.

And yet, something is off. Recently I asked a senior executive at a large software company whether their development was now going several times faster. I expected at least a careful "two or three times." The answer was no. Not two times. Not even close. He then added the sentence that has stayed with me since: "Development is only about 20% of the time it takes to ship a feature. Even if you make that part infinitely fast, the whole system isn't much faster."
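The executive's arithmetic is worth spelling out, because it is Amdahl's law applied to a delivery pipeline. If coding is a fraction p of total cycle time and you accelerate it by a factor s, the overall speedup is

  speedup = 1 / ((1 - p) + p / s)

With p = 0.2, even s → ∞ yields a ceiling of 1 / 0.8 = 1.25. Infinitely fast coding buys you, at best, a 25% faster pipeline.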

That is the story of AI adoption right now. Companies plugged AI agents into individual seats, but they did not touch the scaffolding around them — the stages, the approvals, the handoffs, the committees. The work still moves through the same pipes. The pipes themselves are the bottleneck.

Faster Tools in a Slower System

Imagine a factory where every worker suddenly gets a tool that is ten times more productive. If the conveyor belts still run at the old speed, if every part still needs six signatures to move forward, if QA still happens at the end in a single batch, the factory does not produce ten times more. It produces roughly the same, with more people waiting between stations.

That is modern software development in 2026. The coding step has genuinely collapsed. A feature that once took a week can often be drafted in an afternoon. But the surrounding choreography — the kickoff meetings, the design reviews, the stakeholder approvals, the pixel‑perfect handoff, the QA cycle, the release train — is untouched. So the feature still takes a month. The AI did its job; the company did not.

The industry keeps measuring the wrong thing. We compare lines of code per hour, or pull requests per engineer. The real question is how long it takes for an idea, a customer insight, or a competitive threat to become a shipped feature in the hands of users. That number, for most companies, has barely moved.

Rethinking the Design‑Then‑Development Handoff

Consider one of the most sacred sequences in product work: design first, then development. Designers create a spec. Developers implement it pixel‑perfect. Back‑and‑forth for edge cases. Another round of review. Eventually it ships.

This sequence made sense when implementation was expensive. You did not want engineers guessing. You wanted them cutting once, after measuring twice. Design was the cheap stage; development was the costly one, so you resolved ambiguity before writing code.

AI agents invert these economics. Implementation is no longer the expensive stage. Generation is nearly free. And yet we continue to hand agents a frozen design and ask them to faithfully translate it. We are still optimizing for a scarcity that no longer exists.

A better approach: describe the intent, the constraints, and the brand guardrails, then ask the agent to generate three full, working, interactive pages. Not mockups. Not prototypes. Real pages, wired to the backend, navigable in a browser. Then, as a team, you look at the three options and pick the one that actually feels right. You iterate from inside the working product, not from a static file.
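Concretely, the loop can be as mundane as a script. A sketch, with `run_agent` as a stand-in for whatever coding agent you use; the brief, branch names, and paths below are invented for illustration:

```python
# Sketch: generate three real, working variants of one page from a single
# intent brief. `run_agent` is a placeholder for your coding agent of choice;
# the brief, branch names, and file paths are invented for illustration.

import subprocess

BRIEF = """\
Intent: a billing-history page where users can find and download any invoice.
Constraints: use the existing /api/invoices endpoint and our auth middleware.
Guardrails: follow the design tokens in src/theme/tokens.ts; keep WCAG AA contrast.
"""

def run_agent(prompt: str) -> None:
    """Placeholder: invoke your agent (CLI or API) against the working tree."""
    raise NotImplementedError("wire this to the agent you actually use")

for variant in ("a", "b", "c"):
    # Each variant lives on its own branch, so all three are full, navigable
    # implementations you can open in a browser, not static mockups.
    subprocess.run(["git", "switch", "-c", f"explore/billing-{variant}", "main"], check=True)
    run_agent(BRIEF + f"\nProduce variant {variant}: a complete, wired-up page. "
              "Take a different layout and interaction approach from the other variants.")
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", f"explore: billing variant {variant}"], check=True)
# Walk the three branches as a team, pick one, delete the rest.
```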

Notice what just happened. You skipped a whole stage. The separate "design approval, then build" loop collapsed into a single "explore, pick, refine" loop. The code is already written — not because you rushed it, but because producing it was cheap enough that generating three variants cost less than arguing over one. Interaction is no longer something you negotiate through screenshots; it is something you experience directly.

Letting the Agent Surprise You

There is a second, subtler shift. The traditional handoff assumes the human knows the right answer and the implementer is there to reproduce it. But AI agents do not merely reproduce — they propose. Sometimes what they produce is worse than what you had in mind. Sometimes it is roughly equivalent. And sometimes, honestly, it is better.

The discipline we need is the ability to look at what an agent built and ask two separate questions: Is this actually worse than what I wanted? Or is it just not what I imagined? Those are not the same thing. The first is a real problem. The second is ego dressed up as taste.

Companies that treat every deviation from the brief as a defect will grind to a halt correcting things that were never actually broken. The ones that treat deviations as candidate improvements — evaluated on their own merit, not on their fidelity to the original mental image — will move dramatically faster and, more often than not, arrive somewhere better than they originally planned.

Flatten the Company or Lose the Race

None of this works if the process wrapping the product is still hierarchical. If every change still needs to climb a ladder of approvals — product manager, design lead, engineering lead, VP, exec sponsor — the agent's speed is wasted waiting in queues. The agent can refactor your entire system over lunch. Your Slack thread to get sign‑off will still take three days.

Dynamic, flat companies are about to run circles around large, permission‑heavy ones. Not because the engineers are smarter, but because the distance between a good idea and a shipped feature is short. When you combine genuine AI superpowers with a small, empowered team that does not need committee approval to try something, you get cycle times that used to belong to demo‑day prototypes — but with production quality.

This is uncomfortable for established organizations, because the approval chains were not accidents. They were scar tissue from real incidents. Someone shipped something dangerous, so we added a review. Someone broke production, so we added an approval. Every gate has a story behind it. Removing them feels reckless.

But those gates were designed for a world where mistakes were hard to detect and expensive to reverse. That is not our world anymore.

The New Checks and Balances

Flattening a company does not mean removing discipline. It means moving the discipline to where it actually protects you.

The new center of gravity is simple: nothing should be breaking. Not "nothing should be unfamiliar." Not "nothing should deviate from last year's patterns." Nothing should be breaking. That is the invariant worth defending.

How do you defend it when code is flowing in from AI agents ten times faster than before? Not with more human reviewers in the loop — that is exactly the pipe that is already clogged. You defend it with systems that run at the same speed as the agents:

  • Unit tests that actually assert behavior, not placeholder shells. When an agent changes a function, the tests catch regressions before the PR is even reviewed (see the sketch after this list).
  • Integration tests that exercise real flows — real database, real services, real boundaries. Mocked integrations pass while production burns. Real ones tell the truth.
  • CI/CD that runs automatically and refuses to merge anything that breaks the invariants.
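To make the first bullet concrete, here is the difference between a placeholder shell and a behavior-asserting test in pytest. The `apply_discount` function is a hypothetical example, not from any real codebase:

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: percentage discount, capped at 50%."""
    if percent < 0:
        raise ValueError("percent must be non-negative")
    return round(price * (1 - min(percent, 50.0) / 100), 2)

# The placeholder shell: it passes no matter what the function does.
def test_apply_discount_exists():
    assert apply_discount is not None

# Behavior-asserting tests: an agent that inverts the percentage, drops the
# cap, or changes the rounding rule fails these before anyone reviews the PR.
def test_discount_reduces_price():
    assert apply_discount(100.0, 20.0) == 80.0

def test_discount_is_capped_at_half():
    assert apply_discount(100.0, 90.0) == 50.0

def test_zero_discount_is_identity():
    assert apply_discount(59.99, 0.0) == 59.99

def test_negative_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, -5.0)
```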

But tests and CI/CD only tell you whether the code behaves the way its author expected. They do not tell you what is actually happening in production — under real load, with real users, with real data no test fixture would think to invent. That is where logs come in, and it is where teams underinvest most badly.

Logs are the ground truth of a running system. They record what the code actually did, not what it was supposed to do. In the old world, logs were mostly for humans — a developer grepping through files after something had already gone wrong. In the AI‑agent world, they become something far more powerful: the substrate agents reason over. Feed production logs into an agent and it can tell you which errors share a root cause, which users are hitting which failure paths, which feature you shipped yesterday quietly broke something you did not notice. The loop closes: the system reports on itself, the agent reasons over that report, and problems surface before a human has to spot them. This is exactly the gap we are building Shipbook to close — turning logs from an afterthought into the ground‑truth layer that both humans and AI agents depend on.
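A sketch of the first step of that loop, assuming ordinary JSON-lines application logs (the field names and file path are illustrative): normalize each error message into a signature, then count occurrences per release, so thousands of raw lines collapse into a shortlist an agent or a human can reason over.

```python
# Sketch: collapse raw production error logs into clusters of shared root
# cause. Assumes JSON-lines logs; field names and file path are illustrative.

import json
import re
from collections import Counter, defaultdict

def signature(message: str) -> str:
    """Normalize away the parts of a message that vary per occurrence."""
    msg = re.sub(r"\b[0-9a-f]{8,}\b", "<id>", message)  # hashes, request ids
    msg = re.sub(r"\d+", "<n>", msg)                    # user ids, ports, counts
    return msg[:120]

clusters: defaultdict[str, Counter] = defaultdict(Counter)

with open("production.log") as logfile:
    for line in logfile:
        event = json.loads(line)
        if event.get("level") != "ERROR":
            continue
        # Count each error shape per release: a shape that exists only in
        # yesterday's release is exactly the quiet breakage worth surfacing.
        clusters[signature(event.get("message", ""))][event.get("release", "?")] += 1

# Print the biggest clusters first: a summary small enough to reason over.
for sig, by_release in sorted(clusters.items(), key=lambda kv: -sum(kv[1].values())):
    print(f"{sum(by_release.values()):6d}x  {sig}  releases={dict(by_release)}")
```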

When all of this is in place — tests, CI/CD, and a genuine ground‑truth layer from production — something remarkable happens. The speed at which you can try new features, validate them under real usage, and ship them becomes enormous. Not because you removed the checks, but because the checks now run at machine speed instead of meeting speed.

Staying Ahead of the Curve

The companies that will lead the next decade are not the ones with the most AI licenses. They are the ones that used AI adoption as a reason to rebuild how they work.

That means fewer stages and more loops. Fewer approvals and more guardrails. Less "pixel perfect from spec" and more "pick the best of three working options." Less deference to what the brief said, more willingness to recognize that the agent's suggestion might actually be better. A flatter structure. A deeper investment in tests, observability, and logs as the real safety net.

AI agents are an enormous gift. But a gift you do not unwrap is just clutter. If your company still looks and moves the way it did in 2022, you have not really adopted AI — you have just bolted it onto a machine that was not built for it. The teams that redesign the machine itself are the ones who will still be here, shaping the market, when the rest are wondering why their impressive tools never quite delivered.

Elisha Sterngold · 8 min read


When Experience Becomes a Liability: Programming in the Age of AI Agents

The Comfortable Lie About AI and Seniority

At the beginning of the AI‑agent era, the industry converged on a reassuring belief: AI would transform software development, but not its hierarchy. Juniors would be displaced first. Seniors would become more valuable than ever. Someone would need to supervise the AI agents, to judge their output, to understand when a confident answer was actually dangerous. Only engineers with real scars—production outages, failed rewrites, architectural dead ends—could possibly do that job.

This belief made sense at the time. Early agents were impressive but shallow. They wrote code fluently but reasoned poorly at the system level. Supervision meant knowing where they hallucinated, where abstractions leaked, where reality diverged from elegance. Architecture was still heavy, expensive to change, and deeply shaped by historical accidents. Experience mattered because memory mattered.

But that belief quietly depends on an assumption that is no longer holding: that AI agents will remain weaker than humans at global reasoning, and that supervision will always mean catching the model when it is wrong. Once agents become strong enough to ingest entire codebases, replay years of architectural evolution, simulate migrations, and explore alternative designs, supervision itself changes meaning. With foundation models at the level of Opus‑4.5, GPT-5.2-Codex, and Gemini 3 Pro as raw reasoning engines, and with agent layers built on top of them such as Claude Code, GitHub Copilot, Cursor, Antigravity, and high‑autonomy agent frameworks, we are no longer dealing with assistants. We are dealing with machines that can explore architectural space directly. And the direction is only accelerating: models keep improving, and agent layers grow more autonomous, more stateful, and more deeply embedded in real development workflows.

The Collapse of Architectural Permanence

Architecture, in this world, stops being sacred. For decades it was heavy because changing it was dangerous. Decisions hardened not because they were optimal, but because revisiting them was too costly. Seniority emerged as authority because it carried memory: memory of why something was split, why it failed, why it was glued back together, and why certain areas were never to be touched again. Architecture was history embodied in code.

AI agents dissolve this monopoly on memory. When the past becomes machine‑readable—every commit, every incident, every failed experiment—history stops living only in human heads. It becomes something you can query, simulate, and challenge. Architecture becomes a version, not a monument. It can be branched, stress‑tested, rewritten, and rolled back. Once the cost of change collapses, experience as stored trauma loses its central power.

Experience as Emotional Debt

This is where the uncomfortable reversal begins. Senior engineers carry not only knowledge, but standards—deeply internalized ideas about how code should look, how systems should be structured, and what "good engineering" means. These standards were earned through hard experience: migrations that nearly killed the company, rewrites that went nowhere, abstractions that promised clarity and delivered outages. But the need to ensure that new code conforms to these standards slows everything down. Every change must be carefully shaped, reviewed, and aligned with an existing mental model of correctness.

That caution is wisdom, but it is also friction. It turns supervision into enforcement and progress into negotiation. Seniors are not only guarding the system from breaking; they are guarding it from deviating. And in a world where AI agents can rewrite, test, and validate entire systems quickly, this insistence on conformity becomes a form of inertia. It encodes a sense of what is too dangerous—or simply too unfamiliar—to try, even when the tools that once made deviation risky no longer exist.

The Junior Advantage

Juniors carry none of this. They have no architectural nostalgia, no sunk cost, no identity tied to the current shape of the system. They look at a codebase not as a legacy to be preserved, but as something provisional. In the pre‑agent world, this made them naïve. In the agent world, it can make them powerful.

Because now the agent carries the memory. Claude Code can reconstruct why a refactor failed five years ago. Codex can explore alternative implementations without touching production. Cursor can let you navigate, rewrite, and validate entire systems in hours instead of months. The junior no longer needs to remember the past—the system can replay it. What the junior brings instead is a willingness to ask whether the past should continue to define the future.

Vibe‑Coding in a Serious World

This is also where vibe‑coding stops being a joke. In the early days, vibe‑coding—iterating quickly with AI, trusting intuition, caring more about flow than formal design—looked irresponsible. But once you combine it with strong AI agents, deep test generation, and fast rollback, vibe‑coding becomes a way to explore reality rather than speculate about it. It is not anti‑discipline; it is discipline shifted from up‑front certainty to rapid validation.

Judgment Over Experience

The old claim was that only seniors could supervise AI agents. But as agents become capable of self‑critique, simulation, and multi‑path reasoning, supervision stops being about catching mistakes and starts being about choosing directions. The problem is no longer that the AI cannot reason deeply enough. It is that it can reason in too many directions at once. The scarce resource becomes judgment, not experience.

Judgment is not the same as experience. Experience is backward‑looking; it encodes what failed. Judgment is forward‑looking; it decides what is worth risking now. Experience says, “This was a disaster once.” Judgment asks, “Are the conditions still the same?” And judgment does not scale linearly with years of writing code. It scales with clarity of thought.

High‑Leverage Agent Tools and the End of the Apprenticeship

This is where high‑leverage agent tools matter. Anything that removes the weight of change—instant refactors, cheap rewrites, aggressive simulation, reversible decisions—changes who gets to participate in architectural decisions. When change becomes cheap, the right to propose change expands. The apprenticeship model, where you slowly earn permission to question the system, begins to crack.

The unsettling possibility is that in a mature AI‑agent world, some of the most effective system designers will be people with very little attachment to how things have always been done. Not because they know more, but because they are willing to dismantle and rebuild more. Not because they are reckless, but because the environment has shifted from one where mistakes were catastrophic to one where they are increasingly simulated, contained, and reversible.

The New Role of Seniors

This does not make seniors obsolete. It changes their role. Their value moves away from being living archives of architectural pain and toward defining invariants: what must never break, what constraints are non‑negotiable, what risks are existential regardless of tooling. But the monopoly on exploration dissolves.

What this looks like in practice: Seniors become guardians of boundaries rather than gatekeepers of change. They define the security model that cannot be compromised. They identify the data consistency guarantees that must hold across any refactor. They articulate the performance thresholds below which the product fails its users. They specify the regulatory and compliance rails that no amount of clever architecture can bypass. Crucially, they enforce the ground truth mechanisms that make rapid iteration safe: comprehensive unit tests, meaningful logging, observability pipelines, and monitoring that catches what code review cannot. These are the things AI agents cannot infer from code alone—they require understanding of the business, the users, and the consequences of failure that extend beyond the codebase.
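Mechanically, "defining invariants" can mean encoding the non-negotiables as checks the pipeline runs on every change. A minimal sketch in pytest; the thresholds, field names, and stand-in data below are invented for illustration:

```python
# Sketch: invariants written as tests that CI runs on every change. An agent
# is free to restructure anything, as long as these keep passing. The stand-in
# data below would come from real fixtures (log schema, load tests, a test DB).

LOG_SCHEMA_FIELDS = {"timestamp", "level", "message", "user_id", "release"}
P95_LATENCY_MS = {"/api/checkout": 310, "/api/invoices": 120}  # from load tests
LEDGER = {"debits": 104_250, "credits": 104_250}               # from a test DB

FORBIDDEN_IN_LOGS = {"password", "ssn", "card_number"}  # the compliance rail

def test_pii_never_reaches_logs():
    """Regulatory invariant: sensitive fields must never be loggable."""
    assert FORBIDDEN_IN_LOGS.isdisjoint(LOG_SCHEMA_FIELDS)

def test_checkout_p95_latency_budget():
    """Performance invariant: beyond this threshold the product fails its users."""
    assert P95_LATENCY_MS["/api/checkout"] <= 400

def test_ledger_stays_balanced():
    """Data-consistency invariant: must hold across any refactor."""
    assert LEDGER["debits"] == LEDGER["credits"]
```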

How seniors must adapt: The shift requires letting go of ownership over how things are built and holding tightly to what must remain true. This means resisting the instinct to enforce stylistic preferences, to mandate familiar patterns, or to reject approaches simply because they feel foreign. It means learning to trust validation over intuition—if the tests pass, the system holds, and the invariants are preserved, the unfamiliar path may be the better one. It means becoming comfortable with code that looks nothing like what you would have written, because you did not write it. The senior who thrives in this world is not the one who insists on reviewing every line, but the one who defines the constraints so clearly that review becomes verification rather than negotiation.

The belief that only seniors can supervise AI agents belongs to a world where agents were weak and architecture was rigid. As agents grow stronger and architecture becomes fluid, that belief starts to look like a historical artifact. The hierarchy built on the cost of change erodes as that cost approaches zero.

Who Shapes the Future

The programmer who will shape the next decade of software may not be the one who remembers the most failures, but the one most willing to ask whether those failures should still define what is possible. Armed with Cursor, Claude Code, Codex, and a willingness to iterate fast, they treat architecture as something to be questioned rather than preserved. They are not reckless—they simply operate in a world where the cost of exploration has collapsed.

And that person may not be senior at all.