Agent Systems and the Collapse of Change-Based Incident Analysis

When There Is No Diff, No Author, and No Moment of Failure

This is the final post in a three-part series on generative systems and incident learning. The first examined PromptLock and how generative malware breaks artifact-based learning. The second explored how coding assistants blur the line between tool and participant. This post takes the argument to its conclusion.

Coding assistants blur the line between tool and participant, but there is still a human-authored change somewhere in the system: code is generated, reviewed, merged, and deployed.

Agent systems remove that anchor entirely.

They introduce a world where:

- behavior is generated at runtime, not authored in advance
- execution paths differ between runs of the same system
- there is no diff, no author, and no single moment of failure

And yet, most organizations are still trying to learn from these systems using a model built around diffs, commits, and root causes.

That model is collapsing.

Change-Based Incident Analysis Assumes a Stable World

Traditional incident analysis rests on a few quiet assumptions:

- a failure traces back to a change
- the change has an author
- causality can be followed backward to a defect
- the system will behave the same way on the next run

This is why postmortems ask:

- What changed?
- Who changed it?
- When did it go wrong?

Agent systems violate every one of these assumptions.

There may be no meaningful "change" to point to at all.

What an Agent System Actually Does (In Practice)

An agent is not a script with branches. It is a loop:

- observe the environment
- decide on a next step
- act
- observe the result and revise

The plan is not stored. The steps are not pre-authored. The execution path is context-dependent.

Two runs of the same agent:

- can start from different context
- can take different paths
- can reach different outcomes

Looking for "the change" after an incident is like looking for the commit that caused a human to change their mind.
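That loop can be made concrete in a few lines of Python. Everything below is a hypothetical sketch (a toy `Environment`, a one-rule `plan` function), not any real agent framework; the point is that the execution path exists only at runtime and diverges as soon as context differs.

```python
class Environment:
    """Toy world state that an agent observes and acts on."""
    def __init__(self, error_rate, rollback_available):
        self.error_rate = error_rate
        self.rollback_available = rollback_available

    def observe(self):
        return {"error_rate": self.error_rate,
                "rollback_available": self.rollback_available}

    def act(self, step):
        # Acting changes the world that the next observation sees.
        if step == "scale_up":
            self.error_rate *= 0.5
        elif step == "rollback":
            self.error_rate = 0.01


def plan(observation):
    """Hypothetical planner: the next step is derived from live
    context, not read from a stored, pre-authored plan."""
    if observation["error_rate"] > 0.05:
        return "rollback" if observation["rollback_available"] else "scale_up"
    return "proceed"


def run_agent(env, iterations=3):
    """The loop: observe, decide, act, observe again."""
    trace = []
    for _ in range(iterations):
        step = plan(env.observe())
        trace.append(step)
        env.act(step)
    return trace


# Same agent, same goal -- only the context differs:
print(run_agent(Environment(0.20, rollback_available=True)))
# ['rollback', 'proceed', 'proceed']
print(run_agent(Environment(0.20, rollback_available=False)))
# ['scale_up', 'scale_up', 'proceed']
```

There is no diff between the two runs: the same unchanged code produced both traces.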

The Incident With No Diff

Consider a deployment failure caused by an infrastructure automation agent.

The agent:

- assessed the environment
- selected a deployment strategy
- executed it
- did not roll back when signals degraded

Production degrades.

The post-incident questions arrive:

- What changed?
- Who approved it?
- Which commit do we revert?

But nothing changed.

The agent behaved differently because:

- the environment it observed was different
- the context it assembled was different
- the options available at that moment were different

There is no diff to inspect. There is no author to interview. There is no single moment where the system "went wrong."

The failure emerged.

Why Root Cause Analysis Collapses

Root cause analysis assumes causality can be traced backward to a defect, decision, or change.

Agent systems produce context-dependent behavior, not fixed logic.

The "cause" of the incident is not:

- a defective line of code
- a bad configuration change
- a single human decision

It is the interaction between:

- the state of the environment
- the context the agent observed
- the model's behavior under that context
- the tools available at that moment

Those elements recombine differently every time.

A postmortem that says "the agent failed to roll back" is true—and nearly useless.

Artifact-Oriented vs Capability-Oriented Capture (for Agents)

This is where most learning breaks down.

Artifact-oriented (typical):

- the timeline of the run
- the commands the agent executed
- the logs and traces it produced
- the exact sequence of actions

Accurate. Exhaustive. Fragile.

Capability-oriented (durable):

- what the agent tends to do under these conditions
- which signals trigger that tendency
- what guardrail would interrupt it next time

Both describe the same incident.

Only one can prevent the next one.
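The contrast is easiest to see side by side. Below are two toy records of the same hypothetical incident; every field name and value is illustrative, not a schema from any real tool.

```python
# Artifact-oriented: accurate and exhaustive, but tied to one run
# that will never recur in exactly this form.
artifact_record = {
    "timeline": [
        "14:02 agent scaled service",
        "14:05 error rate rose",
        "14:11 agent did not roll back",
        "14:19 page fired",
    ],
    "commands": ["scale --replicas 12", "restart api-gateway"],
    "commit": None,  # there was no change to point to
}

# Capability-oriented: describes what the system is inclined to do,
# independent of this particular run.
capability_record = {
    "tendency": "prefers forward fixes over rollback under partial telemetry",
    "trigger_conditions": ["conflicting health signals", "rollback path untested"],
    "guardrail": "require explicit rollback evaluation before scale actions",
}
```

The first record answers "what happened"; only the second says anything about the next run.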

The Real Unit of Learning: Decision Strategies

If artifacts and changes are unstable, what persists?

What persists are:

- the agent's decision strategies
- its tendencies under uncertainty
- the conditions that trigger them

Examples of stable agent tendencies:

- preferring forward fixes over rollback
- widening scope when signals conflict
- treating missing data as permission to proceed

These do not live in a single file. They do not appear in diffs. They surface only under pressure.

That is where learning must attach.

This is the class of problem COEhub is designed to address—encoding decision tendencies rather than execution traces, and surfacing them before the next agent-driven deployment.
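One way to encode a tendency so it can be surfaced before a run is to store it alongside the conditions that trigger it, then match those conditions against the current context. The sketch below uses assumed names throughout; it illustrates the idea, not COEhub's actual interface.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Tendency:
    """A captured decision tendency: a behavior plus the
    conditions under which it tends to surface."""
    behavior: str
    conditions: frozenset


# Hypothetical tendencies captured from past agent-driven incidents.
KNOWN_TENDENCIES = [
    Tendency("skips rollback when telemetry is partial",
             frozenset({"partial_telemetry", "remediation"})),
    Tendency("widens scope when confidence is low",
             frozenset({"low_confidence"})),
]


def surface_tendencies(context_flags):
    """Before an agent-driven action, surface every known tendency
    whose trigger conditions are present in the current context."""
    flags = set(context_flags)
    return [t.behavior for t in KNOWN_TENDENCIES if t.conditions <= flags]


print(surface_tendencies({"partial_telemetry", "remediation", "peak_traffic"}))
# ['skips rollback when telemetry is partial']
```

The lookup attaches learning to conditions rather than to any one run's timeline, which is exactly what an execution trace cannot do.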

Why Traditional Postmortems Become Theater

After an agent-driven incident, teams still write postmortems.

They include:

- a timeline
- an impact summary
- a list of remediation items

But the document answers the wrong question.

It explains what happened, not what the system is inclined to do.

Without capturing agent decision tendencies:

- the same behavior recurs in new forms
- every incident looks novel
- prevention never compounds

Engineers sense this intuitively. Engagement drops. Postmortems become ritual instead of memory.

The Organizational Failure Is Predictable

Agent systems also fracture ownership:

- the platform team owns the tools
- the ML team owns the model
- the application team owns the prompts
- the on-call team owns the outcome

No single group owns the behavior. No single postmortem captures the pattern.

Each incident is handled. None are remembered.

The Asymmetry Becomes Total

With agents, the asymmetry between systems and learning is complete.

Agents:

- act continuously
- adapt to context on every run
- generate behavior no one authored

Organizations:

- learn in discrete postmortems
- anchor learning to artifacts and changes
- remember runs, not tendencies

As long as learning remains artifact-bound, agents will outrun it.

Closing

Coding assistants blurred the line between tool and participant.

Agent systems erase it.

When behavior is generated dynamically, there may be no change to analyze, no author to question, and no diff to inspect. Incident analysis that depends on those constructs will continue to produce accurate explanations and ineffective prevention.

The organizations that succeed with agent systems will not be the ones that write better postmortems.

They will be the ones that stop treating incidents as changes gone wrong and start treating them as capabilities revealed.

Because when systems decide, learning must remember how they decide.

And change-based analysis is no longer enough.