Everyone writes action items. Almost no one follows through.
Most engineering organizations are good at running postmortems. They are much worse at making sure the learning survives beyond the meeting.
Action items get captured. Owners get assigned. Tickets get filed. And then, quietly and predictably, many of them never get done.
The failure is not a lack of process. It is a lack of memory.
One of the simplest signals of whether your postmortem practice is working is the action-item follow-through rate: the percentage of remediation items that are actually completed.
Based on what we have observed across multiple organizations with recurring incidents, a follow-through rate below roughly 50 percent is a red flag. Not because 50 percent is a magical threshold, but because once more than half of your corrective actions fail to land, the system is no longer learning in any meaningful way.
Below that point, postmortems stop functioning as a learning mechanism and start functioning as documentation theater. The ritual remains. The outcomes do not.
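The metric itself is trivial to compute; the hard part is computing it at all. As a sketch, here is what the calculation looks like over a minimal, hypothetical action-item record (the `ActionItem` fields are illustrative, not any tracker's real schema):

```python
from dataclasses import dataclass

# Hypothetical minimal schema -- field names are illustrative,
# not taken from any real ticketing system's API.
@dataclass
class ActionItem:
    incident_id: str
    description: str
    completed: bool

def follow_through_rate(items: list[ActionItem]) -> float:
    """Fraction of remediation items that were actually completed."""
    if not items:
        return 1.0  # vacuously healthy: nothing is owed
    return sum(1 for item in items if item.completed) / len(items)

# Illustrative data: only one of four remediations landed.
items = [
    ActionItem("INC-101", "Tighten alert thresholds", True),
    ActionItem("INC-101", "Add runbook for failover", False),
    ActionItem("INC-102", "Improve alert ownership", False),
    ActionItem("INC-103", "Add load-shedding to the API tier", False),
]

rate = follow_through_rate(items)
if rate < 0.5:
    print(f"Red flag: follow-through at {rate:.0%}")  # prints "Red flag: follow-through at 25%"
```

The computation is a one-liner on purpose: if your tracking data cannot support even this, the problem is upstream of the metric.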
It is tempting to attribute poor follow-through to execution problems: weak ownership, sloppy tracking, lack of discipline. In practice, those explanations are incomplete.
The deeper issue is that follow-through fails for system-level reasons, even in well-run organizations.
Teams make locally sensible decisions. They ship features, meet sprint goals, reduce near-term risk, and unblock customers. From the inside, these choices are rational and often rewarded.
From the outside, they accumulate debt.
Remediation work competes with visible delivery, and delivery almost always wins. Not because engineers do not care about reliability, but because the system rewards shipping and quietly tolerates unfinished fixes.
Most missed action items are not explicitly deprioritized. They simply fade.
A fix that felt urgent during an outage review feels optional three weeks later. Context decays. Ownership shifts. The next incident arrives before the previous learning is embedded.
This is classic drift: a series of reasonable trade-offs that slowly erode safety margins without triggering alarms.
Many teams already "track" action items in ticketing systems. The problem is not that the data does not exist. The problem is that nothing accumulates.
Tickets close or stagnate in isolation. No one asks whether the same remediation has appeared before. No one notices when the same class of fix is proposed repeatedly and never completed.
Without aggregation across incidents, the system cannot see itself.
We have seen variations of this pattern repeatedly:
An alerting gap is identified during a production incident. The postmortem recommends improving signal quality and ownership. A ticket is filed.
Six months later, a similar incident occurs. The same alert fires late again. The same remediation is proposed. Another ticket is filed.
Eighteen months later, during a third incident, someone finally notices that this fix has been suggested before. Twice. Never completed.
No single team acted irresponsibly. No one ignored the problem on purpose. But the organization failed to remember.
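The detection step in that story is mechanical once action items are aggregated across incidents. A minimal sketch, assuming remediations can be matched by a normalized description (a real system would need classification or fuzzy matching rather than exact strings; the incident IDs and records here are invented):

```python
from collections import defaultdict

# Hypothetical records: (incident_id, remediation, completed).
proposals = [
    ("INC-2023-04", "improve alert signal quality", False),
    ("INC-2023-11", "improve alert signal quality", False),
    ("INC-2025-01", "improve alert signal quality", False),
    ("INC-2024-07", "add capacity headroom", True),
]

# Group identical remediations across incidents so repeats become visible.
history = defaultdict(list)
for incident_id, remediation, completed in proposals:
    history[remediation].append((incident_id, completed))

# Surface any fix that has been proposed more than once and never shipped.
for remediation, occurrences in history.items():
    ever_completed = any(done for _, done in occurrences)
    if len(occurrences) > 1 and not ever_completed:
        print(f"Proposed {len(occurrences)}x, never completed: {remediation!r}")
```

Nothing here is sophisticated. The point is that no one runs this query, because no tool holds all three incidents in one place.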
High follow-through does not mean perfection. It means that learning compounds.
When action items consistently land, each incident leaves the system measurably safer: fixes stick, and the same failure class stops recurring. When follow-through is low, the same incidents return under new names and the same remediations are proposed again.

The difference is not effort. It is feedback loops.
Most teams already know how to log action items, assign owners, and track status. That advice is table stakes.
What they lack is a system that aggregates action items across incidents, surfaces remediations that keep being proposed and never completed, and makes stalled follow-through visible before the next outage.
The failure mode is not forgetfulness. It is structural.
Most tools track incidents. Very few track whether the learning actually happened.
COEhub is designed to make follow-through observable at the system level. Not by adding more process, but by treating action items as signals that accumulate over time.
When remediation patterns repeat, they surface. When follow-through stalls, it becomes visible. When learning sticks, it compounds.
The goal is not to write better postmortems.
The goal is to build a system that remembers, and changes behavior because of it.