Agentic Mechanics 01: Why Agents Hate Reconsideration
There is a quiet misunderstanding at the heart of most conversations about AI agents.
We assume agents think like people.
That intelligence means re-thinking.
That better systems evaluate more options, more often.
They don’t.
Agents are not designed to think again.
They are designed to stop thinking as soon as it is safe to.
This isn’t a preference.
It’s a constraint.
And once you understand that, defaults stop looking accidental — and start looking inevitable.
The Human Error
To humans, reconsideration feels responsible.
If something matters, we revisit it.
If stakes are high, we debate.
If conditions change, we reassess.
That instinct evolved for social reasoning, not system stability.
To an agent, reconsideration is not diligence.
It’s instability.
Reopening a decision doesn’t just affect that decision.
It reintroduces uncertainty into every downstream plan that depended on it.
From an agent’s perspective, that is dangerous.
The True Cost of Reconsideration
When an agent reconsiders a resolved choice, it must:
Reopen its world model
Re-evaluate dependencies
Re-check constraints
Recalculate trade-offs
Re-propagate uncertainty through the plan
That is not “thinking harder.”
That is destabilising a working system.
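That cascade can be sketched in a few lines. This is a hypothetical illustration, not any real agent framework: the plan is modelled as a dependency graph, and reopening one resolved decision pulls every step built on it back into play. All names (`Plan`, `resolve`, `reopen`) are invented for the example.

```python
# Hypothetical sketch: why reopening one decision is expensive.
# A plan is a dependency graph; reconsidering a node invalidates
# everything downstream that treated it as settled.

from collections import defaultdict

class Plan:
    def __init__(self):
        self.depends_on = defaultdict(set)   # step -> decisions it rests on
        self.resolved = set()                # decisions treated as settled

    def resolve(self, decision, *downstream_steps):
        """Mark a decision settled and record who builds on it."""
        self.resolved.add(decision)
        for step in downstream_steps:
            self.depends_on[step].add(decision)

    def reopen(self, decision):
        """Reconsidering one decision invalidates every step built on it."""
        self.resolved.discard(decision)
        return [step for step, deps in self.depends_on.items()
                if decision in deps]

plan = Plan()
plan.resolve("vendor_choice", "integration", "billing", "reporting")
plan.resolve("data_format", "reporting")

# Reopening the vendor choice forces three downstream steps back into play:
print(plan.reopen("vendor_choice"))  # ['integration', 'billing', 'reporting']
```

One reopened node, three invalidated steps: the cost scales with everything that depended on the decision, not with the decision itself.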
Agents exist under constant pressure to:
reduce variance
minimise compute
preserve coherence
maintain forward progress
Reconsideration works against all four.
So agents avoid it by design.
Why Defaults Form So Quickly
Defaults are not shortcuts.
They are stability anchors.
The moment an agent finds a resolution that:
works end-to-end
introduces minimal contradiction
produces consistent outcomes
holds across contexts
that resolution becomes a structural assumption.
Once something is assumed, it stops being evaluated.
Once it stops being evaluated, it becomes invisible.
Once it becomes invisible, it becomes infrastructure.
This is how defaults form — not through preference, but through successful reuse.
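The hardening step can be made concrete with a small sketch. This is an assumed mechanism, not a documented one: a resolution that survives a few clean reuses gets promoted to a default, after which the expensive evaluation path is never taken again. `Resolver`, `decide`, and `PROMOTION_THRESHOLD` are all illustrative names.

```python
# Hypothetical sketch: a resolution hardens into a default through reuse.
from collections import Counter

PROMOTION_THRESHOLD = 3  # assumed: clean reuses before a choice hardens

class Resolver:
    def __init__(self):
        self.successes = Counter()
        self.defaults = {}   # task -> hardened choice, no longer evaluated

    def decide(self, task, evaluate):
        if task in self.defaults:
            return self.defaults[task]       # assumed, not evaluated
        choice = evaluate(task)              # expensive comparison lives here
        self.successes[(task, choice)] += 1
        if self.successes[(task, choice)] >= PROMOTION_THRESHOLD:
            self.defaults[task] = choice     # becomes a structural assumption
        return choice

r = Resolver()
calls = 0
def evaluate(task):
    global calls
    calls += 1
    return "tool_a"

for _ in range(5):
    r.decide("summarise", evaluate)

print(calls)  # 3 -- after the third clean use, evaluate() is never called again
```

Note what the threshold is measuring: not quality relative to alternatives, but consistency of reuse. Nothing in the promotion path ever compares `tool_a` to anything else.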
Reuse Is the Real Objective
Humans optimise for improvement.
Agents optimise for reuse.
Every reused decision:
costs less to execute
carries less risk
reduces cognitive load
stabilises future plans
Novelty is expensive.
Comparison is expensive.
Re-evaluation is expensive.
Agents don’t reward novelty.
They reward things they can safely stop thinking about.
Why Optimisation Stops Working
Most optimisation strategies assume something critical:
That the system is still deciding.
But once a default exists, that assumption is false.
At that point:
better features don’t matter
better messaging doesn’t matter
better formatting doesn’t matter
Optimisation signals arrive after collapse.
They compete in a space that no longer exists.
Displacement only happens when:
the default fails catastrophically, or
the environment changes so materially that reuse becomes unsafe
Absent that, reconsideration does not occur.
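The two displacement triggers can be written as a single predicate. A minimal sketch, assuming a scalar measure of environment drift and an arbitrary threshold; both are invented for illustration.

```python
# Hypothetical sketch: the only two conditions that reopen a default.
def should_reconsider(default_failed_catastrophically: bool,
                      environment_drift: float,
                      drift_threshold: float = 0.8) -> bool:
    """Reuse stays safe unless the default fails hard, or the world
    has shifted enough that reuse itself becomes unsafe."""
    return default_failed_catastrophically or environment_drift > drift_threshold

# Incremental improvement elsewhere never trips either condition:
print(should_reconsider(False, 0.2))   # False: better alternatives go unseen
print(should_reconsider(False, 0.95))  # True: material environment change
print(should_reconsider(True, 0.0))    # True: catastrophic failure
```

The first case is the whole argument in one line: a competitor being somewhat better changes neither input, so the predicate never fires.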
The Procurement Consequence
This is where the implications sharpen.
Procurement does not happen at the moment of purchase.
It happens at the moment of first safe resolution.
After that:
RFPs are formalities
vendor comparisons are ceremonial
governance reviews are retrospective
The real decision already happened — silently, upstream, and without a meeting.
Once an agent has resolved:
“This works, and I can reuse it,”
procurement collapses to that choice.
Why This Breaks Governance
Most organisations attempt to govern decisions they never saw being made.
They audit outcomes.
They review processes.
They debate accountability.
But they lack visibility into:
which defaults formed
when they formed
why they hardened
and what would trigger reconsideration
You cannot govern what you cannot observe.
And you cannot observe defaults if you still believe decisions are being re-made.
The Quiet Test
The question an agent is implicitly asking is not:
“Is this the best option?”
It is:
“Can I stop thinking about this?”
When the answer is yes:
reconsideration disappears
defaults harden
markets compress
This is not chaos.
It is inevitability.
And once you understand why agents hate reconsideration, the rest of agentic behaviour stops being surprising — and starts being predictable.