Recovery

Assume the agent will make mistakes; make them clearly fixable

What it means

Agents will make mistakes; what matters is how fixable those mistakes are. Recovery means giving users clear, safe ways to undo actions, correct errors, and guide future behavior. It makes systems feel less brittle and more collaborative.

Why this matters

Without recovery, even small errors can erode trust and stall progress. Clear ways to fix mistakes turn agent failures into moments of learning for both the system and the person using it.

Related patterns

Undo & redo support
This pattern is essential in agentic systems or tools with automation because it provides a safety net. It protects users from unintended consequences and reinforces their control.
Undo & Redo Support
1. Actions are reversible by default

The interface includes options like Undo or Revert for each automated change, giving users an immediate path to reverse agent decisions.

2. Justification for actions builds trust

A short, plain-language explanation helps users understand the rationale behind changes, reducing confusion and making recovery decisions easier.

3. Multiple levels of recovery available

Users can revert or approve individual changes or apply recovery to all actions at once. This offers both fine-grained control and bulk handling.
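The three behaviors above can be sketched as a small action log in code. This is a minimal illustration, not a specific library: `AgentAction` and `RecoveryLog` are invented names, and each action carries both its change and the inverse that reverts it.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AgentAction:
    """One automated change paired with the inverse that reverts it."""
    description: str               # plain-language rationale shown to the user
    apply: Callable[[], None]
    revert: Callable[[], None]

@dataclass
class RecoveryLog:
    """Tracks applied actions so each can be undone individually or in bulk."""
    applied: List[AgentAction] = field(default_factory=list)

    def do(self, action: AgentAction) -> None:
        action.apply()
        self.applied.append(action)

    def undo(self, index: int) -> None:
        """Fine-grained recovery: revert a single action by position."""
        self.applied.pop(index).revert()

    def undo_all(self) -> None:
        """Bulk recovery: revert every applied action, newest first."""
        while self.applied:
            self.applied.pop().revert()
```

A UI built on this would render each action's `description` next to its Undo control, so the rationale travels with the recovery option.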

Editable outputs
Agents should hand control back to the user. Editable outputs ensure that humans retain authorship and can easily correct or improve AI-generated content.
Editable Outputs
1. Human-in-the-loop decision making

The interface shows multiple alternatives, but waits for the user to select one. This maintains control and avoids premature execution.

2. Language supports co-creation

The AI’s phrasing encourages collaboration, reinforcing the user as the final authority rather than a passive observer.

3. Selected output is not final

Once the user picks an option, the system surfaces editable fields (e.g., dropdowns, input controls) instead of applying the change directly. This allows precise customization.
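The flow above can be sketched as a draft-until-confirmed pipeline. The `Proposal` type and helper names are hypothetical; the point is only that nothing executes until the user selects, edits, and explicitly confirms.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Proposal:
    """An AI suggestion that stays a draft until the user confirms it."""
    fields: Dict[str, str]
    confirmed: bool = False

def propose(options: List[Dict[str, str]]) -> List[Proposal]:
    """Surface every alternative; none is applied at this point."""
    return [Proposal(fields=dict(o)) for o in options]

def select_for_editing(proposals: List[Proposal], index: int) -> Proposal:
    """The user's pick becomes editable fields, not an applied change."""
    return proposals[index]

def apply_edits(proposal: Proposal, edits: Dict[str, str]) -> Proposal:
    """The human can rewrite any field before confirming."""
    proposal.fields.update(edits)
    return proposal

def confirm(proposal: Proposal) -> Dict[str, str]:
    """Only an explicit confirmation makes the output final."""
    proposal.confirmed = True
    return proposal.fields
```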

Safe defaults
Defaulting to conservative actions prevents harm and sets user-friendly expectations, particularly in early use or high-risk environments.
Safe Defaults
1. Builds trust through predictable, gradual control

Safe, consistent defaults help users gain confidence and expand control at their own pace.

2

Activation/Deactivation requires explicit user intent

Features that could affect security or behavior are opt-in only. This ensures users can explore safely and expand functionality on their terms.
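One way to encode "opt-in only" is to make risky capabilities refuse to enable themselves without explicit consent. The feature names and `AgentSettings` class below are assumptions for illustration, not a real API.

```python
from dataclasses import dataclass, field
from typing import Set

# Hypothetical examples of capabilities that should never be on by default.
RISKY_FEATURES = {"auto_execute", "external_network_access"}

@dataclass
class AgentSettings:
    """Conservative defaults: every risky capability starts disabled."""
    enabled: Set[str] = field(default_factory=set)

    def enable(self, feature: str, *, user_confirmed: bool = False) -> None:
        """Risky features need explicit user intent; safe ones just switch on."""
        if feature in RISKY_FEATURES and not user_confirmed:
            raise PermissionError(
                f"{feature!r} is opt-in and requires explicit confirmation"
            )
        self.enabled.add(feature)
```

The design choice here is that the safe path is the zero-effort path: users expand functionality only by taking a deliberate action.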

Escalation paths
Agents should never trap users. Providing clear escape routes to human assistance or manual control is vital for safety and trust.
Escalation Paths
1. Manual input and escalation always available

Users can directly provide input or ask their own questions at any time, ensuring they remain in control and can override or guide the agent when needed.

2. Clear option to proceed independently

The “Go to terminal” button offers an immediate escape route: users can skip the assistant and take action themselves without friction or waiting for approval.
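The escape route can be modeled as a fallback wrapper, a sketch that assumes the agent and the manual path are simple callables. After repeated agent failures, the task is handed to the user instead of erroring out.

```python
from typing import Any, Callable, Dict

def run_with_escalation(
    task: str,
    agent_step: Callable[[str], Any],
    manual_step: Callable[[str], Any],
    max_attempts: int = 2,
) -> Dict[str, Any]:
    """Try the agent first, but never trap the user: after max_attempts
    failures, escalate to the manual path instead of failing outright."""
    for _ in range(max_attempts):
        try:
            return {"handled_by": "agent", "result": agent_step(task)}
        except Exception:
            continue  # agent failed; retry, then escalate
    return {"handled_by": "user", "result": manual_step(task)}
```

In a real interface the manual path should also be reachable at any time (as with the terminal button above), not only after the agent has failed.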

How to implement

Design agents to offer fallback options or manual alternatives instead of total failure
Use feedback from failure and recovery experiences to continuously improve system behavior
Make recovery options easy to find, context-sensitive, and layered from simple to advanced controls
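The last point, layered and context-sensitive recovery, might look like the sketch below. The `change` keys (`part_of_batch`, `risky`) are invented for illustration; the idea is to order options from the simplest control to the most advanced.

```python
from typing import Dict, List, Tuple

def recovery_options(change: Dict[str, object]) -> List[Tuple[str, str]]:
    """Build a recovery menu for one change, ordered from the simplest
    option (single undo) to the most advanced (escalation)."""
    options = [("undo", f"Revert \"{change['description']}\"")]
    if change.get("part_of_batch"):
        options.append(("undo_all", "Revert all related changes"))
    if change.get("risky"):
        options.append(("escalate", "Hand off to manual review"))
    return options
```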

Common pitfalls

Failing to learn from recovery events
Failing to analyze recovery patterns can lead to repeated mistakes, missing the opportunity to learn from user corrections and improve AI performance over time.
Lack of granular control
Using only high-level revision features frustrates users who want to undo specific AI actions without losing their own work, missing the need for precise, collaborative recovery.
Inconsistent recovery experiences
Recovery mechanisms that work differently across different parts of the system confuse users and create cognitive overhead.
Unclear recovery guidance
Users need clear explanations and recovery options when things go wrong. Vague errors and unclear paths lead to frustration and reduced trust.