Clarity

Show the agent’s reasoning

What it means

Agents should make their reasoning, context, and confidence visible. Instead of acting like black boxes, they show how decisions are made so users can understand, question, or adjust them.

Why this matters

People make better decisions when they understand how an agent reached its conclusion. Clarity helps users spot errors, judge reliability, and decide when to lean in or push back.

Related patterns

Inline rationale
Agents articulate why they made recommendations. Rationale should be accessible, understandable, and relevant to help users make sense of the thinking.

1. Clear prioritization logic
The system openly communicates that severity and impact guide prioritization. This gives users clarity on how decisions are made, with no hidden logic.

2. Labeled severity & scope
Each incident includes severity tags like “High” or “Medium” and explicit impact stats (e.g., “233 Devices”). This provides transparent justification for prioritization choices.

3. Embedded justification in descriptions
Each incident explanation includes why it matters (e.g., “single point of failure” or “client deliverables are delayed”). This prevents second-guessing and builds trust in automated triage.
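
A minimal sketch of the data this pattern assumes: each incident carries its severity, an explicit impact figure, and an inline rationale, and the triage logic emits a plain-language explanation for its ordering. The TypeScript shapes and field names here are illustrative, not a prescribed API.

```typescript
// Hypothetical incident shape: severity and impact are the only inputs to triage.
type Severity = "High" | "Medium" | "Low";

interface Incident {
  id: string;
  title: string;
  severity: Severity;
  affectedDevices: number; // explicit impact stat, e.g. 233
  rationale: string;       // why this incident matters, shown inline with it
}

const severityRank: Record<Severity, number> = { High: 3, Medium: 2, Low: 1 };

// Prioritize by severity first, then by impact, and attach a human-readable
// explanation so the UI can show why each incident landed where it did.
function prioritize(incidents: Incident[]): Array<Incident & { explanation: string }> {
  return [...incidents]
    .sort(
      (a, b) =>
        severityRank[b.severity] - severityRank[a.severity] ||
        b.affectedDevices - a.affectedDevices
    )
    .map((incident) => ({
      ...incident,
      explanation: `${incident.severity} severity, ${incident.affectedDevices} devices affected: ${incident.rationale}`,
    }));
}
```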

Confidence & uncertainty displays
Disclosing confidence levels helps users interpret outcomes effectively and calibrate how much trust to place in them.

1. Reasoning linked to confidence
Explanations are paired with certainty indicators, showing why the system believes something and how strongly it believes it. This helps users validate or challenge the logic.

2. Confidence levels made visible
Each system insight or recommendation is accompanied by a clear degree of certainty, often represented with a percentage or visual indicator. This helps users assess how much trust to place in each suggestion.

3. Actions calibrated to certainty
The system tailors its suggested actions to its confidence level: more assertive steps when certainty is high, and more exploratory or cautious ones when confidence is low.

4. Low confidence is still shown
Even uncertain insights are surfaced rather than hidden, but they are clearly marked. This promotes transparency and allows human judgment to guide next steps, especially when automated logic is unsure.
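
The sketch below shows one way these callouts could fit together: a confidence score is banded, surfaced as a visible label, and used to calibrate the suggested action, while low-confidence insights are still returned. The thresholds and field names are assumptions for illustration only.

```typescript
// Hypothetical insight shape: every recommendation carries a confidence score (0–1).
interface Insight {
  summary: string;
  confidence: number; // e.g. 0.92, displayed as "92%"
}

type ConfidenceBand = "high" | "medium" | "low";

// Assumed thresholds; a real system would calibrate these to its own data.
function band(confidence: number): ConfidenceBand {
  if (confidence >= 0.8) return "high";
  if (confidence >= 0.5) return "medium";
  return "low";
}

// Pair the explanation with a visible certainty label and calibrate the suggested
// action to it: assertive when certain, exploratory when not. Low-confidence
// insights are still presented, just clearly marked.
function presentInsight(insight: Insight) {
  const confidenceBand = band(insight.confidence);
  const label = `${Math.round(insight.confidence * 100)}% confidence (${confidenceBand})`;
  const suggestedAction =
    confidenceBand === "high"
      ? "Apply the recommended fix"
      : confidenceBand === "medium"
      ? "Review the details before applying"
      : "Investigate manually; the automated analysis is uncertain";
  return { summary: insight.summary, label, suggestedAction };
}
```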

Source attribution
Revealing where information came from helps users verify and contextualize outputs, supporting accountability and enabling further inquiry.

1. Source labels are visually distinct and clickable
The source elements are styled for immediate recognition and are interactive (e.g., tags, badges, or buttons), improving usability and clarity.

2. Claims supported by cited references
Each recommendation is backed by named sources, allowing users to verify the rationale and explore more details independently.
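
A minimal sketch of how a recommendation might carry its citations, assuming each source has a short, recognizable label and a link the user can follow. The markdown-style rendering stands in for whatever badge or chip component a real UI would use.

```typescript
// Hypothetical source reference attached to each claim or recommendation.
interface SourceRef {
  label: string; // short, recognizable name shown as a tag or badge
  url: string;   // link the user can follow to verify the claim
}

interface Recommendation {
  claim: string;
  sources: SourceRef[];
}

// Render the claim followed by distinct, clickable source labels.
function renderWithSources(rec: Recommendation): string {
  const citations = rec.sources.map((s) => `[${s.label}](${s.url})`).join(" ");
  return `${rec.claim} ${citations}`;
}
```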

Alternatives & trade-offs
Showing what the agent didn’t choose and why helps users understand trade-offs. It creates transparency and supports participatory decision-making.

1. Consequences and benefits are explicit
Each option includes a clear summary of what it changes and what its effect will be, allowing the user to compare outcomes at a glance.

2. Multiple actions presented side-by-side
The interface surfaces more than one possible action instead of a single automated path. This supports user agency and accommodates different risk tolerances or priorities.

3. Supports informed trade-off decisions
By presenting pros and cons transparently, the system helps users make context-aware decisions, especially when no option is perfect.

4. Labels indicate duration and reversibility
Visual tags communicate whether an option is temporary, reversible, or long-term. This helps users understand not just the effect, but also the scope and risk.
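
One way to model such alternatives is sketched below: each option states what it changes, its expected effect, its pros and cons, and tags for duration and reversibility, so several options can be laid out side by side. The shape is an assumption for illustration, not a required schema.

```typescript
// Hypothetical shape for one of several alternatives the agent surfaces.
interface ActionOption {
  name: string;
  whatItChanges: string;                // plain-language summary of the change
  expectedEffect: string;               // what the user can expect afterwards
  pros: string[];
  cons: string[];
  reversible: boolean;                  // shown as a tag, e.g. "Reversible"
  duration: "temporary" | "long-term";  // shown as a tag alongside reversibility
}

// Present every option, including those the agent would not pick itself, so the
// user can weigh the trade-offs instead of receiving a single automated answer.
function compareOptions(options: ActionOption[]): string[] {
  return options.map((o) => {
    const tags = [o.duration, o.reversible ? "reversible" : "not reversible"].join(", ");
    return (
      `${o.name} (${tags}): ${o.whatItChanges} -> ${o.expectedEffect}\n` +
      `  + ${o.pros.join("; ")}\n` +
      `  - ${o.cons.join("; ")}`
    );
  });
}
```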

How to implement

Include reasoning explanations alongside every recommendation or decision
Make explanations accessible through plain language and visual aids
Use progressive disclosure to offer both quick summaries and detailed explanations
Show alternative options considered and why they were not chosen
Provide clear source citations and links for verification
Display confidence levels and uncertainty ranges where relevant
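
One possible shape that ties the checklist above together is sketched below: a recommendation object that carries a quick summary, optional detailed reasoning for progressive disclosure, and optional confidence, sources, and alternatives. The field names are illustrative assumptions, not a real API.

```typescript
// Hypothetical "explained recommendation" combining the implementation checklist.
interface ExplainedRecommendation {
  recommendation: string;
  summary: string;     // quick, plain-language rationale shown by default
  details?: string;    // longer explanation behind a "show more" control
  confidence?: number; // 0–1, displayed as a percentage when present
  sources?: { label: string; url: string }[];
  alternativesConsidered?: { option: string; whyNotChosen: string }[];
}

// Progressive disclosure: start with the summary, expand to the full reasoning on demand.
function explanationFor(rec: ExplainedRecommendation, expanded: boolean): string {
  return expanded && rec.details ? rec.details : rec.summary;
}
```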

Common pitfalls

Opaque decision logic
Users can’t tell why the agent made a choice
Over-explanation
Users are flooded with so much technical detail that they become overwhelmed