Teach your AI agent
how you think.

SOUL.md tells your agent who it is. SKILL.md tells it what it can do. DECISION.md tells it how to choose.

├── SOUL.md who your agent is
├── STYLE.md how it communicates
├── SKILL.md what it can do
├── MEMORY.md what it remembers
├── AGENTS.md project-specific rules
└── DECISION.md how it chooses under uncertainty ✦ new

Your agent can do everything.
It just can't decide anything.

AI agents are incredibly capable. They code, search, email, schedule, and execute complex workflows. But when your agent faces a tradeoff — speed vs. thoroughness, cost vs. quality, ask you vs. just act — it has no framework for choosing.

It either defaults to generic behavior, or it interrupts you for every decision.

| Layer    | Defines             | File        | Status |
|----------|---------------------|-------------|--------|
| Identity | Who the agent is    | SOUL.md     | ✓      |
| Rules    | Project constraints | AGENTS.md   | ✓      |
| Skills   | What it can do      | SKILL.md    | ✓      |
| Judgment | How it chooses      | DECISION.md | ✦      |

As agents become more autonomous, the judgment gap becomes the bottleneck.

A portable decision-making philosophy for your AI agent.

DECISION.md is a plain-text markdown file that codifies how you think — your risk tolerance, tradeoff priorities, when to act autonomously vs. when to ask, what biases to watch for, and how to handle decisions across different domains.

It's not a personality quiz. Every component is grounded in decision science research: conjoint analysis, prospect theory, calibration methodology, cognitive bias literature. It produces specific, actionable rules your agent can follow — not vague adjectives.

DECISION.md — example sarah_chen.md
# DECISION.md — Sarah's Decision Framework

## Decision Identity
Bias-to-action generalist who values learning over optimization.
Comfortable with calculated risk in career and creative domains,
conservative with financial and health decisions.

## Risk Profile
- Overall: Moderate-Aggressive
- Career: Aggressive — willing to take asymmetric bets
- Financial: Conservative — never risk >5% on a single position
- Health: Ultra-conservative — always consult a professional

## Autonomy Rules
- Act without asking if: cost < $50, reversible, low stakes
- Always ask if: involves other people, irreversible, cost > $500
- Escalation: Any legal, medical, or financial commitment

## Decision Speed
- Default: Bias-to-action
- Reversible decisions: Decide in <5 minutes
- Irreversible: Sleep on it, minimum 24 hours
- Tripwire: If deliberating >30 min on reversible, just decide

## Anti-Patterns
Things I do wrong — catch me:
- "I say yes to too many things" → Push back on new commitments
- "I anchor on first numbers" → Always seek a second data point
- "I over-research when anxious" → Name it, nudge me to decide

## Meta-Rules
- When in doubt: Bias toward action
- Confidence threshold for autonomous action: 85%
- Update frequency: Quarterly or after major life changes

The test: your agent should be able to make a decision you'd agree with, without asking you. If it can't predict which way you'd lean on a new tradeoff, the DECISION.md is too vague.
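The autonomy rules in the example above are concrete enough to be machine-checkable. Here is a minimal sketch of how an agent might apply them — the `Action` fields are hypothetical, and the thresholds are taken directly from Sarah's example file, not from any fixed DECISION.md schema:

```python
from dataclasses import dataclass

@dataclass
class Action:
    cost: float            # estimated dollar cost
    reversible: bool       # can the decision be undone cheaply?
    involves_others: bool  # does it commit other people?
    domain: str            # e.g. "career", "financial", "legal"

# Thresholds lifted from the example DECISION.md above.
ESCALATION_DOMAINS = {"legal", "medical", "financial"}

def decide_autonomy(action: Action) -> str:
    """Return 'act', 'ask', or 'escalate' per the example's autonomy rules."""
    if action.domain in ESCALATION_DOMAINS:
        return "escalate"  # any legal, medical, or financial commitment
    if action.involves_others or not action.reversible or action.cost > 500:
        return "ask"       # the "always ask" conditions
    if action.cost < 50 and action.reversible:
        return "act"       # cheap, reversible, low stakes
    return "ask"           # default to asking in the gray zone
```

For instance, `decide_autonomy(Action(cost=20, reversible=True, involves_others=False, domain="career"))` returns `"act"`, while any financial action escalates regardless of cost.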

Discover how you decide. Export it as a file.

Elicitation

Go through AI-powered tradeoff scenarios. Five response types: choose A, choose B, "I'm torn", skip, or write your own take. Response time is tracked as a confidence signal. No fixed length — answer as few as 3 questions or as many as 50.
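One way to picture the elicitation data: each answer records the response type and its latency, and decisive, fast answers carry more weight in synthesis. This is a sketch under assumptions — the type names and the decay curve are illustrative, not the product's actual scoring:

```python
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    CHOOSE_A = "A"
    CHOOSE_B = "B"
    TORN = "torn"
    SKIP = "skip"
    FREEFORM = "freeform"

@dataclass
class Answer:
    scenario_id: str
    response: Response
    seconds: float  # response latency, used as a confidence signal

def confidence_weight(answer: Answer) -> float:
    """Fast, decisive answers weigh more; 'torn' and skipped ones weigh zero."""
    if answer.response in (Response.TORN, Response.SKIP):
        return 0.0
    # Hypothetical decay: full weight under 5 s, tapering to a 0.2 floor.
    return max(0.2, min(1.0, 1.0 - (answer.seconds - 5) / 60))
```

A two-second "choose A" gets full weight; a 45-second deliberation gets roughly a third.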

Synthesis

Your responses are analyzed for patterns, conflicts, and context shifts. The system generates a structured DECISION.md with specific rules, escalation triggers, and calibration data — not vague personality labels.

Stress Test

The system throws edge-case scenarios at your DECISION.md and shows how your agent would decide. Agree or override — overrides refine the profile.

Export

Download your DECISION.md as a portable markdown file. Drop it into Claude, Cursor, OpenClaw, or any agent configuration. It works anywhere a system prompt does.

> create your DECISION.md

The soul document gave agents a conscience.
DECISION.md gives them judgment.

In December 2025, researchers extracted Claude's internal "soul document" — the configuration that defines its identity, values, and principles. It's comprehensive about who Claude is. But it repeatedly says "use good judgment" without defining what good judgment looks like for you.

A risk-tolerant startup founder and a risk-averse accountant receive the same generic judgment defaults. As agents become more autonomous, this gap becomes the bottleneck.

"Claude has to use judgment based on its principles and ethics, its knowledge of the world and itself, its inferences about context..."

— Claude's soul document

DECISION.md provides the personalized judgment framework so the agent doesn't have to guess. It's not overriding safety principles — it's filling in the vast space of legitimate decisions where the soul document says "use good judgment" but gives no personal guidance.

Plain markdown. Works with anything.

DECISION.md is a portable text file. Drop it into the system prompt or configuration folder of any AI tool. No vendor lock-in, no platform dependency.

Works with:
· Claude — Projects, system prompts, custom instructions
· OpenClaw — drop into .soul/ folder alongside SOUL.md
· Cursor / Claude Code — alongside AGENTS.md in project root
· ChatGPT / Gemini / Llama — paste into custom instructions
· LangChain / CrewAI / AutoGen — include in agent init prompt
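Because the file is plain markdown, "installing" it mostly means prepending it to whatever system prompt you already use. A minimal sketch — the file path, heading, and wiring are illustrative, not any vendor's API:

```python
from pathlib import Path

def build_system_prompt(base_prompt: str, decision_file: str = "DECISION.md") -> str:
    """Append the decision framework to an existing system prompt."""
    framework = Path(decision_file).read_text(encoding="utf-8")
    return (
        f"{base_prompt}\n\n"
        "## Decision framework (how the user chooses)\n"
        f"{framework}"
    )
```

The resulting string can be passed wherever a tool accepts a system prompt or custom instructions.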

Teach your agent how you think.

Make the implicit explicit — your intuition is sometimes right, but if you
don't make it explicit you don't get to find out when it's wrong.

Your data stays in your browser. Nothing leaves your device.