SOUL.md tells your agent who it is. SKILL.md tells it what it can do. DECISION.md tells it how to choose.
AI agents are incredibly capable. They code, search, email, schedule, and execute complex workflows. But when your agent faces a tradeoff — speed vs. thoroughness, cost vs. quality, ask you vs. just act — it has no framework for choosing.
It either defaults to generic behavior, or it interrupts you for every decision.
| Layer | Defines | Status |
|---|---|---|
| Identity | Who the agent is | SOUL.md ✓ |
| Rules | Project constraints | AGENTS.md ✓ |
| Skills | What it can do | SKILL.md ✓ |
| Judgment | How it chooses | DECISION.md ✦ |
As agents become more autonomous, the judgment gap becomes the bottleneck.
DECISION.md is a plain-text markdown file that codifies how you think — your risk tolerance, tradeoff priorities, when to act autonomously vs. when to ask, what biases to watch for, and how to handle decisions across different domains.
It's not a personality quiz. Every component is grounded in decision science research: conjoint analysis, prospect theory, calibration methodology, cognitive bias literature. It produces specific, actionable rules your agent can follow — not vague adjectives.
The test: your agent should be able to make a decision you'd agree with, without asking you. If it can't predict which way you'd lean on a new tradeoff, your DECISION.md is too vague.
Work through AI-powered tradeoff scenarios. Five response types: choose A, choose B, "I'm torn", skip, or write your own take. Response time is tracked as a confidence signal. There's no fixed length: stop after 3 questions or continue to 50.
Your responses are analyzed for patterns, conflicts, and context shifts. The system generates a structured DECISION.md with specific rules, escalation triggers, and calibration data — not vague personality labels.
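What that structured output looks like can be sketched with a hypothetical excerpt. The section names, thresholds, and rules below are illustrative only, not the actual generated format:

```markdown
## Risk tolerance
- Default: act autonomously on reversible decisions with under ~$50 impact.
- Escalate: anything irreversible, public-facing, or involving credentials.

## Tradeoff priorities
1. Correctness over speed when output is user-visible.
2. Speed over polish for internal drafts and experiments.

## Escalation triggers
- Two sources disagree on a factual answer: ask, don't guess.
- Confidence below ~70% on a spending decision: ask.

## Known biases to counter
- Tends to over-research; cap exploration at 10 minutes before deciding.
```

Note how each line is a rule an agent can actually execute, rather than an adjective like "pragmatic" or "detail-oriented".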
The system throws edge-case scenarios at your DECISION.md and shows how your agent would decide. Agree or override — overrides refine the profile.
Download your DECISION.md as a portable markdown file. Drop it into Claude, Cursor, OpenClaw, or any agent configuration. It works anywhere a system prompt does.
In December 2025, researchers extracted Claude's internal "soul document" — the configuration that defines its identity, values, and principles. It's comprehensive about who Claude is. But it repeatedly says "use good judgment" without defining what good judgment looks like for you.
A risk-tolerant startup founder and a risk-averse accountant receive exactly the same generic judgment defaults.
> "Claude has to use judgment based on its principles and ethics, its knowledge of the world and itself, its inferences about context..."
DECISION.md provides the personalized judgment framework so the agent doesn't have to guess. It's not overriding safety principles — it's filling in the vast space of legitimate decisions where the soul document says "use good judgment" but gives no personal guidance.
DECISION.md is a portable text file. Drop it into the system prompt or configuration folder of any AI tool. No vendor lock-in, no platform dependency.
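As a minimal sketch of what "drop it into the system prompt" means in practice, the snippet below reads a DECISION.md file and prepends it to an agent's base prompt. The file path, prompt framing, and section header are illustrative assumptions, not a fixed API:

```python
# Sketch: injecting a DECISION.md judgment profile into an agent's
# system prompt. The framing text here is illustrative, not prescribed.
import tempfile
from pathlib import Path

def build_system_prompt(decision_file: str, base_prompt: str) -> str:
    """Append the user's judgment profile to the agent's base prompt."""
    profile = Path(decision_file).read_text(encoding="utf-8")
    return (
        f"{base_prompt}\n\n"
        "## User judgment profile (DECISION.md)\n"
        "When facing tradeoffs, follow these rules before asking the user:\n\n"
        f"{profile}"
    )

# Demo with a stand-in one-rule profile written to a temp directory.
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "DECISION.md"
    path.write_text("- Prefer shipping fast; escalate if cost exceeds $100.\n")
    prompt = build_system_prompt(str(path), "You are a helpful coding agent.")
```

Because the result is just a string, the same composition works whether the tool takes a system prompt parameter, a configuration folder, or a pasted preamble.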
Make the implicit explicit: your intuition is sometimes right, but unless you write it down, you never get to find out when it's wrong.
Your data stays in your browser. Nothing leaves your device.