Ch 3 — Constraint Documents & CLAUDE.md

Writing the instruction files that guide agent behavior
What Are Constraint Documents?
The files that tell agents how to behave
Definition
Constraint documents are structured text files placed in your repository that AI agents read before starting work. They contain rules, conventions, architectural decisions, and behavioral guidelines. Different platforms use different file names: CLAUDE.md (Anthropic's Claude Code), AGENTS.md (an open format adopted by Cursor and other tools), .cursorrules (Cursor, legacy), Codex rules (OpenAI), and Jules rules (Google).
Why They Work
Constraint documents work because they become part of the agent’s system context — loaded before any user interaction. The agent treats them as authoritative instructions. Well-written constraints dramatically reduce the frequency of architectural violations, style inconsistencies, and pattern mismatches. They’re the simplest and most impactful harness component.
Key insight: A single well-written CLAUDE.md file is the highest-ROI harness investment. It takes hours to write and saves weeks of correcting agent mistakes. Start here.
The 3-Tier Architecture
Root rules, skills, and deep reference guides
Three Tiers
Tier 1: Root Rules (always loaded). Files: CLAUDE.md / AGENTS.md. Content: global conventions and architecture. Budget: ~500-2,000 tokens.

Tier 2: Task Skills (loaded on demand). Files: .claude/skills/*.md, .cursor/skills/*.md. Content: task-specific instructions. Budget: ~500-1,500 tokens each.

Tier 3: Deep Reference (fetched when needed). Files: docs/agent-guides/*.md. Content: detailed implementation guides. Budget: ~2,000-5,000 tokens each.
Why Three Tiers
Tier 1 is always in context — it must be concise. Global rules that apply to every task.

Tier 2 loads when the agent recognizes a relevant task. Progressive disclosure keeps context lean.

Tier 3 is fetched only when the agent needs deep implementation details. Too large for routine loading.
Why it matters: Without tiers, you either overload context with everything (expensive, dilutes attention) or include too little (agent misses important rules). Tiers give you both coverage and efficiency.
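A sketch of how the three tiers might sit in a repository. Directory names follow the conventions listed above; the individual skill and guide file names here are hypothetical examples, so adjust them to your tool and project:

```
repo/
├── CLAUDE.md                      # Tier 1: root rules, always loaded
├── .claude/
│   └── skills/
│       ├── new-api-endpoint.md    # Tier 2: loaded on demand
│       └── db-migration.md
└── docs/
    └── agent-guides/
        └── auth-flow.md           # Tier 3: fetched when needed
```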
Writing Effective Root Rules
What belongs in CLAUDE.md
Structure
An effective root constraint document covers:

Project identity: What this project is, its tech stack, key dependencies.

Architecture rules: Layer boundaries, dependency direction, forbidden imports.

Code style: Naming conventions, file organization, comment policy.

Testing requirements: What must be tested, coverage expectations, test patterns.

Forbidden patterns: Explicit “never do this” rules based on past failures.
Example
```markdown
# CLAUDE.md

## Architecture
- Layered: Types → Config → Repo → Service → UI
- Services NEVER import from UI layer
- All DB access through repository layer

## Code Style
- TypeScript strict mode, no `any`
- Functions under 30 lines
- Named exports only (no default exports)

## Testing
- Every public function has a test
- Use `vitest`, not `jest`
- Mock external services, never DB

## NEVER
- Never use `console.log` in production
- Never commit `.env` files
- Never add dependencies without approval
```
Task-Specific Skills
Tier 2: Instructions that load on demand
Skill Files
Skill files are task-specific instruction sets stored in a skills directory. When the agent recognizes a task that matches a skill (e.g., “create a new API endpoint”), it loads the corresponding skill file. This gives the agent detailed, task-specific guidance without bloating the root context with instructions for every possible task.
Skill Format
```markdown
# .claude/skills/new-api-endpoint.md
---
name: Create API Endpoint
trigger: new endpoint, new route, add API
---

## Steps
1. Create route in src/routes/
2. Create controller in src/controllers/
3. Create service in src/services/
4. Add validation schema
5. Add integration test
6. Update OpenAPI spec
```
When to Create Skills
Create a skill when you notice the agent repeatedly making the same mistakes on a specific type of task. If the agent always forgets to update the OpenAPI spec when creating endpoints, that’s a skill. If it always puts validation in the wrong layer, that’s a skill. Skills are born from observed failures.
Key insight: Skills are the progressive disclosure mechanism for agent instructions. They keep the root context lean while ensuring the agent has detailed guidance when it needs it. Think of them as just-in-time documentation.
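The on-demand loading step can be sketched as a trigger match over the skill frontmatter. This is an illustrative mechanism under simplifying assumptions (flat `key: value` frontmatter, substring matching), not any specific tool's implementation; real agents use richer matching:

```python
import re

def parse_frontmatter(text: str) -> dict:
    """Extract key: value pairs from a '---'-delimited frontmatter block."""
    match = re.search(r"^---\n(.*?)\n---", text, re.DOTALL)
    fields = {}
    if match:
        for line in match.group(1).splitlines():
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def matching_skills(task: str, skills: dict[str, str]) -> list[str]:
    """Return the names of skills whose trigger phrases appear in the task."""
    hits = []
    for name, text in skills.items():
        triggers = parse_frontmatter(text).get("trigger", "")
        if any(t.strip() and t.strip().lower() in task.lower()
               for t in triggers.split(",")):
            hits.append(name)
    return hits

skill = """---
name: Create API Endpoint
trigger: new endpoint, new route, add API
---
## Steps
1. Create route in src/routes/
"""
```

With the skill above, a task like "add a new endpoint for user signup" matches the `new endpoint` trigger and loads the skill; an unrelated task loads nothing.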
Common Mistakes
What goes wrong with constraint documents
Too Vague
“Write clean code” is useless. The agent doesn’t know what “clean” means in your context. “Functions must be under 30 lines, use named exports, no type assertions” is actionable. Constraints must be specific enough that you could write a linter rule for them.
Too Long
A 5,000-token CLAUDE.md consumes significant context on every request. The agent’s attention is finite — burying critical rules in a wall of text means they get ignored. Keep root rules under 2,000 tokens. Move detailed guidance to skills and reference docs.
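A quick budget check can be automated. The sketch below uses the rough rule of thumb of ~4 characters per token for English prose; the `check_budget` helper and the 2,000-token limit are illustrative, and a real tokenizer should be used for precise counts:

```python
def rough_token_count(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English
    prose). Use your model's actual tokenizer for precise budgets."""
    return max(1, len(text) // 4)

def check_budget(text: str, limit: int = 2000) -> bool:
    """Warn when a root constraint file exceeds its token budget."""
    tokens = rough_token_count(text)
    print(f"~{tokens} tokens (limit {limit})")
    return tokens <= limit

rules = "# CLAUDE.md\n- Functions under 30 lines\n- Named exports only\n"
check_budget(rules)
```

Run this in CI against CLAUDE.md so the file cannot silently grow past its budget.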
Contradictory
When constraints contradict each other, the agent picks one arbitrarily. “Always use functional components” plus “extend BaseComponent for all UI elements” creates confusion. Review constraints for internal consistency before deploying.
Critical in AI: The most common mistake is writing constraints for humans, not agents. Agents are literal. “Prefer X over Y” is ambiguous. “Always use X. Never use Y.” is clear. Write constraints as if you’re writing code: precise, unambiguous, testable.
Platform Differences
CLAUDE.md vs AGENTS.md vs .cursorrules
File Comparison
CLAUDE.md (Anthropic/Claude Code): root of repo, always loaded. Supports directory-level overrides. Markdown format.

AGENTS.md (Cursor): root of repo, always loaded. Can reference .cursor/rules/. Markdown format.

.cursorrules (Cursor, legacy): root of repo. Being replaced by AGENTS.md. Plain text or markdown.

Codex rules (OpenAI): project configuration. JSON-based constraints. Integrated with the Codex CLI.
Convergence
Despite different file names, all platforms have converged on the same underlying pattern: structured markdown files in the repository root that define agent behavior. The content structure is nearly identical across platforms. A well-written CLAUDE.md can be adapted to AGENTS.md with minimal changes.
Rule of thumb: Write your constraints in platform-agnostic markdown. Keep a single source of truth and generate platform-specific files from it. This prevents drift when you use multiple AI tools on the same codebase.
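The single-source-of-truth approach can be as simple as a sync script run from a pre-commit hook or CI. This is a minimal sketch; the source filename `agent-constraints.md` and the target list are assumptions to adapt to your setup:

```python
from pathlib import Path

# Platform-specific filenames to generate from the single source of truth.
PLATFORM_FILES = ["CLAUDE.md", "AGENTS.md"]

def sync_constraints(source: Path, repo_root: Path) -> list[str]:
    """Copy the canonical constraint file to each platform filename.

    Returns the list of files that were (re)written, so a CI job can
    fail when the generated files had drifted from the source.
    """
    text = source.read_text()
    written = []
    for name in PLATFORM_FILES:
        target = repo_root / name
        if not target.exists() or target.read_text() != text:
            target.write_text(text)
            written.append(name)
    return written
```

Calling `sync_constraints(Path("agent-constraints.md"), Path("."))` rewrites only the files that are out of date; an empty return value means everything is already in sync.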
Testing Your Constraints
How to verify that agents actually follow them
The Problem
Writing constraints is easy. Knowing whether agents follow them is hard. Without testing, you’re guessing. A constraint that the agent ignores 30% of the time is worse than no constraint — it creates a false sense of security.
Testing Methods
Probe tasks: Give the agent tasks designed to trigger specific constraints. Does it follow the rule?

Linter enforcement: Back up constraints with automated linting. If the constraint says “no default exports,” add an ESLint rule.

Review audits: Periodically review agent output against the constraint document. Track compliance rates per rule.
Compliance Tracking
Track which constraints are followed consistently and which are ignored. Constraints with low compliance rates need to be either rewritten (maybe they’re ambiguous), reinforced (backed by a linter), or removed (maybe they’re unreasonable). A constraint document is a living document that improves through measurement.
Key insight: Every constraint in your document should be testable. If you can’t verify whether the agent followed it, the constraint is too vague. Rewrite it until it’s binary: followed or not followed.
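Compliance tracking over probe tasks and review audits can be sketched as a simple aggregation. The rule names, the (rule, followed) result format, and the 90% threshold below are illustrative assumptions:

```python
from collections import defaultdict

def compliance_rates(results: list[tuple[str, bool]]) -> dict[str, float]:
    """Aggregate (rule_id, followed) observations into a pass rate per rule."""
    passed: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for rule, followed in results:
        total[rule] += 1
        passed[rule] += int(followed)
    return {rule: passed[rule] / total[rule] for rule in total}

# Observations collected from probe tasks or review audits.
results = [
    ("no-default-exports", True),
    ("no-default-exports", True),
    ("no-default-exports", False),
    ("functions-under-30-lines", True),
]
rates = compliance_rates(results)

# Rules below ~90% compliance are candidates for rewriting, linter
# backing, or removal.
flagged = sorted(rule for rule, rate in rates.items() if rate < 0.9)
```

Here `no-default-exports` is followed only two times out of three, so it gets flagged for a rewrite or an ESLint backstop.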
Evolving Your Constraints
The constraint lifecycle
The Lifecycle
1. Observe failure: The agent makes a specific mistake.

2. Write constraint: Add a rule that prevents the mistake.

3. Test constraint: Verify the agent follows it.

4. Monitor compliance: Track whether the rule holds over time.

5. Refine or retire: Improve unclear rules, remove obsolete ones.
Constraint Hygiene
Constraint documents accumulate cruft over time. Rules added for old patterns, constraints for deprecated features, duplicates from different authors. Review your constraint document quarterly. Remove rules that no longer apply. Consolidate duplicates. Verify that every rule addresses a real, current failure mode.
Key insight: The best constraint documents are short, specific, and constantly evolving. They’re not written once — they’re maintained like code. Every rule earns its place by preventing a documented failure. Rules that don’t prevent failures are noise that dilutes the signal.