Ch 8 — Prompt Patterns & Reusable Templates

The Critic, Persona Chain, Decomposer, and more — battle-tested patterns that work in any domain
Why Prompt Patterns?
Design patterns for prompts — reusable structures that work across domains
The Concept
Software engineering has design patterns (Singleton, Observer, Factory). Prompt engineering has them too. A prompt pattern is a reusable prompt structure that reliably produces better output for a specific type of task. Learn the pattern once, apply it everywhere.

You’ve already seen some patterns implicitly: decomposition (Ch 5), self-reflection (Ch 5), and few-shot examples (Ch 3). This chapter formalizes the most powerful ones into copy-paste templates.
The Patterns We’ll Cover
1. The Critic: Generate → Critique → Rewrite. Three passes for dramatically better output.

2. The Persona Chain: Multiple personas in sequence — one generates, another reviews, a third fixes.

3. The Decomposer: Break a complex task into subtasks, solve each, combine.

4. The Flip: Ask the model to argue the opposite side, then synthesize.

Each pattern is a prompt architecture — a way of structuring the interaction that consistently produces better results than a single prompt.
Key insight: The difference between a junior and senior prompt engineer isn’t vocabulary — it’s pattern recognition. Seniors see a task and immediately know which pattern to apply. This chapter gives you that pattern library.
Pattern 1: The Critic (Generate → Critique → Rewrite)
The single most impactful pattern for quality-critical output
The Pattern
Pass 1 — Generate:
"Write [the thing]."

Pass 2 — Critique:
"Now review what you wrote. Check for:
- [specific quality criterion 1]
- [specific quality criterion 2]
- [specific quality criterion 3]
List every issue you find."

Pass 3 — Rewrite:
"Now rewrite, fixing every issue from your critique."
Why It Works
The model is better at evaluating than generating. When it writes the first draft, it’s optimizing for “plausible next token.” When it critiques, it’s evaluating against specific criteria. The rewrite combines both: it has the original content and the critique to guide improvements.

This is the same reason code reviews work — a fresh perspective (even from the same person) catches things the author missed.
Example: Technical Blog Post
Pass 1:
"Write a 500-word blog post explaining database indexing for junior developers."

Pass 2:
"Critique this post for:
- Technical accuracy
- Missing important concepts
- Unclear explanations
- Jargon without definitions
- Missing practical examples"

Model critique finds:
1. Didn't explain what a B-tree is
2. Used "cardinality" without defining it
3. No example of CREATE INDEX syntax
4. Didn't mention when NOT to index
5. Missing: composite indexes

Pass 3:
"Rewrite fixing all 5 issues."
Key insight: The critique step is where the magic happens. Be specific about what to check — “review for quality” is too vague. “Check for technical accuracy, missing concepts, and jargon without definitions” gives the model a concrete checklist to evaluate against.
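The three passes are easy to wire up as one conversation. Below is a minimal Python sketch of the loop; `call_llm` is a hypothetical placeholder for whatever chat API you use, stubbed here with canned replies so the example runs offline.

```python
def call_llm(messages):
    """Stub standing in for a real chat API: returns a canned reply
    keyed on the last user message so the sketch runs offline."""
    last = messages[-1]["content"]
    if last.startswith("Write"):
        return "First draft of the requested text..."
    if last.startswith("Now review"):
        return "Issues found: 1) jargon undefined 2) no concrete example"
    return "Final rewrite addressing all listed issues."

def critic_pattern(task, criteria):
    """Run Generate -> Critique -> Rewrite in a single conversation."""
    # Pass 1 — Generate
    messages = [{"role": "user", "content": f"Write {task}."}]
    draft = call_llm(messages)
    messages.append({"role": "assistant", "content": draft})

    # Pass 2 — Critique against an explicit checklist
    checklist = "\n".join(f"- {c}" for c in criteria)
    messages.append({"role": "user", "content":
        f"Now review what you wrote. Check for:\n{checklist}\n"
        "List every issue you find."})
    critique = call_llm(messages)
    messages.append({"role": "assistant", "content": critique})

    # Pass 3 — Rewrite using draft + critique in context
    messages.append({"role": "user", "content":
        "Now rewrite, fixing every issue from your critique."})
    return call_llm(messages)

final = critic_pattern(
    "a 500-word blog post explaining database indexing",
    ["technical accuracy", "jargon without definitions", "practical examples"],
)
```

Note that all three passes stay in one message list, so the rewrite sees both the draft and the critique.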
The Critic in Action: Optimizing a SQL Query
First draft has an N+1 problem — the critique catches it — the rewrite uses JOINs
Pass 1: Generate
Prompt: "Write a SQL query to get all orders with customer names and product details for orders placed in the last 30 days. Tables: orders, customers, products, order_items." First draft: SELECT o.id, o.created_at, (SELECT name FROM customers WHERE id = o.customer_id) as customer, (SELECT name FROM products WHERE id = oi.product_id) as product FROM orders o JOIN order_items oi ON oi.order_id = o.id WHERE o.created_at > NOW() - INTERVAL '30 days';
Pass 2: Critique
Prompt:
"Review this SQL for:
- Performance issues (N+1, missing indexes)
- Correctness (edge cases, NULL handling)
- Readability (naming, formatting)"

Critique:
1. PERFORMANCE: Two correlated subqueries execute per row — this is an N+1 pattern. With 10K orders, that's 20K extra queries. Use JOINs instead.
2. CORRECTNESS: No handling for deleted customers (customer_id exists but customer row doesn't). Need LEFT JOIN.
3. CORRECTNESS: No ORDER BY — results are in arbitrary order.
4. READABILITY: Aliases are fine but column names could be more explicit.
Pass 3: Rewrite
SELECT
  o.id AS order_id,
  o.created_at AS order_date,
  COALESCE(c.name, 'Deleted Customer') AS customer_name,
  p.name AS product_name,
  oi.quantity,
  oi.unit_price
FROM orders o
JOIN order_items oi ON oi.order_id = o.id
LEFT JOIN customers c ON c.id = o.customer_id
JOIN products p ON p.id = oi.product_id
WHERE o.created_at > NOW() - INTERVAL '30 days'
ORDER BY o.created_at DESC;
First Draft
Correlated subqueries (N+1), no NULL handling, no ORDER BY. Works but performs terribly at scale.
After Critic Pattern
Proper JOINs, LEFT JOIN for deleted customers, COALESCE for NULLs, explicit ORDER BY. Production-ready.
Key insight: The Critic pattern turned a “works but slow” query into a production-ready one in 3 passes. The model knew about JOINs all along — it just didn’t apply that knowledge until the critique forced it to evaluate its own work.
Pattern 2: The Persona Chain
Multiple expert personas in sequence — each one adds a different perspective
The Pattern
Step 1 — Persona A generates:
"As a [role A], [do the task]."

Step 2 — Persona B reviews:
"Now, as a [role B], review what Persona A produced. Focus on [B's area of expertise]."

Step 3 — Persona A fixes:
"Now, as [role A] again, incorporate the feedback from [role B] and produce the final version."
Example: Secure API Endpoint
Step 1:
"As a backend developer, write a Flask endpoint for user registration that accepts name, email, and password."

Step 2:
"As a security engineer, audit this endpoint for vulnerabilities. Check for: input validation, password handling, SQL injection, rate limiting, error information leakage."

Step 3:
"As the backend developer, fix every security issue found and produce the final, secure version."
What the Security Persona Finds
Security audit findings:
1. CRITICAL: Password stored in plaintext → Use bcrypt with salt rounds ≥ 12
2. HIGH: No input validation on email → Validate format, check length
3. HIGH: No rate limiting → Add per-IP rate limit (5 reg/hour)
4. MEDIUM: Error returns full traceback → Return generic error to client, log details server-side
5. LOW: No CORS configuration → Restrict to your frontend domain
Why Personas Work
Each persona activates a different “knowledge cluster” in the model. The developer persona optimizes for functionality. The security persona optimizes for safety. By chaining them, you get code that’s both functional and secure — something a single prompt rarely achieves.
Key insight: The Persona Chain is like having a team review. Developer writes, security reviews, developer fixes. You can extend this: add a “performance engineer” persona, a “UX designer” persona, or a “tech lead” persona. Each adds a perspective the others miss.
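The chain is the same conversation loop as the Critic, with a role prefix on each turn. A minimal sketch follows; `call_llm` is a hypothetical stub (canned replies, offline) standing in for a real chat API, and the role names are just the ones from the example above.

```python
def call_llm(messages):
    """Stub: canned replies keyed on the last user message."""
    last = messages[-1]["content"]
    if last.startswith("As a backend developer, write"):
        return "def register(): ...  # stores password in plaintext"
    if last.startswith("As a security engineer"):
        return "1. CRITICAL: plaintext password -> use bcrypt"
    return "def register(): ...  # validates input, hashes with bcrypt"

def persona_chain(task, role_a, role_b, focus):
    """Role A generates, Role B reviews, Role A incorporates the review."""
    messages = [{"role": "user", "content": f"As a {role_a}, {task}."}]
    draft = call_llm(messages)
    messages.append({"role": "assistant", "content": draft})

    messages.append({"role": "user", "content":
        f"As a {role_b}, review what was produced. Focus on {focus}."})
    review = call_llm(messages)
    messages.append({"role": "assistant", "content": review})

    messages.append({"role": "user", "content":
        f"As the {role_a} again, incorporate the feedback "
        "and produce the final version."})
    return call_llm(messages)

secure_endpoint = persona_chain(
    "write a Flask endpoint for user registration",
    "backend developer",
    "security engineer",
    "input validation and password handling",
)
```

Adding a third reviewer (say, a performance engineer) is one more review/fix round in the same loop.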
Pattern 3: The Decomposer
Break a complex task into subtasks, solve each independently, then combine
The Pattern
Step 1 — Decompose:
"I need to [complex task]. Break this into a numbered list of subtasks that can be solved independently."

Step 2 — Solve each:
"Now solve subtask 1: [description]"
"Now solve subtask 2: [description]"
...

Step 3 — Combine:
"Now combine all subtask solutions into a coherent final output."
Example: API Test Plan
Prompt:
"Write a comprehensive test plan for our /api/checkout endpoint."

Model decomposes into:
1. List all input parameters and their valid ranges
2. Identify happy path scenarios
3. Identify error/edge case scenarios
4. Define expected responses for each
5. Prioritize by risk and frequency

Each subtask gets full attention instead of the model trying to do everything at once.
Single Prompt vs Decomposed
Single Prompt
“Write a test plan for /api/checkout”

Gets 8–10 generic test cases. Misses edge cases, doesn’t consider error states, no prioritization.
Decomposed
Step 1 finds 12 input parameters. Step 3 identifies 23 edge cases including race conditions and partial failures. Step 5 prioritizes by business impact. 40+ test cases total.
When to Use
Use the Decomposer when:
• The task has multiple distinct aspects (testing, migration, architecture)
• A single prompt gives shallow results
• You need comprehensive coverage, not just a quick answer
• The task would take a human multiple hours to do well
Key insight: The Decomposer works because it converts one hard problem into several easy problems. The model can handle each subtask with full attention and context, rather than splitting its “cognitive budget” across everything at once.
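The one fiddly part of automating the Decomposer is parsing the numbered list back out of the model's plan. Here is a sketch under the same assumption as before: `call_llm` is a hypothetical offline stub, and the regex-based parser assumes a plain "1. ..." list format.

```python
import re

def call_llm(prompt):
    """Stub: canned replies keyed on the prompt prefix."""
    if prompt.startswith("I need to"):
        return ("1. List input parameters\n"
                "2. Identify happy paths\n"
                "3. Identify edge cases")
    if prompt.startswith("Now solve subtask"):
        return f"Solution for: {prompt.split(': ', 1)[1]}"
    return "Combined final output."

def parse_numbered_list(text):
    """Extract the items from a '1. ...' style numbered list."""
    return [m.group(1).strip()
            for m in re.finditer(r"^\d+\.\s*(.+)$", text, re.M)]

def decomposer(task):
    """Decompose -> solve each subtask in its own call -> combine."""
    plan = call_llm(f"I need to {task}. Break this into a numbered list "
                    "of subtasks that can be solved independently.")
    subtasks = parse_numbered_list(plan)
    solutions = [call_llm(f"Now solve subtask: {s}") for s in subtasks]
    return call_llm("Now combine all subtask solutions into a coherent "
                    "final output:\n\n" + "\n\n".join(solutions))

plan_result = decomposer("write a test plan for /api/checkout")
```

Because each subtask is a separate call, a weak subtask answer can be retried without re-running the whole pipeline.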
Pattern 4: The Flip (Argue the Opposite)
Force the model to consider the other side — then synthesize a balanced answer
The Pattern
Step 1 — Argue FOR:
"Make the strongest case for [option A]."

Step 2 — Argue AGAINST:
"Now make the strongest case against [option A] and for [option B]."

Step 3 — Synthesize:
"Now, considering both arguments, give your balanced recommendation with specific conditions for when each option is better."
Example: Monolith vs Microservices
Step 1:
"Make the strongest case for migrating our Django monolith to microservices. We have 15 developers, 500K daily users, and deploy 3x/week."

Step 2:
"Now make the strongest case for keeping the monolith and improving it instead."

Step 3:
"Given both arguments, what should we actually do? Be specific about our team size and scale."
Why This Beats a Direct Question
If you ask “Should we use microservices?” directly, the model tends to give a balanced-but-wishy-washy answer: “It depends on your needs...” The Flip pattern forces it to:

1. Build the strongest possible case for each side (no hedging)
2. Identify the specific trade-offs (not generic pros/cons)
3. Make a concrete recommendation based on your context

The synthesis in Step 3 is dramatically better because the model has already explored both sides deeply.
Best For
• Architecture decisions (monolith vs microservices, SQL vs NoSQL)
• Technology choices (React vs Vue, Python vs Go)
• Strategy decisions (build vs buy, hire vs outsource)
• Any decision where both options have legitimate merit
Key insight: The Flip pattern exploits a weakness in LLMs: they tend to agree with the framing of the question. By forcing both sides, you get a genuinely balanced analysis instead of confirmation bias. The synthesis step is where the real value emerges.
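The Flip is the same three-turn conversation shape as the other patterns. A minimal sketch, again with `call_llm` as a hypothetical offline stub (here keyed on the turn count rather than the prompt text):

```python
def call_llm(messages):
    """Stub: reply based on how many user turns have happened."""
    user_turns = sum(1 for m in messages if m["role"] == "user")
    if user_turns == 1:
        return "Strongest case FOR option A..."
    if user_turns == 2:
        return "Strongest case AGAINST option A, FOR option B..."
    return "Balanced recommendation with conditions for each option."

def flip(option_a, option_b, context):
    """Argue FOR -> argue AGAINST -> synthesize, in one conversation."""
    messages = [{"role": "user", "content":
        f"Make the strongest case for {option_a}. Context: {context}"}]
    case_for = call_llm(messages)
    messages.append({"role": "assistant", "content": case_for})

    messages.append({"role": "user", "content":
        f"Now make the strongest case against {option_a} "
        f"and for {option_b}."})
    case_against = call_llm(messages)
    messages.append({"role": "assistant", "content": case_against})

    messages.append({"role": "user", "content":
        "Considering both arguments, give your balanced recommendation "
        "with specific conditions for when each option is better."})
    return call_llm(messages)

decision = flip("microservices", "keeping the monolith",
                "15 developers, 500K daily users, deploy 3x/week")
```

Keeping all three turns in one conversation matters: the synthesis turn can only weigh both cases if both are in context.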
Combining Patterns: The Power Stack
Real-world tasks often need multiple patterns chained together
Pattern Combinations
Decomposer + Critic:
Break the task into subtasks → solve each → critique the combined result → rewrite. Best for comprehensive documents (RFCs, design docs).

Persona Chain + Critic:
Developer writes → Security reviews → Developer fixes → Tech lead critiques the whole thing. Best for production code.

Flip + Decomposer:
Argue both sides → decompose the winning argument into action items → solve each. Best for strategic decisions that need execution plans.
Example: Design Document
# Decomposer + Critic for a design doc
1. Decompose: "Break this design doc into sections: Problem, Constraints, Options, Recommendation, Risks, Implementation Plan."
2. Solve each section individually with full context and detail.
3. Combine into a single document.
4. Critique: "Review this design doc as a staff engineer. Check for: missing failure modes, unrealistic timelines, unaddressed scalability concerns, missing rollback plan."
5. Rewrite incorporating the critique.
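One way to make patterns composable in code is to treat each stage as a prompt template whose output feeds the next stage. This is a sketch of that idea, not a prescribed framework; `call_llm` is a hypothetical offline stub and the stage prompts are illustrative.

```python
def call_llm(prompt):
    """Stub: deterministic echo so the pipeline sketch runs offline."""
    return f"[model output for: {prompt[:40]}]"

def run_pipeline(task, stages):
    """Chain pattern stages: each stage is a template with a {prev}
    slot that receives the previous stage's output."""
    text = task
    for template in stages:
        text = call_llm(template.format(prev=text))
    return text

# Decomposer + Critic as a three-stage chain for a design doc
design_doc = run_pipeline(
    "Migrate billing to an event-driven architecture",
    [
        "Break this into sections and draft each one fully: {prev}",
        "Review this draft as a staff engineer; list missing failure "
        "modes, unrealistic timelines, and scalability gaps: {prev}",
        "Rewrite the draft incorporating every critique point: {prev}",
    ],
)
```

Because stages are just strings, swapping the Critic for a Persona Chain means editing the list, not the plumbing.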
The Cost-Quality Trade-off
Each pattern pass costs tokens. A Decomposer + Critic pipeline might use 5–10x more tokens than a single prompt. Is it worth it?

For a one-off design doc: Absolutely. The extra $0.50 in API costs saves hours of revision.

For 10,000 daily API calls: Probably not. Use the simplest pattern that meets your quality bar.

Rule of thumb: Use multi-pattern pipelines for high-stakes, low-volume tasks. Use single-pass prompts for high-volume, lower-stakes tasks.
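The trade-off above is easy to sanity-check with back-of-envelope arithmetic. The per-token prices below are illustrative assumptions, not any provider's real rates, and the token counts are rough guesses for one prompt versus a roughly six-call pipeline with growing context.

```python
# Assumed (illustrative) pricing, dollars per token
PRICE_IN = 3.00 / 1_000_000    # assumed $3 per 1M input tokens
PRICE_OUT = 15.00 / 1_000_000  # assumed $15 per 1M output tokens

def cost(input_tokens, output_tokens):
    """Dollar cost of one workload at the assumed rates."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

single_pass = cost(1_000, 2_000)    # one prompt, one answer
pipeline = cost(8_000, 12_000)      # ~6 calls, context re-sent each time

# pipeline / single_pass lands around 6x at these numbers —
# inside the 5–10x range, but still well under a dollar per document.
```

At 10,000 runs a day, though, that same multiplier is the difference between a rounding error and a real line item, which is exactly the rule of thumb above.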
Key insight: Patterns are composable building blocks. The best prompt engineers don’t memorize complex mega-prompts — they combine simple patterns like LEGO bricks. Decomposer for breadth, Critic for quality, Persona Chain for perspective, Flip for balance.
Your Prompt Pattern Library
Quick reference — which pattern for which task
Pattern Quick Reference
THE CRITIC
Generate → Critique → Rewrite
Use for: writing, code, queries, docs
Cost: 3x tokens, high quality gain

THE PERSONA CHAIN
Role A generates → Role B reviews → Fix
Use for: code + security, content + SEO
Cost: 3x tokens, catches blind spots

THE DECOMPOSER
Break into subtasks → Solve each → Combine
Use for: test plans, migrations, audits
Cost: 2–5x tokens, much better coverage

THE FLIP
Argue for → Argue against → Synthesize
Use for: architecture, strategy, tech choice
Cost: 3x tokens, eliminates bias

THE VERIFIER (from Ch 5)
Solve → "Check your work"
Use for: math, planning, code, estimates
Cost: 1.2x tokens, catches errors
Decision Tree
# What am I trying to improve?
Output quality → The Critic
Coverage / completeness → The Decomposer
Multiple perspectives → The Persona Chain
Balanced decision → The Flip
Correctness → The Verifier

# How critical is this output?
Low stakes → Single prompt + Verifier
Medium stakes → One pattern
High stakes → Combine 2–3 patterns
Building Your Own Patterns
These five patterns cover most use cases, but you can create your own. The formula:

1. Identify a task where single prompts consistently underperform
2. Figure out why (missing perspective? shallow coverage? no verification?)
3. Design a multi-pass structure that addresses the specific weakness
4. Test it on 5+ examples to verify it consistently improves output
Key insight: Prompt patterns are your multiplier. A single well-chosen pattern can turn a mediocre output into an excellent one. Build a personal library of patterns that work for your specific domain, and you’ll spend less time fighting the model and more time shipping.