Ch 8 — Prompt-Driven Development

From vague requests to precise specifications — the craft of telling AI what to build
The Prompt-Driven Mindset
You are no longer just a coder — you are a specification writer
What Changes
In prompt-driven development, your primary output shifts from code to specifications. You describe what needs to be built, the constraints it must satisfy, and the patterns it should follow. The AI generates the implementation. Your job becomes defining the what and why, while the AI handles the how.
Why Precision Matters
A vague prompt produces code that needs ~60% rewriting. A structured, precise prompt produces code that needs only ~10% modification. The difference isn’t the model — it’s the quality of the instruction. Prompt precision correlates directly with output quality.
State the Outcome, Not the Steps
Tell the AI what you want to achieve, not how to achieve it. Let the agent determine the best implementation path. Micromanaging steps constrains the model and often produces worse results than letting it reason about the approach.
Micromanaging
“Create a variable called users. Then loop through the array. Inside the loop, check if the user is active. If so, push to a new array.”
Outcome-Focused
“Filter the users array to only include active users. Return a new array without mutating the original.”
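The outcome-focused prompt above might yield something like this sketch (the `User` shape is an assumption for illustration):

```typescript
type User = { id: number; name: string; active: boolean };

// Returns a new array containing only active users, without mutating the input.
function filterActiveUsers(users: User[]): User[] {
  return users.filter((user) => user.active);
}
```

Note that the model chose `filter` on its own; the prompt only specified the outcome (active users, no mutation).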
Key insight: The best prompts read like a product specification, not a tutorial. You define acceptance criteria; the AI writes the implementation. This is a fundamentally different skill from writing code yourself.
The Three-Layer Prompt Framework
Context → Task → Constraints — the structure that works
Layer 1: Context
Tell the AI about your project, stack, and environment. What framework are you using? What does the existing code look like? What patterns does the codebase follow? This layer is often handled by rules files (Ch 7), but you can supplement it in the prompt.
Layer 2: Task
Be specific about what needs to happen. Not “add login” but “implement email/password login with JWT tokens, returning a 200 with the token on success and a 401 with an error message on failure.” Include expected behavior, edge cases, and error handling.
Layer 3: Constraints & Format
Specify boundaries: what not to do, style requirements, performance expectations, and output format. “Don’t use any external libraries. Follow the existing error handling pattern in @src/utils/errors.ts. Include unit tests.”
The Framework in Action
// LAYER 1: CONTEXT
"We use Next.js 15 with App Router, Prisma ORM, and Zod for validation. Auth uses JWT stored in httpOnly cookies.

// LAYER 2: TASK
Create a POST /api/users endpoint that:
- Accepts { email, name, role } in the body
- Validates with Zod (email format, name 2-50 chars, role: admin|editor|viewer)
- Creates user in DB via Prisma
- Returns 201 with the created user
- Returns 409 if email already exists

// LAYER 3: CONSTRAINTS
Follow the pattern in @src/app/api/posts/route.ts.
Use the existing ApiResponse type from @src/types.
Require admin role (check auth middleware).
Add tests in @tests/api/users.test.ts."
Why it works: Each layer narrows the solution space. Context eliminates wrong-framework answers. Task eliminates wrong-behavior answers. Constraints eliminate wrong-style answers. What remains is exactly what you need.
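To make Layer 2 concrete, here is the validation the task layer describes, hand-rolled in plain TypeScript for illustration (the prompt itself asks for Zod; the function name and email regex are assumptions):

```typescript
type UserInput = { email: string; name: string; role: string };

// Mirrors the prompt's rules: email format, name 2-50 chars, role enum.
function validateUserInput(body: UserInput): string[] {
  const errors: string[] = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) errors.push("invalid email format");
  if (body.name.length < 2 || body.name.length > 50) errors.push("name must be 2-50 chars");
  if (!["admin", "editor", "viewer"].includes(body.role)) errors.push("invalid role");
  return errors;
}
```

Writing the rules out this explicitly in the prompt is what lets the model generate the equivalent Zod schema without guessing at boundaries.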
Task Decomposition: Break It Down
Why small, focused prompts beat monolithic requests
The Decomposition Principle
Large tasks produce large, error-prone outputs. The model loses track of requirements, introduces inconsistencies, and generates code that’s harder to review. Decompose by natural code boundaries: layers, concerns, or dependency order. Generate each unit separately with focused context, then integrate.
When to Decompose
• Task touches 3+ files
• Task has multiple independent concerns (UI + API + DB)
• Task requires different expertise (security, performance, UX)
• AI produces partial solutions or loses track of requirements
• You can’t review the output in one pass
Decomposition Example
// Instead of: "Build user registration"
// Decompose into:

Step 1: Create Zod validation schema for registration (email, password, name)
→ Review & approve

Step 2: Create Prisma migration for users table with email unique constraint, hashed password
→ Review & approve

Step 3: Create POST /api/auth/register endpoint using schema from Step 1, model from Step 2
→ Review & approve

Step 4: Create registration form component with validation, loading states, error display
→ Review & approve

Step 5: Add tests for endpoint and form
→ Review & approve
The review checkpoint: Each step ends with a review. This is critical. Errors caught at Step 2 are cheap to fix. Errors caught after all 5 steps are expensive. Decomposition creates natural review points that prevent cascading mistakes.
Before & After: Prompt Rewrites
Real examples showing how prompt quality transforms output quality
Example 1: API Endpoint
Before
“Add a notifications endpoint.”
After
“Create GET /api/notifications for authenticated users. Return { id, type, message, read, createdAt } with cursor-based pagination (limit, cursor params). Return 401 if unauthenticated. Add unit tests covering pagination and auth.”
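The cursor-based pagination requested in the rewritten prompt could be sketched like this (the `Notification` shape comes from the prompt; the helper name and cursor scheme are assumptions):

```typescript
type Notification = { id: string; type: string; message: string; read: boolean; createdAt: string };

// Returns up to `limit` items after the given cursor (an item id),
// plus the cursor to pass on the next request (null when exhausted).
function paginate(items: Notification[], limit: number, cursor?: string) {
  const start = cursor ? items.findIndex((n) => n.id === cursor) + 1 : 0;
  const page = items.slice(start, start + limit);
  const nextCursor = page.length === limit ? page[page.length - 1].id : null;
  return { items: page, nextCursor };
}
```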
Example 2: Bug Fix
Before
“The form is broken, fix it.”
After
“The registration form in @src/components/RegisterForm.tsx submits successfully but doesn’t redirect. Expected: redirect to /dashboard after 201 response. Actual: stays on /register. The API call succeeds (confirmed in Network tab).”
Example 3: Refactoring
Before
“Refactor this code.”
After
“Extract the validation logic from @src/routes/orders.ts lines 45-89 into a separate validateOrder function in @src/validators/order.ts. Keep the same behavior. The function should accept an OrderInput and return { valid: boolean, errors: string[] }.”
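The extracted function described in the "after" prompt would have a shape like this (the `OrderInput` fields and checks are illustrative assumptions; the contract — accept an `OrderInput`, return `{ valid, errors }` — is the prompt's):

```typescript
type OrderInput = { customerId: string; items: { sku: string; qty: number }[] };

// Same contract the prompt specifies: validity flag plus a list of error messages.
function validateOrder(order: OrderInput): { valid: boolean; errors: string[] } {
  const errors: string[] = [];
  if (!order.customerId) errors.push("customerId is required");
  if (order.items.length === 0) errors.push("order must contain at least one item");
  for (const item of order.items) {
    if (item.qty <= 0) errors.push(`invalid qty for ${item.sku}`);
  }
  return { valid: errors.length === 0, errors };
}
```

Specifying the return shape in the prompt is what keeps the refactor drop-in compatible with the call sites in @src/routes/orders.ts.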
The Pattern
Every good prompt rewrite adds the same things: specific files, expected behavior, data shapes, error cases, and verification criteria. The “after” prompts aren’t longer for the sake of length — every word narrows the solution space.
The 4 missing elements: When a prompt fails, it’s almost always missing one of: task scope, file locations, constraints, or verification steps. Add whichever is missing and retry.
Prompt Patterns That Work
Reusable patterns for common development tasks
The Exemplar Pattern
“Follow the pattern in [existing file].”

The most powerful prompt technique for consistency. Point the AI at an existing, well-written piece of code and ask it to follow the same pattern. The AI infers naming conventions, error handling, response formats, and architectural decisions from the example.
The Constraint Pattern
“Do X, but don’t do Y.”

Explicitly state what the AI should avoid. “Implement caching, but don’t use Redis — use in-memory Map with TTL.” Constraints prevent the AI from making assumptions that conflict with your architecture.
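The in-memory Map-with-TTL that the constraint example asks for can be sketched minimally like this (class and method names are illustrative):

```typescript
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  // Store a value that expires ttlMs milliseconds from now.
  set(key: string, value: V, ttlMs: number): void {
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }

  // Return the value, or undefined if missing or expired (expired entries are evicted).
  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```

Without the "don't use Redis" constraint, the model is likely to reach for the most common caching answer rather than the one that fits your deployment.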
The Role Pattern
“Act as a [specialist] and review/build X.”

“Act as a security auditor and review this authentication flow for vulnerabilities.” Assigning a role activates domain-specific knowledge and changes the model’s evaluation criteria.
The Verification Pattern
“After implementing, verify by [criteria].”

“After adding the endpoint, run the test suite and fix any failures.” This turns a single-shot generation into an iterative loop where the agent checks its own work. Properly structured, this loop can reduce manual debugging time severalfold.
The Incremental Pattern
“Start with [minimal version], then add [feature].”

“First create a basic CRUD endpoint with no auth. Once that works, add JWT authentication. Then add rate limiting.” Building incrementally lets you verify each layer before adding complexity.
Combine patterns: The best prompts use 2–3 patterns together. “Follow the pattern in @src/routes/posts.ts (exemplar). Add input validation with Zod (constraint). After implementing, run npm test and fix failures (verification).”
The Iterative Refinement Loop
First draft is never final — how to steer the AI toward what you want
Expect Iteration
Even with a perfect prompt, the first output rarely matches exactly what you want. This is normal. Prompt-driven development is an iterative conversation, not a one-shot generation. Plan for 2–3 rounds of refinement on complex tasks.
Effective Correction Prompts
// Be specific about what's wrong:

Bad: "That's not right, try again."
Good: "The error handling is wrong. On 404, return { error: 'Not found' }, not throw. See how @src/routes/posts.ts handles it."

Bad: "Make it better."
Good: "Extract the DB query into a separate function in @src/db/queries.ts so it can be reused by the admin endpoint."

Bad: "This is too slow."
Good: "This queries the DB inside a loop (N+1). Batch the user lookups into a single WHERE id IN (...) query instead."
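The N+1 correction asks for exactly this transformation, sketched here with a stub lookup function standing in for the database (all names are hypothetical; in Prisma the batched call would be a single `findMany` with an `in` filter):

```typescript
type Order = { id: string; userId: string };
type User = { id: string; name: string };

// One batched lookup (e.g. WHERE id IN (...)) instead of one query per order.
function attachUsers(
  orders: Order[],
  findUsersByIds: (ids: string[]) => User[],
): { order: Order; user: User | undefined }[] {
  const ids = [...new Set(orders.map((o) => o.userId))];
  const users = new Map(findUsersByIds(ids).map((u) => [u.id, u] as [string, User]));
  return orders.map((order) => ({ order, user: users.get(order.userId) }));
}
```

Naming the anti-pattern ("N+1") and the fix ("WHERE id IN") in the correction prompt is what makes this refactor unambiguous to the model.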
The Refinement Workflow
Round 1: Generate the initial implementation. Review structure and approach. Is the overall direction right?

Round 2: Fix specific issues. Wrong error handling? Missing edge case? Point to the exact problem and the desired behavior.

Round 3: Polish. Naming, comments, test coverage, performance. These are the details that make code production-ready.
When to Start Over
If Round 2 corrections are making things worse, the fundamental approach is wrong. Don’t iterate on a bad foundation. Revert, rethink your prompt, and start fresh. A new prompt with better context is faster than fixing a broken implementation.
Key insight: The skill isn’t writing the perfect first prompt. It’s recognizing what’s wrong with the output and writing a precise correction. Good developers iterate faster because they diagnose problems faster.
Prompts as Team Artifacts
Shared prompt patterns accelerate the entire team
Prompt Libraries
When you find a prompt pattern that works well for your project, save it. Create a shared library of prompt templates for common tasks: new endpoint, new component, migration, test suite, code review. New team members get productive faster because they inherit proven patterns.
Prompts in Code Review
Include the prompt that generated the code in the PR description. This gives reviewers context about intent: what was asked for, what constraints were specified, and what the AI was told to do. It makes review faster and catches specification gaps.
Shared Vocabulary
Teams that use consistent prompt patterns produce more consistent code. When everyone uses the same three-layer structure, the same decomposition approach, and the same verification patterns, the codebase stays coherent even with multiple developers using AI independently.
Template Example
// Team prompt template: New API Endpoint

Context: [stack from rules file]
Endpoint: [METHOD] [path]
Auth: [required role or public]
Input: [request body/params with types]
Output: [response shape with status codes]
Errors: [error cases and responses]
Pattern: Follow @[existing similar endpoint]
Tests: Add to @[test file location]
Verify: Run npm test after implementation
The multiplier: A good prompt template used by 5 developers saves more time than one developer writing perfect prompts. Invest in templates that encode your team’s standards, patterns, and quality expectations.
The Prompt-Driven Developer’s Checklist
A practical reference for every AI coding session
Before You Prompt
Is the task small enough for one prompt? If not → decompose first
Do I have an example to reference? If yes → use the exemplar pattern
Are my rules files up to date? If not → update before starting
Am I in the right mode? Completion / Chat / Agent (Ch 5)
While You Prompt
Layer 1: Context provided? Stack, framework, relevant files
Layer 2: Task specified? Expected behavior, data shapes, edge cases
Layer 3: Constraints stated? What not to do, patterns to follow, tests
File references included? @ mentions for all relevant files
After You Get Output
Does the approach make sense? Review structure before details
Are there obvious errors? Wrong imports, missing error handling
Does it match the constraints? Correct patterns, no forbidden elements
Do I understand every line? Never accept code you can't explain
The meta-skill: Prompt-driven development isn’t about memorizing frameworks. It’s about developing the habit of thinking precisely about what you want before asking for it. This precision transfers to writing better tickets, better documentation, and better code — even without AI.