Ch 4 — Use Case Selection & Prioritization

Automation vs augmentation, complexity scoring, the first use case trap, and anti-patterns that guarantee failure
High level: Identify → Score → Automation vs Augmentation → Anti-Patterns → Validate → Launch
The First Use Case Trap
Why 74% of enterprises want AI revenue impact but only 20% achieve it
The Gap
As of 2026, 74% of enterprises want revenue impact from AI, but only 20% have achieved it. Just 29% of executives can confidently measure AI returns, and only 16% have scaled initiatives enterprise-wide. The gap almost always traces back to the first use case. Teams pick the wrong one — too broad, too complex, too dependent on messy data — and the pilot fails. That failure poisons the organization's appetite for AI for 12–18 months. The first use case isn't just a technical decision; it's a political and organizational one. It must succeed visibly enough to earn the right to do a second.
The Reality Check
Enterprise AI adoption (2026):
  Want revenue impact: 74%
  Actually achieved it: 20%
  Can measure returns: 29%
  Scaled enterprise-wide: 16%
AI budget trend:
  Doubling in 2026
  30% targeting agentic AI
Source: Olakai ROI Playbook, 2026
Why it matters: The first use case is an audition, not a project. If it fails, the organization doesn't learn from it — it concludes "AI doesn't work here" and moves on.
The Five-Dimension Scoring Framework
How to evaluate use cases before committing resources
Scoring Criteria
Every candidate use case should be scored across five dimensions before any development begins.
Business value: what's the measurable impact in dollars, time saved, or error reduction?
Data readiness: is the required data accessible, clean, and fresh enough? (See Ch 3 scorecard.)
Technical feasibility: can current models and tools handle this task at the required accuracy?
Integration complexity: how many systems must the agent connect to, and what's the API maturity?
Risk exposure: what happens when the agent is wrong — is it a minor inconvenience or a compliance violation?
Score each dimension 1–5, with 5 as the most favorable answer (clean data, few integrations, low risk). Multiply business value by the average of the other four. The result is your adjusted priority score.
Scoring Matrix
Dimension                Weight        Score (1–5)
Business value           Multiplier    ___
Data readiness           Averaged      ___
Technical feasibility    Averaged      ___
Integration complexity   Averaged      ___
Risk exposure            Averaged      ___

Priority = Value × avg(other four)
Score > 15: strong candidate
Score 10–15: needs mitigation plan
Score < 10: defer or redesign
Rule of thumb: If any single dimension scores 1, the use case is a no-go regardless of total score. A 5 in business value can't compensate for a 1 in data readiness.
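To make the arithmetic concrete, here is a minimal sketch of the scoring framework in Python. The dimension names, the Priority = Value × avg(other four) formula, the thresholds, and the single-1 no-go rule come from the matrix above; the function itself and the example numbers are illustrative, not a prescribed tool.

```python
# Illustrative sketch of the five-dimension scoring framework (not a prescribed tool).
from statistics import mean


def priority_score(business_value: int,
                   data_readiness: int,
                   technical_feasibility: int,
                   integration_complexity: int,
                   risk_exposure: int) -> tuple[float, str]:
    """Score a candidate use case. All inputs are 1-5, where 5 is most
    favorable (clean data, few integrations, low risk)."""
    others = [data_readiness, technical_feasibility,
              integration_complexity, risk_exposure]

    # Rule of thumb: any single dimension at 1 is a hard no-go.
    if business_value == 1 or 1 in others:
        return 0.0, "no-go (a dimension scored 1)"

    score = business_value * mean(others)  # Priority = Value x avg(other four)

    if score > 15:
        verdict = "strong candidate"
    elif score >= 10:
        verdict = "needs mitigation plan"
    else:
        verdict = "defer or redesign"
    return score, verdict


# Example: high value (5), decent data (4), feasible (4), many systems (2), low risk (4)
print(priority_score(5, 4, 4, 2, 4))  # -> (17.5, 'strong candidate')
```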
Automation vs Augmentation
The most important distinction in enterprise AI
Two Modes
Automation replaces a human step entirely: the agent receives input, processes it, and produces output with no human in the loop. Augmentation assists a human: the agent drafts, suggests, or pre-processes, but a human makes the final decision. The distinction matters because automation requires much higher accuracy thresholds (typically 98%+), comprehensive error handling, and regulatory approval in many industries. Augmentation can tolerate lower accuracy (85%+) because the human catches errors. Most enterprises should start with augmentation — it delivers value immediately, builds trust, and generates the training data needed to eventually automate.
Decision Guide
Automation (agent decides):
  Accuracy needed: ≥ 98%
  Error cost: must be low or recoverable
  Regulatory: pre-approved
  Best for: high-volume, low-stakes
  Example: invoice data extraction

Augmentation (agent assists):
  Accuracy needed: ≥ 85%
  Error cost: human catches mistakes
  Regulatory: human remains accountable
  Best for: complex, high-stakes
  Example: contract review drafting
Key insight: The path to automation runs through augmentation. Start by assisting humans, measure accuracy over thousands of decisions, and automate only when the data proves the agent is reliable enough.
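The decision guide reduces to a couple of threshold checks. Below is a minimal sketch, assuming accuracy has been measured over an augmentation pilot; the 98% and 85% thresholds come from the table above, while the function name and the boolean inputs for error recoverability and regulatory pre-approval are illustrative assumptions.

```python
# Illustrative sketch of the automation-vs-augmentation decision guide.
AUTOMATION_ACCURACY = 0.98    # agent decides, no human in the loop
AUGMENTATION_ACCURACY = 0.85  # agent assists, human makes the final call


def deployment_mode(measured_accuracy: float,
                    error_is_recoverable: bool,
                    regulator_preapproved: bool) -> str:
    """Suggest a deployment mode from pilot accuracy and error stakes."""
    if (measured_accuracy >= AUTOMATION_ACCURACY
            and error_is_recoverable
            and regulator_preapproved):
        return "automation"
    if measured_accuracy >= AUGMENTATION_ACCURACY:
        return "augmentation"
    return "not ready: keep the task manual or improve the agent"


# Invoice extraction measured at 99.1% over a pilot, with reversible errors:
print(deployment_mode(0.991, error_is_recoverable=True, regulator_preapproved=True))
# -> automation
```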
Four Anti-Patterns That Guarantee Failure
Patterns that look reasonable but reliably kill projects
The Killers
1. Boil the ocean: "Automate the entire customer service department." No scope boundary, no measurable milestone, no chance of shipping in 90 days.
2. Demo-driven selection: The CEO saw a demo at a conference and wants "that." The use case looks impressive but has no process owner, no data pipeline, and no success metric.
3. Solution looking for a problem: "We bought the platform, now find use cases." Technology-first selection inverts the value chain.
4. Moonshot first: Starting with the hardest, most cross-functional problem to "prove AI works." If it fails (and it will), the organization concludes AI doesn't work — when the real conclusion is that the use case was wrong.
Pattern Recognition
"Automate all of customer service" → No scope = no ship date "CEO saw it at Davos" → No owner = no accountability "We bought Copilot, find uses" → Solution-first = value-last "Let's do the hardest one first" → Guaranteed failure = AI is dead "Extract dates from 200 contracts/week" → Scoped, measurable, shippable
Rule of thumb: If you can't describe the use case, its success metric, and its ship date in three sentences, it's not a use case — it's a wish.
The Three Prerequisites
Actionable outputs, available data, measurable outcomes
Non-Negotiables
A valid enterprise AI use case requires three foundational elements.
Actionable outputs: the agent's output must plug directly into an existing decision or workflow. If someone has to manually interpret or reformat the output, you've built a report, not an agent.
Available data: the data the agent needs must be obtainable within a reasonable timeframe — not "we'll build a data lake first." If the data doesn't exist or can't be accessed via API, the use case is blocked regardless of model capability.
Measurable outcomes: there must be a specific KPI that changes when the agent works correctly — processing time, error rate, cost per transaction, customer satisfaction score. "Improve efficiency" is not a KPI.
Validation Checklist
Prerequisite 1: Actionable output
  □ Output feeds directly into workflow
  □ No manual reformatting needed
  □ Clear handoff point defined
Prerequisite 2: Available data
  □ Data exists today (not "planned")
  □ Accessible via API or export
  □ Quality score ≥ 3 (Ch 3 scorecard)
Prerequisite 3: Measurable outcome
  □ Specific KPI identified
  □ Baseline measurement exists
  □ Target improvement defined
Key insight: These three prerequisites are binary gates, not sliding scales. If any one is missing, the use case isn't ready — no matter how exciting the technology or how enthusiastic the sponsor.
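Because the prerequisites are binary gates, they translate naturally into a pre-flight check. The sketch below mirrors the validation checklist above; the UseCaseReadiness dataclass and its field names are illustrative assumptions, not a standard schema.

```python
# Illustrative pre-flight check for the three prerequisite gates.
from dataclasses import dataclass


@dataclass
class UseCaseReadiness:
    # Prerequisite 1: actionable output
    output_feeds_workflow: bool
    no_manual_reformatting: bool
    handoff_point_defined: bool
    # Prerequisite 2: available data
    data_exists_today: bool
    accessible_via_api_or_export: bool
    data_quality_score: int          # Ch 3 scorecard, 1-5
    # Prerequisite 3: measurable outcome
    kpi_identified: bool
    baseline_measured: bool
    target_improvement_defined: bool

    def gates(self) -> dict[str, bool]:
        return {
            "actionable_output": (self.output_feeds_workflow
                                  and self.no_manual_reformatting
                                  and self.handoff_point_defined),
            "available_data": (self.data_exists_today
                               and self.accessible_via_api_or_export
                               and self.data_quality_score >= 3),
            "measurable_outcome": (self.kpi_identified
                                   and self.baseline_measured
                                   and self.target_improvement_defined),
        }

    def ready(self) -> bool:
        # Binary gates: every prerequisite must pass, no partial credit.
        return all(self.gates().values())
```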
The Safe First Bets
Use case categories with the highest success rates
Proven Categories
Across enterprise deployments, certain use case categories consistently succeed as first bets. Document processing (invoice extraction, contract analysis, claims handling) works because the input is structured, the output is verifiable, and accuracy is measurable per field. Internal knowledge Q&A (policy lookup, HR FAQ, IT troubleshooting) works because the data is owned, the stakes are low, and employee tolerance for imperfection is higher than customer tolerance. Data enrichment (lead scoring, categorization, tagging) works because it augments rather than replaces, and errors are caught downstream. The common thread: narrow scope, verifiable output, low blast radius.
Success Rate by Category
High success rate:
  Document processing (Ch 6)
  Internal knowledge Q&A
  Data enrichment & tagging
Moderate success rate:
  Customer service triage
  Code generation / review
  Report summarization
Low success rate (as first bet):
  Full customer service automation
  Cross-functional workflow orchestration
  Strategic decision support
Rule of thumb: Your first use case should be something where a wrong answer is annoying, not catastrophic. Save the high-stakes automation for use case #3 or #4, after you've built organizational trust.
The 90-Day Rule
If it can't show value in 90 days, it's the wrong first use case
Time-Boxing
Enterprise AI projects that don't show measurable value within 90 days almost never recover. The organizational patience window is short: sponsors lose interest, budgets get reallocated, and skeptics gain ammunition. The 90-day rule forces discipline:
Week 1–2: scope definition, data audit, success metric baseline.
Week 3–6: build MVP agent with 2–3 core capabilities.
Week 7–10: pilot with a small user group, measure against baseline.
Week 11–12: present results, decide go/no-go for production.
If the use case can't fit this timeline, it's either too broad (split it) or too dependent on prerequisites that aren't met (fix those first).
90-Day Sprint
Week 1–2: Scope & baseline
  Define success metric
  Audit data readiness
  Identify 5–10 pilot users
Week 3–6: Build MVP
  2–3 core agent capabilities
  Basic observability
  Human escalation path
Week 7–10: Pilot
  Small group, real workflows
  Measure vs baseline daily
Week 11–12: Decide
  Present results to sponsors
  Go/no-go for production
Key insight: The 90-day constraint isn't arbitrary — it's calibrated to organizational attention spans. A brilliant agent that takes 9 months to deliver is worth less than a good-enough agent that proves value in 12 weeks.
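For teams that want the plan on a calendar, here is a minimal sketch that converts the week ranges above into concrete dates from a kickoff day. The phase names and week boundaries follow the sprint plan; the helper function and the example kickoff date are illustrative.

```python
# Illustrative: turn the 90-day sprint phases into calendar dates.
from datetime import date, timedelta

PHASES = [
    ("Scope & baseline", 1, 2),
    ("Build MVP", 3, 6),
    ("Pilot", 7, 10),
    ("Decide (go/no-go)", 11, 12),
]


def sprint_calendar(kickoff: date) -> list[tuple[str, date, date]]:
    """Return (phase, start, end) for each phase of the 12-week sprint."""
    calendar = []
    for name, first_week, last_week in PHASES:
        start = kickoff + timedelta(weeks=first_week - 1)
        end = kickoff + timedelta(weeks=last_week) - timedelta(days=1)
        calendar.append((name, start, end))
    return calendar


# Example kickoff on a Monday:
for phase, start, end in sprint_calendar(date(2026, 1, 5)):
    print(f"{phase}: {start} -> {end}")
```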
Building the Use Case Portfolio
From first win to enterprise-wide AI strategy
The Portfolio Approach
One successful use case isn't a strategy — it's a proof point. The goal is to build a portfolio of use cases that compound in value. After the first win, expand in two directions: horizontal (apply the same pattern to adjacent teams — if invoice extraction worked for AP, try it for procurement) and vertical (deepen the same workflow — from extraction to validation to routing). Each new use case benefits from the infrastructure, integrations, and organizational trust built by previous ones. The SEE-MEASURE-DECIDE-ACT framework from Olakai provides a structured approach: map the AI ecosystem, define success metrics, prioritize ruthlessly, and execute in time-boxed sprints.
Expansion Strategy
Use case #1: Invoice extraction (AP)
  ↓ horizontal
Use case #2: PO extraction (Procurement)
  ↓ vertical
Use case #3: Invoice validation + routing
  ↓ horizontal
Use case #4: Contract date extraction (Legal)
  ↓ vertical
Use case #5: Contract risk flagging
Each builds on the previous use case's infrastructure.
Key insight: The best enterprise AI strategies look boring from the outside — incremental, compounding, and relentlessly focused on measurable value. The flashy moonshot strategies make great conference talks and terrible business outcomes.