Ch 2 — The AI Product Landscape

Where AI products actually work today — and the taxonomy every PM needs to navigate the market.
The AI Technology Stack
Three layers that every AI product sits on top of
Infrastructure Layer
At the bottom sits compute infrastructure — the GPUs, TPUs, and cloud services that power everything. NVIDIA dominates with roughly 95% of the AI GPU market. Cloud providers (AWS, Azure, GCP) package this compute into managed services.

This layer is a commodity play. Unless you’re building chips or running data centers, you’re a customer here, not a competitor. As a PM, your decisions at this layer are about cost, latency, and vendor lock-in — not differentiation.
Model Layer
The middle layer is foundation models and APIs. OpenAI (GPT-4o, o3), Anthropic (Claude), Google (Gemini), Meta (Llama), and others provide the core intelligence. You access them via API or deploy open-weight models on your own infrastructure.

This is the most dynamic layer. Model capabilities improve quarterly, and per-token pricing for equivalent capability falls 10–50x per year. What was impossible 12 months ago is now a commodity API call.
Application Layer
The top layer is where products live. ChatGPT (200M+ weekly users), GitHub Copilot (over a million paid subscribers), Salesforce Einstein, Notion AI. This is where PMs operate — combining model capabilities with user needs, data, and business logic to create value.

The AI software market is approximately $120 billion, split roughly between enterprise AI ($60B), consumer AI ($30B), and developer tools ($30B).
AI-Enhanced Products
Adding AI to existing products — the most common pattern today
What AI-Enhanced Means
An AI-enhanced product is an existing product that adds AI capabilities on top of its core value proposition. Remove the AI and the product still works — it just works less well.

Examples:
Gmail Smart Compose — Gmail worked fine before autocomplete. The AI makes composing faster but isn’t the reason you use Gmail.
Notion AI — Notion is a workspace tool. AI summarization and writing assistance are add-ons to the core note-taking product.
Salesforce Einstein — The CRM existed for 20 years. AI-powered lead scoring and forecasting are enhancements, not the product itself.
PM Implications
Advantages:
• Built-in distribution — existing user base gets the feature immediately
• Lower risk — if the AI fails, the core product still works
• Incremental value — easier to measure lift (before/after AI)

Challenges:
• AI is a feature, not a moat — competitors add the same capability quickly
• Integration complexity — AI must fit existing workflows without disruption
• User expectations — existing users expect the product to work the same way; AI surprises can frustrate
PM decision: AI-enhanced products succeed when the AI feature reduces friction in an existing workflow. The question is: “What existing user action can AI make 10x faster or eliminate entirely?” If the answer is “nothing meaningful,” the AI feature is a gimmick, not a product improvement.
AI-Native Products
Products where AI IS the product — remove it and nothing remains
What AI-Native Means
An AI-native product is built from the ground up with AI as the core value proposition. Remove the AI and the product ceases to exist. There is no “before AI” version.

Examples:
ChatGPT / Claude — The entire product is a language model interface. No model, no product.
Midjourney — Image generation is the product. Without the diffusion model, there’s nothing.
GitHub Copilot — Code completion powered by models. The editor is a wrapper; the AI is the value.
Perplexity — AI-native search. The product is the model’s ability to synthesize answers from the web.
PM Implications
Advantages:
• Designed around AI strengths and limitations from day one
• UX can embrace probabilistic behavior natively (confidence indicators, regeneration, feedback)
• Faster iteration — no legacy constraints
• Stronger moat if you build proprietary data loops

Challenges:
• No fallback — if the AI fails, the entire product fails
• Model dependency — your product quality is bounded by model capability
• Commoditization risk — the model provider can launch a competing product at any time; each major OpenAI release has absorbed entire categories of thin GPT-wrapper startups
The wrapper trap: An AI-native product that is just a thin UI over a foundation model API has no moat. The moat comes from proprietary data, unique workflows, domain-specific evaluation, or network effects. If your entire product can be replicated by a competitor in a weekend with the same API key, you don’t have a product — you have a demo.
The Autonomy Spectrum
From copilots to fully autonomous agents — three levels of AI product autonomy
Level 1 — Assisted (Copilots)
AI acts as a digital assistant that requires a human trigger for every action. The human is in full control; the AI suggests, the human decides.

Examples: GitHub Copilot (suggests code, developer accepts/rejects), Grammarly (suggests edits, writer decides), Smart Compose (suggests text, user accepts).

PM focus: Suggestion quality, acceptance rate, time saved per interaction. The key metric is how often users accept the AI’s suggestion.
Level 2 — Augmented (Collaborators)
AI handles complex sub-tasks with human oversight. The AI does meaningful work autonomously but a human reviews critical decisions.

Examples: AI that drafts insurance quotes for human review, fraud detection systems that flag transactions for analyst review, AI that generates marketing copy for editor approval.

PM focus: Automation rate (% of cases handled without escalation), false positive rate, review burden on humans. The key metric is how much human effort the AI eliminates while maintaining quality.
Level 3 — Agentic (Autonomous)
AI perceives, plans, and executes multi-step workflows independently. Minimal human intervention. The AI sets sub-goals, uses tools, and adapts to outcomes.

Examples: AI agents that manage ad campaigns end-to-end, customer service agents that resolve tickets autonomously, coding agents that implement features from issue descriptions.

PM focus: Task completion rate, error recovery, cost per resolved task, escalation rate. The key metric is end-to-end task resolution without human intervention.

Some analysts predict that agentic AI will replace 20–30% of standard SaaS UI interactions by late 2026. This is the frontier — and the highest risk.
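The key metrics for the three levels can all be computed from the same interaction log. A minimal Python sketch, assuming a hypothetical event format (the field names and numbers are illustrative, not from any real product; the same `rate` helper would give a Level 2 automation rate over "auto" vs. "escalated" review events):

```python
# Hypothetical interaction log for one AI feature; fields are illustrative.
events = [
    {"level": 1, "outcome": "accepted", "seconds_saved": 12},
    {"level": 1, "outcome": "rejected", "seconds_saved": 0},
    {"level": 3, "outcome": "auto", "cost_usd": 0.40},
    {"level": 3, "outcome": "escalated", "cost_usd": 0.40},
]

def rate(events, level, outcome):
    """Share of a level's events that ended with the given outcome."""
    pool = [e for e in events if e["level"] == level]
    return sum(e["outcome"] == outcome for e in pool) / len(pool)

acceptance_rate = rate(events, 1, "accepted")   # Level 1: how often users accept
completion_rate = rate(events, 3, "auto")       # Level 3: end-to-end resolution
escalation_rate = rate(events, 3, "escalated")  # Level 3: handed back to humans

# Cost per resolved task: total spend divided by autonomously resolved tasks.
level3 = [e for e in events if e["level"] == 3]
cost_per_resolved = sum(e["cost_usd"] for e in level3) / sum(
    e["outcome"] == "auto" for e in level3
)
```

The point of instrumenting all levels the same way is that a product can move along the spectrum: the log that measures a copilot's acceptance rate today becomes the baseline for an agent's escalation rate tomorrow.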
The autonomy trade-off: Higher autonomy means higher leverage but higher risk. A copilot that suggests wrong code wastes 5 seconds. An autonomous agent that deploys wrong code to production causes an outage. As a PM, you choose the autonomy level based on error cost, not technical capability. Just because you can automate fully doesn’t mean you should.
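That trade-off can be made concrete as an expected error cost per task. A hedged sketch with hypothetical figures (the error rates and dollar costs are invented for illustration):

```python
def expected_error_cost(error_rate, cost_per_error):
    """Expected cost of AI mistakes per task at a given autonomy level."""
    return error_rate * cost_per_error

# Hypothetical figures for one workflow; the point is the asymmetry,
# not the exact numbers.
copilot_cost = expected_error_cost(0.30, 0.01)    # bad suggestion: ~5s wasted
agent_cost = expected_error_cost(0.02, 50_000.0)  # bad autonomous deploy: outage
```

Even at a 15x lower error rate, the agent's expected error cost per task dwarfs the copilot's, which is exactly why error cost, not technical capability, should set the autonomy level.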
Horizontal vs. Vertical AI
General-purpose tools vs. industry-specific solutions — two very different product strategies
Horizontal AI
Horizontal AI products solve problems that exist across every industry. Writing, coding, search, data analysis, customer support — these needs are universal.

Examples: ChatGPT, Jasper (writing), Cursor (coding), Intercom Fin (support), Tableau AI (analytics).

Characteristics:
• Massive addressable market
• Intense competition (every AI lab targets these use cases)
• Winner-take-most dynamics
• Differentiation through UX, integrations, and distribution
• Vulnerable to platform plays (OpenAI, Google can launch competing products overnight)
Vertical AI
Vertical AI products go deep into a single industry. They combine AI with domain-specific data, workflows, regulations, and terminology.

Examples: Harvey (legal AI), Abridge (medical documentation), Vanta (compliance), Viz.ai (radiology), Hebbia (financial analysis).

Characteristics:
• Smaller addressable market per vertical
• Higher barriers to entry (domain expertise, regulatory knowledge, specialized data)
• Stickier products (deeply integrated into industry workflows)
• Higher willingness to pay (solving expensive industry-specific problems)
• Harder for generalist AI labs to replicate
PM strategy: Horizontal AI competes on breadth and speed. Vertical AI competes on depth and trust. If you’re building horizontal, you need distribution advantages or a data moat. If you’re building vertical, you need domain expertise and regulatory understanding that generalist competitors lack. The worst position is “horizontal product with vertical pricing.”
The Seven Product Categories
A practical taxonomy of what AI products actually do
Generation & Creation
Text generation (ChatGPT, Jasper), image generation (Midjourney, DALL-E), code generation (Copilot, Cursor), video generation (Sora, Runway), audio generation (ElevenLabs, Suno).

PM challenge: Quality control, brand safety, copyright, and the “uncanny valley” of almost-but-not-quite-right outputs.
Analysis & Insights
Predictive analytics (demand forecasting, churn prediction), anomaly detection (fraud, security threats), pattern recognition (medical imaging, quality inspection).

PM challenge: Explainability. Users need to understand why the AI flagged something, not just that it did.
Search & Retrieval
Semantic search (Perplexity, Glean), recommendation engines (Netflix, Spotify, Amazon), knowledge retrieval (enterprise RAG systems).

PM challenge: Relevance vs. serendipity. Too-precise search creates filter bubbles; too-broad search frustrates users.
Conversation & Interaction
Customer service bots (Intercom Fin, Ada), virtual assistants (Alexa, Siri), sales assistants (Drift, Qualified).

PM challenge: Knowing when to hand off to humans. The worst experience is an AI that insists on helping when it clearly can’t.
Automation & Workflow
Document processing (invoice extraction, contract review), workflow automation (Zapier AI, Make), robotic process automation (UiPath, Automation Anywhere).

PM challenge: Exception handling. Automation works for the 80% case; the 20% of exceptions determines whether users trust the system.
Decision Support & Agents
Decision support (clinical decision systems, financial advisors) and autonomous agents (Devin for coding, AI SDRs for sales outreach).

PM challenge: Accountability. When an AI makes a decision, who is responsible for the outcome? This is both a product design and a legal question.
Key insight: Most successful AI products combine 2–3 of these categories. A customer service bot (conversation) that searches a knowledge base (retrieval) and processes refunds (automation) is more valuable than any single capability alone.
Where AI Products Fail
The categories where AI consistently under-delivers — and why
High-Stakes Decisions Without Oversight
AI products that make irreversible, high-consequence decisions without human review consistently fail or cause harm:

Autonomous hiring/firing — Bias in training data leads to discriminatory outcomes. Amazon scrapped its AI recruiting tool in 2018 after it penalized resumes containing the word “women’s.”
Autonomous medical diagnosis — AI can assist radiologists (augmented), but fully autonomous diagnosis without physician review remains too risky for deployment.
Autonomous financial trading — Flash crashes and cascading failures when AI systems interact unpredictably.
Common Failure Patterns
1. Solution looking for a problem — “We have AI, what should we do with it?” instead of “We have a problem, can AI solve it?”

2. Underestimating data requirements — The model needs 100K labeled examples but the company has 500.

3. Ignoring the last mile — The model works in the lab but the integration into existing workflows is so clunky that users bypass it.

4. Overpromising autonomy — Marketing says “fully automated” but reality requires constant human babysitting.

5. No feedback mechanism — The product launches, users interact, but there’s no way for the model to learn from those interactions. Performance stagnates.
PM guardrail: Before greenlighting any AI product, ask: “What happens when the AI is wrong?” If the answer is “nothing recoverable” — a wrong medical diagnosis, a discriminatory hiring decision, a financial loss — then the product needs human-in-the-loop by design, not as an afterthought.
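"Human-in-the-loop by design" can be sketched as a gate that every AI decision passes through before execution. The function and thresholds below are illustrative assumptions, not a prescribed API:

```python
def route_decision(confidence, *, reversible, confidence_floor=0.9):
    """Gate for AI decisions: auto-execute only reversible, high-confidence
    actions; everything else goes to a human reviewer."""
    if not reversible or confidence < confidence_floor:
        return "human_review"
    return "auto_execute"

# A high-confidence refund (recoverable if wrong) can run automatically.
refund = route_decision(0.97, reversible=True)
# A hiring decision is irreversible in practice: always reviewed,
# no matter how confident the model is.
hiring = route_decision(0.99, reversible=False)
```

Note that reversibility trumps confidence in this gate: no confidence score, however high, lets an unrecoverable action skip review.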
Navigating the Landscape as a PM
A decision framework for positioning your AI product
The Four Questions
Before building any AI product, answer these four positioning questions:

1. Enhanced or Native?
Are you adding AI to an existing product or building something new? This determines your distribution strategy, risk profile, and competitive dynamics.

2. What autonomy level?
Copilot, collaborator, or autonomous agent? This is driven by error cost, not technical capability. Start lower than you think.

3. Horizontal or Vertical?
General-purpose or industry-specific? This determines your go-to-market, pricing, and competitive moat.

4. Which product categories?
Generation, analysis, search, conversation, automation, or decision support? Most products combine 2–3.
The Moat Checklist
In a landscape where model capabilities are commoditizing rapidly, your moat must come from somewhere else:

✓ Proprietary data — Do you have data competitors can’t access?
✓ Feedback loops — Does usage make your product better?
✓ Workflow integration — Are you embedded in a process that’s painful to switch away from?
✓ Domain expertise — Do you understand the industry better than generalist competitors?
✓ Network effects — Does each user make the product more valuable for others?
✓ Distribution — Can you reach customers that competitors can’t?

If you can’t check at least two of these, you’re building on borrowed time.
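The checklist above can be run as a simple scoring pass. A sketch with a hypothetical product profile (the example product and its checked moats are invented for illustration):

```python
MOATS = [
    "proprietary_data", "feedback_loops", "workflow_integration",
    "domain_expertise", "network_effects", "distribution",
]

def claimed_moats(product):
    """Return the moats a product can honestly check off."""
    return [m for m in MOATS if product.get(m, False)]

# Hypothetical vertical legal-AI product.
product = {"proprietary_data": True, "domain_expertise": True,
           "network_effects": False}

moats = claimed_moats(product)
durable = len(moats) >= 2   # fewer than two: building on borrowed time
```

The value of writing it down is the forcing function: each moat is a yes/no claim you must be able to defend, not a vibe.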
The bottom line: The AI product landscape is vast and evolving quarterly. Your job as a PM is not to chase every new capability but to find the intersection of genuine user need, defensible competitive position, and realistic technical feasibility. The chapters ahead will show you how to evaluate each of these dimensions systematically.