Ch 21 — The AI Landscape: Who Builds What and Why It Matters

Mapping the players, platforms, and power dynamics shaping the AI industry
[Diagram: the six-layer AI stack. Chips → Cloud → Models → Tools → Apps → Enterprise]
The AI Stack: Six Layers of Value
Understanding who captures value at each layer of the AI industry
The Stack
The AI industry is structured in layers, each building on the one below:

Layer 1: Chips — NVIDIA, AMD, custom silicon (Google TPUs, Amazon Trainium). The physical foundation. $50B+ TAM.
Layer 2: Cloud infrastructure — AWS, Azure, GCP. The compute layer. $20B+ AI revenue combined.
Layer 3: Foundation models — OpenAI, Anthropic, Google, Meta, Mistral. The intelligence layer.
Layer 4: Developer tools — LangChain, Hugging Face, vector databases. The building layer.
Layer 5: Applications — ChatGPT, Cursor, Jasper, Harvey. The product layer. $120B+ TAM.
Layer 6: Enterprise platforms — Salesforce Einstein, Microsoft Copilot. The integration layer.
Where Value Concentrates
The AI platform market generated more than $170 billion in revenue in 2026 and is projected to double by 2030. But value is not distributed evenly. The infrastructure layer (chips + cloud) captures the most reliable revenue because everyone needs compute. The model layer captures the most attention but faces commoditization pressure. The application layer captures the most enterprise value because it solves specific business problems.
Key insight: Understanding the stack helps you make better vendor decisions. When you buy an AI product, you’re implicitly choosing a position in this stack. A Salesforce Einstein deployment locks you into the Salesforce ecosystem. A custom LangChain application gives you flexibility but requires engineering talent. Know which layer you’re buying into and what that means for your optionality.
The Foundation Model Players
OpenAI, Anthropic, Google, Meta, Mistral — and what differentiates them
The Big Three (Closed-Source)
OpenAI — ~$850B valuation, ~$4B ARR targeting $30B in 2026. Market pioneer with GPT-5. Largest ecosystem (ChatGPT: 200M+ users, $500M+ revenue). Strongest brand recognition and developer adoption. Deepest Microsoft partnership.

Anthropic — ~$380B valuation, ~$1B ARR targeting $12B. Safety-focused. Claude leads in document analysis (94.2% accuracy), code review, and hallucination resistance. Largest effective context window. Growing enterprise adoption, especially in regulated industries.

Google DeepMind — $5B+ AI revenue. Gemini models with native multimodal architecture and 1M token context. Deepest integration with Google Cloud. Cost leader with Gemini Flash ($0.075/M tokens).
The Challengers
Meta (LLaMA) — Open-source leader. LLaMA 4 is free to use and fine-tune. Strategic play: commoditize the model layer to drive engagement on Meta’s platforms. Rapidly closing the gap with closed-source models.

Mistral AI — $10B+ valuation. European open-weight models known for efficiency. Strong performance relative to size. Differentiates on sovereignty for European enterprises.

DeepSeek — Chinese open-source challenger that introduced dramatic training cost efficiencies. Demonstrated that frontier-quality models can be built for a fraction of the cost, challenging the “scale requires billions” narrative.

xAI (Elon Musk) — $50B+ valuation. Grok models with real-time data access through X (Twitter) integration.
Key insight: The model layer is commoditizing faster than expected. DeepSeek proved that training costs can be 10–20× lower than assumed. Meta is giving away frontier-quality models for free. For enterprise buyers, this means model choice is becoming less important than the surrounding infrastructure — your RAG pipeline, your data quality, your prompt engineering, and your integration architecture matter more than which model you use.
The Cloud AI Platforms
AWS, Azure, GCP — and why your cloud choice is now an AI choice
The Big Three Clouds
AWS — ~$10B AI revenue, ~40% cloud AI market share. Broadest service catalog (Bedrock for model access, SageMaker for ML ops, custom Trainium chips). Most enterprise customers. Offers access to multiple foundation models through a single API.

Microsoft Azure — ~$5B AI revenue, ~25% share. Deepest OpenAI integration (exclusive cloud partner). Copilot ecosystem across Office, GitHub, and Dynamics. Strongest play for organizations already in the Microsoft ecosystem.

Google Cloud (GCP) — ~$5B AI revenue, ~25% share. Native Gemini integration. Custom TPU chips. Vertex AI platform. Strongest for organizations with heavy data analytics workloads (BigQuery + AI).
Why It Matters
Your cloud provider is increasingly your default AI provider. AWS customers naturally gravitate to Bedrock. Azure customers to OpenAI. GCP customers to Gemini. Switching costs are high because AI workloads are deeply integrated with data storage, networking, identity management, and compliance infrastructure. The cloud choice you made five years ago is now constraining your AI choices.
Key insight: Multi-cloud AI strategies are theoretically appealing but operationally expensive. Most enterprises will standardize on one primary cloud for AI and use a secondary for specific use cases (e.g., GCP for data analytics, Azure for Copilot). The exception: organizations with strict data sovereignty requirements may need region-specific cloud deployments. Evaluate your cloud AI strategy as part of your broader cloud strategy, not in isolation.
Open Source vs. Closed Source
The strategic choice that defines your AI architecture
Closed-Source Advantages
Highest capability — GPT-5, Claude 4.6, and Gemini 3 remain the most capable models for complex reasoning, creative tasks, and broad general knowledge.
Managed infrastructure — No GPU procurement, no model serving, no operational overhead. Pay per token and scale instantly.
Continuous improvement — Models improve automatically without your intervention. Last year’s limitations may be solved in the next update.
Enterprise features — SOC 2 compliance, data isolation guarantees, SLAs, and dedicated support.
Open-Source Advantages
Data sovereignty — No data leaves your infrastructure. Critical for regulated industries (healthcare, defense, financial services).
Cost at scale — At high volume (>10M tokens/day), self-hosted open-source models are 5–10× cheaper than closed-source APIs (Chapter 15).
Customization — Full control over fine-tuning, quantization, and serving configuration.
No vendor lock-in — Switch between LLaMA, Mistral, and other models without changing your infrastructure.
Transparency — Inspect model weights, understand behavior, and audit for compliance.
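The 5–10× cost advantage cited above is ultimately a utilization question, and it can be sketched with back-of-the-envelope unit economics. Every number below is an illustrative assumption for the sketch, not a vendor quote:

```python
# Back-of-the-envelope unit economics: $ per 1M tokens, API vs. self-hosted.
# All figures are illustrative assumptions, not vendor quotes.

api_price_per_m = 3.00           # assumed closed-source API price, $/1M tokens
gpu_hourly = 8.00                # assumed cost of a self-hosted GPU node, $/hour
throughput_tok_per_sec = 5_000   # assumed sustained throughput at full utilization

tokens_per_hour = throughput_tok_per_sec * 3600               # 18M tokens/hour
selfhost_price_per_m = gpu_hourly / (tokens_per_hour / 1e6)   # ~$0.44 per 1M tokens

print(f"API:       ${api_price_per_m:.2f} per 1M tokens")
print(f"Self-host: ${selfhost_price_per_m:.2f} per 1M tokens")
print(f"Ratio:     {api_price_per_m / selfhost_price_per_m:.1f}x cheaper at full utilization")
```

Under these assumptions the self-hosted node is roughly 7× cheaper per token, but only while it is kept busy: an idle GPU still costs $8 an hour, which is why the advantage appears at sustained high volume (>10M tokens/day) and evaporates for bursty or low-volume workloads.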
Key insight: This is not an either/or decision. The winning enterprise strategy is hybrid: closed-source APIs for complex reasoning and general-purpose tasks where capability matters most, open-source models for high-volume production tasks, sensitive data processing, and use cases requiring full control. The ratio shifts over time as open-source models improve — what required GPT-4 last year may run on LLaMA today.
The Investment Landscape
Where the money is going — and what it signals
Unprecedented Capital Concentration
AI venture funding reached $226 billion in 2025 — 48% of all global venture capital. In February 2026 alone, $189 billion was invested, a 780% year-over-year jump. But the concentration is extreme: three companies captured 83% of February 2026 funding — OpenAI ($110B), Anthropic ($30B), and Waymo ($16B). The U.S. captured 92% of global AI funding.
The Barbell Market
The AI investment landscape is a barbell: massive capital at the infrastructure/foundation model layer ($61.5B in mega-rounds of $100M+) and disciplined smaller checks at the application layer (median round: $7.5M). The middle is hollowing out. This signals that investors believe the infrastructure race requires enormous capital, while application-layer success depends more on execution than funding.
What This Means for Enterprises
Foundation model providers are well-funded — OpenAI, Anthropic, and Google have the capital to sustain multi-year R&D investments. They’re not going away.
Application-layer startups are fragile — Many AI startups have thin margins and limited runway. Evaluate vendor viability carefully before building dependencies.
Seed funding is declining — Down 11% to $2.6B. Fewer new AI startups are being funded, suggesting the market is consolidating around established players.
Key insight: The capital concentration tells you where the industry believes value will accrue: infrastructure and foundation models. For enterprise buyers, this means betting on well-funded platforms (OpenAI, Anthropic, Google, AWS, Azure) for critical workloads, while being cautious about dependencies on smaller application-layer startups that may not survive the consolidation.
The Application Layer: Where Enterprise Value Lives
AI-native products that are reshaping industries
Breakout Applications
ChatGPT (OpenAI) — 200M+ users, $500M+ revenue. The consumer gateway to AI. Enterprise version (ChatGPT Enterprise/Team) growing rapidly.
Cursor — $500M–$1B ARR. AI-native code editor that has redefined developer productivity. The fastest-growing developer tool in history.
Salesforce Einstein — $2B+ ARR. AI embedded into the world’s largest CRM platform. Agentforce for autonomous customer service.
Microsoft Copilot — AI integrated across Office 365, GitHub, Dynamics 365. The broadest enterprise AI deployment by user count.
Vertical AI Leaders
Harvey — AI for legal professionals. Contract review, legal research, document drafting.
Abridge — AI for healthcare. Clinical documentation and patient encounter summarization.
Glean — Enterprise search and knowledge management powered by AI.
Writer — Enterprise content generation with brand voice consistency and compliance controls.
Key insight: The most successful AI applications are not “AI for everything” — they’re AI deeply integrated into specific workflows. Cursor succeeds because it understands code context. Harvey succeeds because it understands legal context. When evaluating AI products, prioritize those built for your industry and workflow over general-purpose tools with AI bolted on.
The Geopolitical Dimension
AI as a strategic asset — the U.S.-China race and its implications
U.S. Dominance
The United States dominates AI across every dimension: 92% of global AI funding, home to the top foundation model companies (OpenAI, Anthropic, Google, Meta), controls the chip supply chain (NVIDIA, AMD), and hosts the major cloud platforms (AWS, Azure, GCP). U.S. export controls restrict China’s access to advanced AI chips, creating a significant compute disadvantage for Chinese AI labs.
China’s Response
Despite restrictions, China has responded with efficiency-driven innovation. DeepSeek demonstrated that frontier-quality models can be trained at a fraction of the cost, challenging the assumption that AI leadership requires unlimited compute. Chinese companies (Alibaba’s Qwen, Baidu’s ERNIE, ByteDance) are competitive in specific domains, particularly for Chinese-language applications and cost-optimized deployments.
Europe’s Position
Europe leads in AI regulation (EU AI Act, the world’s first comprehensive AI law) but lags in AI development. Mistral AI is the primary European challenger. The regulatory approach creates both opportunity (trust and compliance as competitive advantages) and risk (innovation friction that drives AI development to less regulated jurisdictions).
Critical for leaders: AI geopolitics directly affects enterprise strategy. U.S. export controls may restrict which AI tools are available in certain markets. EU regulations impose compliance requirements that affect deployment timelines and costs. Chinese AI alternatives offer cost advantages but carry data sovereignty and supply chain risks. Your AI vendor strategy must account for the geopolitical landscape, not just technical capabilities.
Navigating the Landscape: The Executive Map
How to make vendor and platform decisions in a rapidly shifting market
Five Principles for Vendor Selection
1. Bet on funded platforms for critical workloads — OpenAI, Anthropic, Google, AWS, Azure have the capital and scale to sustain long-term. Avoid building critical dependencies on underfunded startups.

2. Adopt a hybrid open/closed strategy — Closed-source for capability-intensive tasks. Open-source for high-volume, cost-sensitive, or data-sovereign workloads. The ratio will shift toward open-source over time.

3. Minimize model lock-in — Abstract your AI layer so you can switch models without rewriting applications. Use frameworks like LangChain or LiteLLM that support multiple providers.
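The abstraction in principle 3 can be as thin as a routing layer that keeps all provider-specific calls behind one interface. A minimal sketch follows; the adapter classes and their canned responses are hypothetical stand-ins, and in practice each would wrap a real provider SDK or a self-hosted endpoint:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The one interface application code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

# Hypothetical stand-ins: each adapter would wrap a real SDK or endpoint.
class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        return f"[openai] response to: {prompt}"   # real SDK call would go here

class LlamaAdapter:
    def complete(self, prompt: str) -> str:
        return f"[llama] response to: {prompt}"    # self-hosted endpoint call here

# Swapping models becomes a configuration change, not an application rewrite.
REGISTRY: dict[str, ChatModel] = {
    "gpt": OpenAIAdapter(),
    "llama": LlamaAdapter(),
}

def ask(model_name: str, prompt: str) -> str:
    return REGISTRY[model_name].complete(prompt)

print(ask("gpt", "Summarize Q3 contract risks"))
print(ask("llama", "Summarize Q3 contract risks"))
```

Frameworks like LangChain and LiteLLM provide this routing layer off the shelf; the design point is the same either way: application code depends on the interface, never on a vendor SDK.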
Principles (Continued)
4. Prioritize workflow-native AI over general-purpose — AI products built for your specific industry and workflow will outperform general-purpose tools with AI features added. Evaluate depth of integration, not breadth of features.

5. Plan for consolidation — The AI landscape will consolidate significantly over the next 2–3 years. Many of today’s startups will be acquired or fail. Build relationships with the likely survivors and maintain optionality where possible.
The bottom line: The AI landscape is the most dynamic technology market since the early internet. $226B in annual AI investment, 2,000+ AI companies, and a new breakthrough every quarter. For executives, the goal is not to pick the “winner” — it’s to build an AI architecture that can absorb change. Abstract your model layer, diversify your vendor relationships, invest in your own data and integration infrastructure, and stay close to the platforms with the capital and talent to endure.