Ch 9 — The Vendor Landscape

SaaS platforms, cloud-native agents, open-source frameworks, and the build-vs-buy decision
High level: SaaS → Cloud → Open Source → Build/Buy → Evaluate → Deploy
The Market in 2026
$7.9B market growing at 45.8% CAGR — every major platform now has an agent story
Market Overview
The enterprise AI agent market was $7.9 billion in 2025 and is projected to reach $236 billion by 2034, a 45.8% CAGR (as a sanity check: $7.9B × 1.458^9 ≈ $236B). Gartner predicts 40% of enterprise applications will feature task-specific AI agents by the end of 2026, up from less than 5% in 2025. Every major enterprise platform (Salesforce, Microsoft, ServiceNow, SAP, UiPath) now has an agent offering. The shift from "AI experiments" to "production agents" happened faster than predicted. But the landscape is fragmented: SaaS platforms (configure-and-deploy), cloud-native services (build-on-infrastructure), and open-source frameworks (code-from-scratch) serve fundamentally different needs. Choosing wrong costs 6–12 months.
Market Snapshot
Market size:
- 2025: $7.9B
- 2034: $236B (projected)
- CAGR: 45.8%

Adoption:
- 79% of orgs have deployed agents
- 40% of enterprise apps by end of 2026 (Gartner)
- Average projected ROI: 171%

Three vendor categories:
- SaaS platforms: configure & deploy
- Cloud-native: build on infrastructure
- Open-source: code from scratch

Foundation model share (enterprise):
- Anthropic (Claude): 40%
- OpenAI: 27%
- Google: 21%
Key insight: The foundation model market share tells a story: enterprises prioritize safety, reasoning, and business-focused controls over raw capability. Claude's 40% enterprise share reflects this priority.
SaaS Platforms: Salesforce, Microsoft, ServiceNow
Configure-and-deploy agents embedded in the platforms you already use
The SaaS Approach
SaaS agent platforms embed AI directly into the enterprise software you already run. Salesforce Agentforce leads with 8,000+ customers and $900M in AI revenue within six months of launch. It excels at CRM-centric workflows (sales coaching, service handoffs, marketing automation), with pre-built agents, an Agent Exchange marketplace, and action-based pricing at $0.10 per action. Microsoft Copilot Studio is used by 230,000+ organizations and is strongest inside the Microsoft 365, Teams, and SharePoint ecosystem. ServiceNow Now Assist targets $1B in AI-specific revenue by 2026, has a new OpenAI partnership for voice agents, and excels at IT service management and HR operations. The SaaS model offers the fastest time-to-value but constrains you to the platform's boundaries.
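To see what action-based pricing means in practice, here is a back-of-the-envelope cost model. The $0.10 rate is Agentforce's published per-action price; the volume figures are hypothetical placeholders you should replace with your own telemetry.

```python
# Back-of-the-envelope cost model for action-based agent pricing.
# $0.10/action is Agentforce's published rate; the volumes below
# are hypothetical assumptions, not benchmarks.
PRICE_PER_ACTION = 0.10

conversations_per_day = 2_000   # hypothetical service volume
actions_per_conversation = 3    # e.g. lookup, update, handoff
days_per_month = 30

monthly_actions = conversations_per_day * actions_per_conversation * days_per_month
monthly_cost = monthly_actions * PRICE_PER_ACTION
print(f"{monthly_actions:,} actions/month -> ${monthly_cost:,.0f}/month")
# Output: 180,000 actions/month -> $18,000/month
```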
SaaS Comparison
Salesforce Agentforce
- Customers: 8,000+
- Revenue: $900M in 6 months
- Pricing: $0.10 per action
- Best for: Sales, service, marketing
- Strength: CRM-native, marketplace

Microsoft Copilot Studio
- Orgs: 230,000+
- Pricing: Included in some M365 plans
- Best for: Microsoft ecosystem
- Strength: Teams, SharePoint, wikis

ServiceNow Now Assist
- Target: $1B AI revenue by 2026
- Best for: ITSM, HR operations
- Strength: Orchestrator, voice agents
Key insight: SaaS platforms are the right choice when the agent's scope aligns with the platform's domain: Salesforce for customer-facing work, ServiceNow for internal operations, Microsoft for knowledge work. The wrong choice is trying to stretch them beyond their natural boundaries.
Cloud-Native: AWS, Google, Azure
Build agents on enterprise-grade infrastructure with full model choice
The Cloud Approach
Cloud-native agent services offer more flexibility than SaaS platforms at the cost of more engineering effort. AWS Bedrock Agents provides access to 7+ model providers (Claude, Llama, Mistral, Cohere) with seamless IAM, VPC, and Lambda integration — rated 10/10 for security with SOC 2, HIPAA, and GDPR compliance. Google Vertex AI Agent Builder excels at multimodal workloads via Gemini with native Google Search grounding and tight BigQuery integration. Azure AI Studio offers the cheapest low-volume pricing and strong Copilot integration. The cloud approach is best when you need model flexibility, custom tool integration, or cross-platform agents that span multiple enterprise systems.
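As a concrete sketch of the build-on-infrastructure model, the snippet below invokes an existing Bedrock agent with boto3. The invoke_agent call is the real bedrock-agent-runtime API; the agent and alias IDs are placeholders for values you receive after creating an agent in your account.

```python
# Minimal sketch: invoke a Bedrock agent and stream its reply.
import uuid
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),    # ties multi-turn calls together
    inputText="Summarize the open tickets assigned to my team.",
)

# The reply arrives as an event stream of chunks.
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"), end="")
```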
Cloud Comparison
AWS Bedrock Agents
- Models: 7+ providers
- Security: 10/10 (SOC 2, HIPAA, GDPR)
- Best for: AWS-native teams
- Strength: IAM, model choice

Google Vertex AI Agent Builder
- Rating: 4.2/5
- Best for: Multimodal, ML-heavy
- Strength: Gemini, BigQuery, Search

Azure AI Studio
- Rating: 4.3/5
- Best for: Microsoft ecosystem
- Strength: Cheapest low-volume

// Choose based on existing cloud commitment, not feature lists
Key insight: The strongest predictor of cloud platform success is existing cloud commitment, not feature comparison. An AWS shop building on Vertex AI (or vice versa) will spend months on infrastructure plumbing that provides zero business value.
Open-Source Frameworks: LangGraph, CrewAI
Maximum control and 55% lower per-agent cost — but 2.3x more setup time
The Open-Source Approach
Open-source agent frameworks offer maximum control and the lowest per-agent cost — 55% lower than pure platform solutions — but require 2.3x more initial setup time and strong engineering teams. LangGraph (by LangChain, 25k GitHub stars) provides granular control over cyclic workflows with token-level streaming and conditional edge routing — best for complex stateful agents but with a steep learning curve. CrewAI (44.6k stars) offers role-based multi-agent collaboration with faster idea-to-production cycles — best for content generation, research, and parallel agent workflows. Pydantic AI (15.1k stars) is the type-safe newcomer gaining traction for its developer experience. Forrester predicts 75% of companies attempting to build their own agentic systems will fail.
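To make the control-versus-complexity trade-off concrete, here is a minimal LangGraph sketch of the kind of cyclic, stateful workflow the framework is built for. It assumes langgraph is installed; the write and review functions are stubs standing in for real LLM calls, and the approval rule is a placeholder.

```python
# Minimal LangGraph sketch: a write -> review loop with a
# conditional edge that retries the draft until it is approved.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    draft: str
    approved: bool
    attempts: int

def write(state: State) -> dict:
    # Stub for an LLM call that drafts or revises content.
    n = state["attempts"] + 1
    return {"draft": f"draft v{n}", "attempts": n}

def review(state: State) -> dict:
    # Stub for an LLM-as-judge or rule-based quality check.
    return {"approved": state["attempts"] >= 2}

def route(state: State) -> str:
    # Conditional edge: loop back to the writer until approved.
    return "done" if state["approved"] else "revise"

graph = StateGraph(State)
graph.add_node("write", write)
graph.add_node("review", review)
graph.set_entry_point("write")
graph.add_edge("write", "review")
graph.add_conditional_edges("review", route, {"revise": "write", "done": END})

app = graph.compile()
print(app.invoke({"draft": "", "approved": False, "attempts": 0}))
# Final state: {'draft': 'draft v2', 'approved': True, 'attempts': 2}
```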
Framework Comparison
LangGraph
- Stars: 25k
- Control: Highest
- Learning curve: Steep
- Best for: Complex stateful agents

CrewAI
- Stars: 44.6k
- Control: Moderate
- Learning curve: Moderate
- Best for: Multi-agent collaboration

Pydantic AI
- Stars: 15.1k
- Control: Moderate
- Learning curve: Low
- Best for: Type-safe agent dev

Economics:
- Cost per agent: 55% lower
- Setup time: 2.3x more
- Failure rate: 75% (Forrester)
Key insight: Open-source is the right choice when AI is your core differentiator and you have the engineering talent to sustain it. For most enterprises, the 75% failure rate on custom builds makes a strong case for starting with a platform and customizing from there.
The Build vs Buy Decision
Purchased solutions have a 67% success rate vs 33% for internal builds
The Decision Framework
Build when AI is your competitive advantage — when it's your product or proprietary IP. Buy for everything else. Purchased vendor solutions have a 67% success rate compared to 33% for internal builds. The reasons are structural: the field evolves too rapidly for most teams to keep up, there's a wide gap between working prototypes and production-grade deployments, data preparation accounts for up to 80% of total project effort, and infrastructure shifts faster than organizations can standardize. Even Lambda — a $4B+ AI company with world-class engineers — chose to buy rather than build. Only 11% of organizations have AI agents in production, and Forrester predicts 75% of companies attempting to build their own agentic systems will fail.
Decision Matrix
Build when:
- AI is your product / core IP
- You have dedicated AI engineering
- Unique data gives competitive edge
- No vendor covers your domain

Buy when:
- AI supports (not is) your business
- Speed to production matters
- You lack specialized AI talent
- Standard use cases (CRM, ITSM, HR)

Success rates:
- Buy: 67%
- Build: 33%

// Lambda ($4B+ AI company) chose to buy rather than build internally
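One way to operationalize the matrix is as a simple checklist score. The sketch below is an illustration, not an established methodology: the questions mirror the matrix above, and the three-signal threshold is an assumption that defaults to buying, consistent with the 67% versus 33% success rates.

```python
# Hypothetical sketch: encode the build-vs-buy matrix as a score.
BUILD_SIGNALS = [
    "AI is our product or core IP",
    "We have a dedicated AI engineering team",
    "Unique data gives us a competitive edge",
    "No vendor covers our domain",
]

def build_or_buy(answers: dict[str, bool]) -> str:
    score = sum(answers.get(q, False) for q in BUILD_SIGNALS)
    # Assumption: anything short of a strong majority of build
    # signals defaults to buy, given the 33% internal success rate.
    return "build" if score >= 3 else "buy"

print(build_or_buy({
    "AI is our product or core IP": False,
    "We have a dedicated AI engineering team": True,
    "Unique data gives us a competitive edge": True,
    "No vendor covers our domain": False,
}))  # -> buy
```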
Key insight: The build-vs-buy decision is not about capability — it's about sustained investment. Building an agent is the easy part. Maintaining it as models, APIs, and best practices change monthly is where most organizations fail.
Three Vendor Models
Consulting firms, SaaS platforms, and the emerging agent-platform-plus-engineering model
Engagement Models
Beyond the technology choice, the engagement model determines outcomes. Consulting firms (Deloitte, Accenture, McKinsey) charge $500K–$2M+ for a first production agent over 6–12+ month engagements — their incentive structure rewards longer engagements rather than speed. SaaS platforms (Salesforce, ServiceNow, Glean) offer per-user or usage pricing with days-to-weeks deployment — predictable costs but limited scope. The emerging third model is agent platform + embedded engineering: per-agent pricing with Forward Deployed Engineers included, typically a 3-month POC with measurable outcomes, after which business teams own operations. Each model has its place, but the choice should match your organization's maturity and internal capability.
Vendor Model Comparison
Consulting firms
- Cost: $500K-$2M+
- Timeline: 6-12+ months
- You own: Knowledge transfer
- Risk: Incentive misalignment

SaaS platforms
- Cost: Per-user / per-action
- Timeline: Days to weeks
- You own: Configuration
- Risk: Platform boundaries

Agent platform + engineering
- Cost: Per-agent
- Timeline: 3-month POC
- You own: Operations post-deploy
- Risk: Newer model, less proven
Key insight: Ask every vendor: "What happens when the engagement ends?" The best vendor relationships produce organizational capability, not just a working agent. If the vendor leaves and the agent can't be maintained, you've bought a depreciating asset.
Evaluation Criteria
The five readiness pillars and the questions that separate good vendors from great ones
Pre-Vendor Readiness
Before evaluating vendors, assess your own readiness across five pillars. Business problem: do you have a specific, measurable use case identified? Data foundations: is the data for that use case clean and accessible? Infrastructure: do you have cloud or on-prem capacity for AI workloads? Organizational culture: do you have leadership buy-in and team adaptability? Governance baseline: are data privacy and access controls in place? Vendors can't compensate for gaps in these pillars; they just make the failures more expensive. Once ready, evaluate vendors on model-agnosticism (avoid lock-in to a single LLM provider), human-in-the-loop controls, audit trails, and total cost of ownership including ongoing maintenance.
Evaluation Checklist
5 Readiness Pillars (pre-vendor):
1. Business problem identified?
2. Data clean & accessible?
3. Infrastructure ready?
4. Leadership buy-in?
5. Governance baseline set?

Vendor evaluation:
□ Model-agnostic? (avoid lock-in)
□ Human-in-the-loop controls?
□ Audit trail & logging?
□ Total cost of ownership?
□ What happens when you leave?
□ How fast to first value?
□ Who owns the data?
□ SLA for agent uptime?

// The best question: "Show me a customer who left. What happened?"
Key insight: The most revealing vendor question is about portability: "If we leave in 12 months, what do we take with us?" A vendor confident in their value will answer clearly. One that depends on lock-in will deflect.
Avoiding Vendor Lock-In
Model-agnostic platforms, data portability, and the abstraction layer strategy
The Lock-In Problem
The AI landscape changes so fast that today's best vendor may be tomorrow's legacy system. Vendor lock-in in AI agents manifests in three ways: model lock-in (tied to a single LLM provider whose pricing or quality may change), platform lock-in (agent logic embedded in proprietary tools that can't be exported), and data lock-in (training data, fine-tuning, and feedback loops trapped in a vendor's system). The mitigation strategy is an abstraction layer: use model-agnostic platforms that support multiple LLM providers, keep agent logic in portable formats (code, not drag-and-drop), and ensure all data — prompts, feedback, evaluations — can be exported. Evaluate platforms that support Anthropic, OpenAI, and Google models simultaneously.
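To show what the abstraction layer looks like in practice, the sketch below defines a provider-agnostic interface, assuming the official anthropic and openai Python SDKs. The ChatModel protocol and get_model helper are hypothetical names and the model IDs are examples; the point is that switching providers becomes a configuration change, which passes this chapter's one-week portability test.

```python
# Hypothetical abstraction layer: one interface, multiple providers.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class AnthropicModel:
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):  # example ID
        import anthropic
        self.client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        self.model = model

    def complete(self, prompt: str) -> str:
        msg = self.client.messages.create(
            model=self.model, max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

class OpenAIModel:
    def __init__(self, model: str = "gpt-4o"):  # example ID
        from openai import OpenAI
        self.client = OpenAI()  # reads OPENAI_API_KEY
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

def get_model(provider: str) -> ChatModel:
    # Switching providers is a config change, not a rewrite.
    return {"anthropic": AnthropicModel, "openai": OpenAIModel}[provider]()
```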
Lock-In Mitigation
Three types of lock-in:
1. Model lock-in (single LLM)
2. Platform lock-in (proprietary tools)
3. Data lock-in (trapped feedback)

Mitigation strategy:
□ Model-agnostic platform
□ Agent logic in portable code
□ Full data export capability
□ Standard protocols (MCP, A2A)
□ Multi-provider support
□ Own your evaluation data

Practical test: Can you switch LLM providers in under 1 week? If no, you're locked in.

// MCP & A2A are emerging standards that reduce platform lock-in
Key insight: The emerging standards — MCP (Model Context Protocol) for agent-to-tool communication and A2A (Agent-to-Agent Protocol) for agent-to-agent communication — are the best insurance against lock-in. Prefer vendors that support these open standards.