Context
Since 2023, a wave of frameworks has emerged for orchestrating multiple LLM-powered agents: Microsoft's AutoGen, CrewAI's role-based crews, LangGraph for stateful agent graphs, and many others. They share a core idea: wrap LLM calls in agent abstractions with roles, tools, and conversation protocols. The differences lie in how much structure they impose (free-form chat vs. a rigid DAG), in human-in-the-loop support, and in observability. This chapter surveys the design patterns rather than endorsing a single library.
Pattern
Agent = LLM + role + tools
Framework = orchestration glue
// Patterns outlast libraries
Key insight: Learn the patterns (role specialization, conversation loops, tool routing); frameworks change fast, but the patterns carry over.
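The three patterns can be sketched without any framework at all. A minimal, framework-agnostic illustration follows; all names (`Agent`, `Tool`, `run_conversation`, the `CALL <tool>: <args>` convention) are invented for this sketch and do not come from AutoGen, CrewAI, or LangGraph. The LLM is stubbed as a plain text-in/text-out callable so the example runs offline.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]

@dataclass
class Agent:
    # Role specialization: the role string stands in for a system prompt / persona.
    role: str
    # Any text-in/text-out callable works here; real code would wrap an LLM API.
    llm: Callable[[str], str]
    tools: dict[str, Tool] = field(default_factory=dict)

    def step(self, message: str) -> str:
        prompt = f"[{self.role}] {message}"
        reply = self.llm(prompt)
        # Tool routing: if the model emits "CALL <tool>: <args>", dispatch it.
        if reply.startswith("CALL "):
            head, _, args = reply[5:].partition(": ")
            tool = self.tools.get(head)
            if tool:
                return tool.func(args)
        return reply

def run_conversation(agents: list[Agent], task: str, rounds: int = 2) -> str:
    """Conversation loop: pass the evolving message between agents in turn."""
    message = task
    for _ in range(rounds):
        for agent in agents:
            message = agent.step(message)
    return message

# Stub "LLMs": a researcher that always calls its search tool,
# and a writer that summarizes whatever it receives.
def researcher_llm(prompt: str) -> str:
    return "CALL search: agent frameworks"

def writer_llm(prompt: str) -> str:
    return f"Summary of: {prompt.split('] ', 1)[1]}"

search = Tool("search", "fake web search", lambda q: f"results for {q}")
researcher = Agent("researcher", researcher_llm, {"search": search})
writer = Agent("writer", writer_llm)

result = run_conversation([researcher, writer], "Survey agent frameworks", rounds=1)
print(result)  # -> Summary of: results for agent frameworks
```

Swapping the stub callables for real model clients changes nothing structural; that is the sense in which the patterns, not the libraries, are the durable interface.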