Ch 1 — What Are Multi-Agent Systems?

Agents, autonomy, interaction, and the field of distributed artificial intelligence
High Level: Agent → Goal → Interact → Society → MAS → Apply
The Classical Definition
Wooldridge and the textbook view of MAS
What the Field Studies
In An Introduction to MultiAgent Systems, Michael Wooldridge describes multi-agent systems as a paradigm for building and understanding distributed systems where components are autonomous: they control their own behavior while pursuing their own objectives. The emphasis is on many interacting intelligent agents — software processes, robots, or other entities — rather than a single monolithic program. MAS research spans how to design capable individual agents and how to design societies of agents that can work together (or compete) effectively. This course uses that classical framing and later connects it to today’s LLM-based multi-agent applications.
Core Questions
Agent design: What does it mean to act intelligently in an environment?
Society design: How should agents communicate, coordinate, and divide work?
Interaction types: cooperative (shared goal); competitive / self-interested (markets)
// Wooldridge; standard MAS curricula
Key insight: MAS is not “many API calls” by itself — it is a design discipline for autonomy, interaction, and (often) incomplete information.
What Is an Agent?
Autonomy, environment, sensors, and actuators
The Usual Picture
An agent is a system situated in an environment: it receives information through sensors (API responses, user messages, sensor readings) and acts through actuators (tool calls, motor commands, database writes). Autonomy means the agent decides what to do next based on its internal state and goals, not that every step is hard-coded by a human operator. Practical definitions also stress reactivity (responding to changes), pro-activeness (taking initiative), and social ability (interacting with other agents or humans). In LLM stacks, a single chat model with a tool loop is often described as an agent; multiple such processes messaging each other form a multi-agent system.
Agent Loop
perceive → update beliefs/state
deliberate → choose action / message
act → tools, messages, moves
repeat
// Same abstract loop for robots or LLMs
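The loop above can be sketched in a few lines of Python. This is a toy illustration, not any framework's API; the `EchoAgent` class and its method names are hypothetical stand-ins for perceive, deliberate, and act.

```python
class EchoAgent:
    """Toy agent whose goal is to acknowledge every message it perceives."""

    def __init__(self):
        self.beliefs = []  # internal state, updated from percepts

    def perceive(self, percept):
        self.beliefs.append(percept)  # update beliefs/state

    def deliberate(self):
        # choose an action based on current beliefs and the goal
        return f"ack: {self.beliefs[-1]}" if self.beliefs else None

    def act(self, action):
        return action  # here "acting" is simply emitting a message


def run(agent, percepts):
    """Drive the perceive → deliberate → act loop over a stream of percepts."""
    outputs = []
    for p in percepts:
        agent.perceive(p)
        outputs.append(agent.act(agent.deliberate()))
    return outputs


print(run(EchoAgent(), ["hello", "status?"]))
```

The same skeleton fits a robot (percepts are sensor readings, actions are motor commands) or an LLM agent (percepts are messages and tool outputs, actions are tool calls or replies).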
Key insight: “Agent” is a role in an architecture, not a brand name: anything that closes the perceive–decide–act loop with goals can be modeled as an agent.
One Agent vs Many
When decomposition pays off
Single-Agent Baseline
A single agent (or one orchestrated pipeline) keeps all reasoning and state in one place. That is simpler to debug and deploy when the task is narrow. Multi-agent designs split responsibilities across entities with distinct roles, memories, or policies. Reasons to use MAS include: modularity (teams can own different agents), parallelism (independent subtasks), robustness (failure isolation), specialization (different models or prompts per role), and explicit interaction (negotiation, critique, voting). Costs include coordination overhead, harder testing, and risk of incoherent or conflicting behavior if protocols are weak.
Rule of Thumb
Prefer a single agent when: the task is sequential, the context is small, there is one owner
Consider MAS when: roles are stable, subtasks parallelize, or you need explicit debate / oversight
// Multi-agent is a design choice, not hype
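The rule of thumb can be written down as a small decision helper. The function and criteria names are illustrative, a sketch of the heuristic above rather than a formal method.

```python
def suggest_topology(roles_stable: bool,
                     subtasks_parallelize: bool,
                     needs_oversight: bool) -> str:
    """Toy heuristic mirroring the rule of thumb: add agents only when
    interaction structure buys clarity, safety, or scale."""
    if roles_stable or subtasks_parallelize or needs_oversight:
        return "multi-agent"
    return "single-agent"


# A narrow, sequential task with one owner stays single-agent:
print(suggest_topology(False, False, False))
# Stable roles plus a need for explicit oversight tips toward MAS:
print(suggest_topology(True, False, True))
```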
Key insight: More agents ≠ smarter system. Add agents when interaction structure buys clarity, safety, or scale — not by default.
Cooperative vs Competitive
Shared goals, markets, and mixed motives
Two Familiar Regimes
Wooldridge contrasts cooperative settings (e.g., teams, supply-chain partners optimizing a shared objective) with self-interested settings closer to markets, where agents optimize private utility and interaction is structured by prices, contracts, or rules. Real systems are often mixed: engineering squads cooperate inside a firm while competing for budget; LLM agents may cooperate on a user task yet still reflect prompts that encode conflicting sub-goals. MAS theory supplies vocabulary and mechanisms — auctions, voting, norms — for both extremes. Later chapters connect competition to game theory and cooperation to joint plans and commitments.
Spectrum
Fully cooperative → shared reward
Competitive → private payoff
General-sum / mixed → real life
// Mechanism must match incentives
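The three regimes become concrete with tiny two-player payoff matrices. The numbers below are illustrative, not from the source; each entry maps a joint action to a `(row_payoff, col_payoff)` pair.

```python
cooperative = {  # identical payoffs: a shared reward
    ("A", "A"): (3, 3), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}
zero_sum = {     # strictly opposed payoffs: pure competition
    ("A", "A"): (1, -1), ("A", "B"): (-1, 1),
    ("B", "A"): (-1, 1), ("B", "B"): (1, -1),
}
general_sum = {  # mixed motives (Prisoner's Dilemma shape): "real life"
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}


def regime(payoffs):
    """Classify a 2-player matrix by how the agents' payoffs relate."""
    if all(r == c for r, c in payoffs.values()):
        return "fully cooperative"
    if all(r == -c for r, c in payoffs.values()):
        return "zero-sum (competitive)"
    return "general-sum / mixed"


for name, m in [("cooperative", cooperative),
                ("zero_sum", zero_sum),
                ("general_sum", general_sum)]:
    print(name, "→", regime(m))
```

In the general-sum case, neither "everyone shares one reward" nor "your loss is my gain" holds, which is why mechanism design (auctions, voting, norms) matters there most.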
Key insight: If incentives are misaligned, “cooperative” prompts alone won’t keep agents aligned — you need protocols, oversight, or redesigned rewards.
Why Build Multi-Agent Systems?
Engineering and scientific motivations
Practical Drivers
Organizations adopt MAS-style software when problems are naturally distributed: different owners, data silos, or geographic sites. Scientifically, MAS models social and economic processes (traffic, ecosystems, electronic markets) where no central controller exists. In AI engineering, multi-agent setups can mirror human team workflows — researcher, coder, reviewer — each implemented as a policy with its own context window. Benefits include reuse of specialized components and clearer interfaces between them. Drawbacks: you must engineer message schemas, termination conditions, and failure recovery across process boundaries.
Checklist
Distribution of data or ownership?
Parallelism / throughput need?
Role clarity for prompts & models?
Oversight via separate critic agent?
// If all "no", start single-agent
Key insight: Treat MAS as systems integration: the hard part is usually interfaces and state — not the number of LLM calls.
Application Domains
Where MAS has a long track record
Examples
Historically, MAS appeared in robot soccer, sensor networks, manufacturing control, logistics and ride-sharing, simulation of populations, and grid computing. In enterprise IT, workflow engines and microservices often resemble societies of agents even when not labeled as such. Recent LLM applications reuse the same patterns for research assistants, coding crews, customer support triage, and games / simulations. The domain shapes whether you need hard real-time guarantees, formal protocols, or flexible natural-language chatter between agents.
Mapping
Physical world → sensing noise, safety, real-time constraints
Enterprise → ACLs, audit logs, human approvals
LLM chat → unstructured messages, higher ambiguity, tool risk
// Chapter 7: LLM frameworks
Key insight: Domain requirements drive protocol strictness: robotics needs milliseconds and determinism; office agents often need human-readable traces and policy gates.
Classical MAS Meets LLM Agents
Same ideas, new substrates
Continuity and Differences
LLM-based agents still perceive (tokens, tool outputs), update internal state (context, memory stores), and act (send messages, call APIs). Classical MAS contributed speech-act taxonomies, negotiation protocols, and commitment models; LLM systems often approximate these with natural language and ad-hoc prompts. Differences: non-determinism and hallucination complicate guarantees; context limits force summarization; cost and latency dominate at scale. Frameworks such as Microsoft’s AutoGen (multi-agent conversation, tools, humans in the loop) operationalize patterns that MAS researchers described abstractly. This course bridges terminology so you can read both academic MAS and vendor docs.
Vocabulary Bridge
ACL / ontology ≈ JSON schema + shared prompt contracts
Contract net ≈ manager agent broadcasts subtasks
BDI ≈ planner + belief store (often implicit in LLM context)
// Informal analogies, not formal proofs
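The contract-net analogy in the bridge above can be sketched in a few lines. This is a deliberately simplified toy: all names are hypothetical, and a real contract-net protocol also handles timeouts, awards, rejections, and failure recovery.

```python
def contract_net(task, workers):
    """Toy contract net: a manager broadcasts a task, each worker bids a
    cost estimate, and the manager awards the task to the lowest bid.

    workers: mapping of worker name -> cost-estimating function.
    """
    bids = {name: estimate(task) for name, estimate in workers.items()}
    winner = min(bids, key=bids.get)  # manager awards to the best (lowest) bid
    return winner, bids[winner]


# Illustrative cost models: in an LLM system these might be self-reported
# confidence scores or token-cost estimates per role.
workers = {
    "coder":    lambda t: len(t) * 1.0,
    "reviewer": lambda t: len(t) * 2.5,
}
print(contract_net("fix bug #42", workers))
```

In LLM stacks the "bids" are often natural-language self-assessments rather than numbers, which is exactly the softer-interface trade-off this section describes.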
Key insight: LLM multi-agent is MAS engineering with softer interfaces — which raises both flexibility and the need for evaluation and guardrails.
How This Course Unfolds
Roadmap through ten chapters
Sequence
We move from definitions and architectures (Chapters 1–2) to communication and coordination (3–4), then planning and negotiation (5). The second half covers emergence and incentives (6), LLM-native frameworks (7), evaluation (8), safety (9), and production patterns (10). Related material appears in Agentic AI, MCP, and Reasoning & CoT — use those for deeper single-agent reasoning and tooling.
Chapters at a Glance
1. Definitions (this chapter)
2. Architectures & paradigms
3. Communication
4. Coordination
5. Planning & negotiation
6. Game theory & emergence
7. LLM frameworks
8. Evaluation
9. Safety
10. Production & future
Key insight: Master the concepts first; any specific framework is a thin layer on top of communication, coordination, and evaluation discipline.