Ch 9 — Framework Landscape

Under the Hood: 10 Steps
This section compares the internal architectures of the major agentic AI frameworks: how each defines agents, executes its loop, manages state, and wires up tools.
Step 1. LangGraph: Graph Compilation & Pregel Engine
The architecture we've been learning.
- StateGraph: TypedDict state, nodes, edges, reducers
- .compile() → Pregel Engine: channels, supersteps, a checkpoint after each superstep
- runs → Superstep Loop: execute nodes → apply reducers → checkpoint → next superstep
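The superstep loop above can be sketched in a few lines. This is an illustrative toy, not LangGraph's actual implementation: state is a dict of channels, each channel has a reducer, nodes return partial updates, and a checkpoint is taken after every superstep. The `greet`/`reply` nodes and the one-node-per-superstep scheduling are simplifications invented for the sketch.

```python
from operator import add

# Toy Pregel-style engine: channels with reducers, partial updates from nodes,
# and a checkpoint snapshot after each superstep. Illustrative sketch only.
reducers = {"messages": add, "count": lambda old, new: new}  # add = list concat

def apply_reducers(state, update):
    merged = dict(state)
    for channel, value in update.items():
        merged[channel] = reducers[channel](state[channel], value)
    return merged

def run(nodes, state, max_supersteps=10):
    checkpoints = []
    for _ in range(max_supersteps):
        if not nodes:
            break
        node = nodes.pop(0)                  # one node per superstep in this toy
        update = node(state)                 # node computes a partial state update
        state = apply_reducers(state, update)
        checkpoints.append(dict(state))      # checkpoint after each superstep
    return state, checkpoints

greet = lambda s: {"messages": ["hello"], "count": s["count"] + 1}
reply = lambda s: {"messages": ["world"], "count": s["count"] + 1}

final, history = run([greet, reply], {"messages": [], "count": 0})
```

The reducer per channel is the key idea: nodes never overwrite shared state directly, they emit deltas that the engine merges deterministically.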
Step 2. CrewAI: Crew Execution Pipeline
Role-based orchestration internals (Crew → Task → Agent).
- Crew.kickoff(): entry point; iterates tasks according to the process type
- assigns → Task → Agent: the agent builds its prompt from role, goal, and task context
- loops → Agent ReAct Loop: LLM → tool calls → observe → repeat until done
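A minimal sketch of the sequential pipeline, assuming a stubbed LLM: `kickoff()` walks the task list, each task's agent builds a prompt from its role, goal, and the outputs of earlier tasks, then executes. The class names mirror CrewAI's concepts, but this is not the library's code and the ReAct loop is collapsed into a one-line stand-in.

```python
# Toy CrewAI-style pipeline: Crew.kickoff() runs tasks in order, feeding each
# task the outputs of the previous ones as context. LLM call is stubbed.
class Agent:
    def __init__(self, role, goal):
        self.role, self.goal = role, goal

    def execute(self, task, context):
        prompt = f"You are {self.role}. Goal: {self.goal}.\nTask: {task}"
        if context:
            prompt += "\nContext: " + " | ".join(context)
        # `prompt` would be sent into the ReAct loop here; we return a stub.
        return f"[{self.role}] done: {task}"

class Crew:
    def __init__(self, tasks):          # tasks: list of (description, Agent)
        self.tasks = tasks

    def kickoff(self):
        outputs = []                    # earlier outputs become later context
        for description, agent in self.tasks:
            outputs.append(agent.execute(description, outputs))
        return outputs[-1]

researcher = Agent("researcher", "find facts")
writer = Agent("writer", "write a summary")
result = Crew([("gather sources", researcher), ("draft report", writer)]).kickoff()
```

The context threading is the point: in a sequential process, each task sees its predecessors' outputs, which is how role-based crews pass work along.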
Step 3. AutoGen: Conversational Agent Architecture
Message-passing and GroupChat internals.
- ConversableAgent: base class with generate_reply() plus send/receive
- joins → GroupChat: shared message thread with speaker selection
- managed by → GroupChatManager: selects the next speaker, round-robin or LLM-based
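The manager-over-shared-thread pattern can be sketched as follows. This models only the round-robin policy; real AutoGen can also ask an LLM to pick the next speaker, and its `ConversableAgent` does far more than this stub.

```python
# Minimal AutoGen-style group chat: agents share one message thread and a
# manager picks the next speaker round-robin. Illustrative, not library code.
class ConversableAgent:
    def __init__(self, name):
        self.name = name

    def generate_reply(self, messages):
        return f"{self.name} replying to message #{len(messages)}"

class GroupChatManager:
    def __init__(self, agents):
        self.agents = agents
        self.messages = []                     # the shared thread

    def select_speaker(self, turn):
        return self.agents[turn % len(self.agents)]   # round-robin policy

    def run(self, opening, max_turns=4):
        self.messages.append({"name": "user", "content": opening})
        for turn in range(max_turns):
            speaker = self.select_speaker(turn)
            reply = speaker.generate_reply(self.messages)
            self.messages.append({"name": speaker.name, "content": reply})
        return self.messages

chat = GroupChatManager([ConversableAgent("alice"), ConversableAgent("bob")])
thread = chat.run("hello", max_turns=3)
```

Swapping `select_speaker` for a call that shows the thread to an LLM and asks "who should speak next?" is exactly the difference between the round-robin and LLM-based modes.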
Step 4. OpenAI Agents SDK: Runner Loop & Handoff Mechanics
How the execution engine orchestrates agents.
- Runner.run(): entry point; takes an agent plus input and starts the execution loop
- loop → Decide & Execute: the LLM returns a tool call, a final output, or a handoff
- on handoff → Agent Switch: replace the current agent and carry the conversation forward
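A toy version of that runner loop, with scripted "models" in place of real LLM calls: each iteration the current agent's model returns a tool call, a handoff, or a final output, and a handoff swaps the agent while the conversation history is carried forward. The triage/math agents and the action dict shapes are invented for the sketch; only the control flow mirrors the SDK.

```python
# Toy runner loop: decide -> (tool_call | handoff | final_output), where a
# handoff replaces the current agent but keeps the growing conversation.
def run(agent, user_input, agents, max_turns=10):
    conversation = [("user", user_input)]
    for _ in range(max_turns):
        action = agent["model"](conversation)            # decide
        if action["type"] == "final_output":
            return agent["name"], action["content"]
        if action["type"] == "handoff":
            agent = agents[action["target"]]             # switch agents
            conversation.append(("handoff", action["target"]))
        elif action["type"] == "tool_call":
            result = action["tool"](*action["args"])     # execute
            conversation.append(("tool", result))        # observe
    raise RuntimeError("max_turns exceeded")

# Scripted models: triage hands off to a math specialist, which calls a tool.
def triage_model(conv):
    return {"type": "handoff", "target": "math"}

def math_model(conv):
    if not any(role == "tool" for role, _ in conv):
        return {"type": "tool_call", "tool": lambda a, b: a + b, "args": (2, 3)}
    return {"type": "final_output", "content": f"answer is {conv[-1][1]}"}

agents = {"triage": {"name": "triage", "model": triage_model},
          "math": {"name": "math", "model": math_model}}
final_agent, output = run(agents["triage"], "what is 2+3?", agents)
```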
Step 5. Google ADK: Session & Memory Architecture
How ADK manages state across conversations.
- Session object: id, appName, userId, events[], state{}
- managed by → SessionService: CRUD for sessions and state persistence
- plus → MemoryService: long-term, searchable knowledge across sessions
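The session half of this architecture can be sketched with an in-memory store. The field names follow the shape listed above (id, appName, userId, events[], state{}); the `InMemorySessionService` class and its method names are simplifications invented for the sketch, and a real service would persist to a database.

```python
import uuid

# Sketch of ADK-style session management: a Session holds the event log and
# mutable state; a SessionService does CRUD and persistence. Illustrative only.
class Session:
    def __init__(self, app_name, user_id):
        self.id = str(uuid.uuid4())
        self.app_name = app_name
        self.user_id = user_id
        self.events = []       # ordered conversation events
        self.state = {}        # key/value scratchpad for this session

class InMemorySessionService:
    def __init__(self):
        self._store = {}       # id -> Session; a real service would persist

    def create(self, app_name, user_id):
        session = Session(app_name, user_id)
        self._store[session.id] = session
        return session

    def get(self, session_id):
        return self._store[session_id]

    def append_event(self, session_id, event, state_delta=None):
        session = self._store[session_id]
        session.events.append(event)
        session.state.update(state_delta or {})

service = InMemorySessionService()
s = service.create("support-app", "user-42")
service.append_event(s.id, {"author": "user", "text": "hi"}, {"topic": "billing"})
restored = service.get(s.id)
```

A MemoryService would sit alongside this, indexing closed sessions so later conversations can search them; that retrieval layer is omitted here.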
Step 6. Pydantic AI: Type-Safe Agent Architecture
Generic agents, dependency injection, and structured output (Agent[Deps, Output]).
- Agent[Deps, Out]: generic in both the dependency type and the output type
- .run() → RunContext[Deps]: injects deps into tool functions at runtime
- validates → Pydantic output: BaseModel validation on every LLM response
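The two signature ideas, dependency injection through a run context and schema validation of every response, can be modeled with dataclasses so the sketch needs no third-party install. In the real library the output type is a Pydantic BaseModel and validation is far richer; here it is a per-field isinstance check, the generic `Agent[Deps, Out]` parameterization is dropped, and the `Deps`/`Output`/`lookup` names are invented for illustration.

```python
from dataclasses import dataclass, fields

@dataclass
class Deps:                    # whatever the tools need at runtime
    db: dict

@dataclass
class Output:                  # the declared structured output type
    answer: str
    confidence: float

@dataclass
class RunContext:
    deps: Deps

class Agent:
    def __init__(self, output_type, tool):
        self.output_type, self.tool = output_type, tool

    def run(self, prompt, deps):
        ctx = RunContext(deps=deps)                  # dependency injection point
        raw = {"answer": self.tool(ctx, prompt), "confidence": 0.9}  # stub LLM
        for f in fields(self.output_type):           # validate every field
            if not isinstance(raw[f.name], f.type):
                raise TypeError(f"field {f.name!r} failed validation")
        return self.output_type(**raw)

def lookup(ctx, prompt):
    return ctx.deps.db.get(prompt, "unknown")        # tool reads injected deps

agent = Agent(Output, lookup)
result = agent.run("capital of France", Deps(db={"capital of France": "Paris"}))
```

The payoff of this shape is that a malformed model response fails loudly at the validation step instead of propagating an untyped dict through the application.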
Step 7. smolagents: CodeAgent vs ToolCallingAgent
Two fundamentally different execution strategies.
- CodeAgent: the LLM writes Python → sandboxed exec → result
- vs ToolCallingAgent: the LLM returns JSON tool_calls → dispatch → result
- both extend MultiStepAgent: base class implementing plan → act → observe → loop
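The contrast between the two strategies fits in one sketch, assuming hardcoded model outputs. The real library sandboxes the `exec` and runs a full multi-step loop; here the base class is reduced to a single step and the `TOOLS` table is invented for the example.

```python
import json

# Two execution strategies over the same tool: a CodeAgent exec()s
# model-written Python; a ToolCallingAgent dispatches a JSON tool call.
TOOLS = {"add": lambda a, b: a + b}

class MultiStepAgent:                    # shared plan -> act -> observe skeleton
    def run(self, model_output):
        return self.act(model_output)    # single step for the sketch

class CodeAgent(MultiStepAgent):
    def act(self, model_output):
        scope = {"add": TOOLS["add"]}
        exec(model_output, scope)        # model wrote Python; it sets `result`
        return scope["result"]           # (the real library sandboxes this)

class ToolCallingAgent(MultiStepAgent):
    def act(self, model_output):
        call = json.loads(model_output)  # model wrote a JSON tool call
        return TOOLS[call["name"]](*call["arguments"])

code_result = CodeAgent().run("result = add(2, 3) * 10")
json_result = ToolCallingAgent().run('{"name": "add", "arguments": [2, 3]}')
```

Note what the CodeAgent buys: the model can compose tools with arbitrary Python (`* 10` here) in one step, where the ToolCallingAgent would need a second round-trip.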
Step 8. Semantic Kernel: Kernel, Plugins & Auto-Planning
Microsoft's enterprise orchestration layer.
- Kernel (DI): central container for services, plugins, and config
- registers → Plugins: @KernelFunction + @Description turn methods into tool schemas
- planned by → Function Calling: the LLM iteratively calls plugins to solve the task
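A sketch of that flow, with the planner scripted: functions registered on a kernel become tool schemas (name plus description) that would be offered to the model, which then invokes them step by step. The `Kernel` methods and the two sample plugins are stand-ins invented for the sketch, not Semantic Kernel's actual API, and the decorators are replaced by explicit registration.

```python
# Kernel as a registry: registered functions expose schemas to the model and
# are invoked by name. The planner below is scripted in place of an LLM.
class Kernel:
    def __init__(self):
        self.plugins = {}                        # name -> (description, fn)

    def register(self, name, description, fn):
        self.plugins[name] = (description, fn)

    def tool_schemas(self):                      # what gets sent to the LLM
        return [{"name": n, "description": d}
                for n, (d, _) in self.plugins.items()]

    def invoke(self, name, *args):
        return self.plugins[name][1](*args)

def planner(kernel, goal):
    # Stand-in for function-calling planning: a real model would read
    # tool_schemas() and choose these calls itself, one turn at a time.
    total = kernel.invoke("add", 2, 3)
    return kernel.invoke("describe", total)

kernel = Kernel()
kernel.register("add", "Add two numbers", lambda a, b: a + b)
kernel.register("describe", "Render a number as text", lambda n: f"the answer is {n}")
plan_output = planner(kernel, "what is 2 + 3, in words?")
```

The description strings matter more than they look: they are the only signal the model gets about when each plugin applies, which is why SK attaches them via decorators.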
Step 9. Comparison: The Agent Loop (Same Pattern, Different Wrappers)
Every framework implements the same core loop.
- LLM decides: tool call? final answer? handoff?
- Execute action: run a tool, call an agent, or write code
- Observe result: append to context, update state
- Repeat or stop: until a final answer or the max-iterations limit
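The four bullets above distill to a dozen lines. This is the loop every framework in this chapter wraps; the `decide()` policy is scripted here in place of an LLM call, and the action dict shape is invented for the sketch.

```python
# The core agent loop: decide -> execute -> observe -> repeat, capped.
def agent_loop(decide, tools, task, max_iterations=5):
    context = [("user", task)]
    for _ in range(max_iterations):
        action = decide(context)                          # 1. LLM decides
        if action["type"] == "final":
            return action["content"]
        result = tools[action["tool"]](*action["args"])   # 2. execute action
        context.append(("observation", result))           # 3. observe result
    return "stopped: max iterations"                      # 4. stop condition

def scripted_policy(context):
    observations = [c for role, c in context if role == "observation"]
    if not observations:
        return {"type": "tool", "tool": "square", "args": (7,)}
    return {"type": "final", "content": f"7 squared is {observations[-1]}"}

answer = agent_loop(scripted_policy, {"square": lambda x: x * x}, "square 7")
```

Everything the frameworks disagree on, graphs vs roles vs conversations, is really about how `decide` is prompted and how `context` is structured between iterations.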
Step 10. Architectural Tradeoffs Summary
What you gain and lose with each approach.
- Explicit vs magic: LangGraph/smolagents vs CrewAI/Agno
- Portable vs native: model-agnostic vs vendor-optimized
- Code vs config: programmatic graphs vs declarative roles