4. Function calling is what transforms an LLM from a text generator into an active agent.
- The Tool Protocol: The LLM doesn't execute code; it outputs a JSON request to call a tool. Your application executes the tool and returns the result to the LLM.
- MCP (Model Context Protocol): The emerging standard for connecting AI models to external data sources and tools securely.
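The tool protocol above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the tool name, the `get_weather` function, and the JSON shape are all hypothetical stand-ins for whatever schema your model provider uses.

```python
import json

# Hypothetical tool: a stand-in for a real API call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# The application owns the tool registry and execution; the model never runs code.
TOOLS = {"get_weather": get_weather}

def handle_model_output(raw: str) -> str:
    """Parse the model's JSON tool request, execute the named tool,
    and return the result that would be fed back to the LLM."""
    request = json.loads(raw)
    tool = TOOLS[request["tool"]]          # look up the requested tool
    result = tool(**request["arguments"])  # the app executes it, not the model
    return json.dumps({"tool_result": result})

# The model emits a request like this instead of executing anything itself:
model_output = '{"tool": "get_weather", "arguments": {"city": "Paris"}}'
print(handle_model_output(model_output))
```

The key point the sketch makes concrete: the LLM's output is just data describing an intent, and the round trip (parse, execute, return result) happens entirely in your application code.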
5. RAG gives the agent access to private, real-time data it wasn't trained on.
- Embeddings: Converting text into high-dimensional vectors so that semantic similarity can be calculated mathematically.
- Vector Stores: Specialized databases designed to quickly find the "nearest neighbors" to a user's query to provide context to the LLM.
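The embedding-and-retrieval step can be sketched with toy vectors. In a real system an embedding model produces the vectors and a vector database does the search; here both are faked with a hand-written dictionary and brute-force cosine similarity, purely to show the "nearest neighbor" mechanic.

```python
import math

# Toy stand-in for an embedding model: hand-picked 3-dimensional vectors.
# Real embeddings are produced by a model and have hundreds of dimensions.
FAKE_EMBEDDINGS = {
    "how do I reset my password": [0.9, 0.1, 0.0],
    "steps to change account password": [0.8, 0.2, 0.1],
    "quarterly revenue report": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Semantic similarity as the cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest_neighbors(query_vec, store, k=2):
    """Rank stored documents by similarity to the query vector (brute force;
    a vector database does this with approximate-nearest-neighbor indexes)."""
    ranked = sorted(store.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

query = FAKE_EMBEDDINGS["how do I reset my password"]
print(nearest_neighbors(query, FAKE_EMBEDDINGS, k=2))
```

Note that the password-change document ranks close to the password-reset query despite sharing few exact words; that geometric closeness is what lets RAG retrieve relevant context to hand to the LLM.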
6. Complex agents require state machines, not linear chains, to handle loops, retries, and memory.
- Nodes and Edges: Modeling agent workflows as graphs where nodes are functions (or LLM calls) and edges are conditional routing logic.
- Checkpointer: Saving the graph's state at every step allows for "time travel," debugging, and Human-in-the-Loop approval before critical actions.
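The graph model above can be sketched in plain Python. This shows the idea behind frameworks like LangGraph, not their actual API: nodes are functions that transform a state dict, edges are routing functions, and a checkpointer snapshots the state after every step so you can replay or inspect the run.

```python
def plan(state):
    # Node: decide what to do; here it just counts attempts.
    state["attempts"] = state.get("attempts", 0) + 1
    return state

def act(state):
    # Node: pretend the action only succeeds on the second attempt,
    # to demonstrate a retry loop a linear chain couldn't express.
    state["done"] = state["attempts"] >= 2
    return state

def route(state):
    # Conditional edge: loop back to "plan" until the action succeeds.
    return "end" if state["done"] else "plan"

NODES = {"plan": plan, "act": act}
EDGES = {"plan": lambda s: "act", "act": route}

def run_graph(state, entry="plan", checkpoints=None):
    """Walk the graph from the entry node until routing returns "end",
    checkpointing the state after each node for "time travel" debugging."""
    node = entry
    while node != "end":
        state = NODES[node](state)
        if checkpoints is not None:
            checkpoints.append((node, dict(state)))  # immutable snapshot
        node = EDGES[node](state)
    return state

history = []
final = run_graph({}, checkpoints=history)
print(final)         # final state after the retry loop completes
print(len(history))  # one checkpoint per executed node
```

Because every intermediate state is saved, a human reviewer could pause the run at any checkpoint and approve or reject the next action before it executes, which is exactly the Human-in-the-Loop pattern described above.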
The Bottom Line: True agency requires three things: Tools (to take action), RAG (to access knowledge), and a Stateful Graph (to plan, loop, and remember).