Ch 2 — Your First LLM Chain

Prompt in, response out. The simplest possible LangChain call.
A. What Is a Chain?
A chain is the simplest unit of work in LangChain. Input, a question or instruction passed as a dict, goes into the chain (Prompt | Model | Parser), which produces output: a string, an object, or structured data.
B. The Three Pieces: Prompt, Model, Parser
Every chain is built from three pieces. The prompt template formats user input, filling variables like {topic}; the chat model (GPT-4o, Claude, Gemini, Llama, ...) generates a response; the output parser extracts the text or structured data from that response.
C. Step 1: The Prompt Template
The prompt template turns user input into formatted messages. A system message ("You are a helpful assistant") plus a human message ("Explain {topic} in one sentence") becomes a message list ready to send to the model.
D. Step 2: The Chat Model
The chat model sends the messages to an LLM API and gets a response. Messages in ([SystemMessage, HumanMessage]) become an API call to the provider (OpenAI, Anthropic, Google, or a local model), which returns an AIMessage: the response content plus metadata (token counts, model name, etc.).
E. Step 3: The Output Parser
The output parser extracts usable data from the raw AIMessage returned by the LLM. StrOutputParser pulls out the .content string; JsonOutputParser parses JSON from the content into a Python dict.
F. The Complete Chain
The three pieces compose into one callable with the | operator: prompt (a ChatPromptTemplate) | model (ChatOpenAI, gpt-4o) | parser (a StrOutputParser). Calling .invoke() on the chain runs the whole pipeline and returns the result, e.g. "Quantum computing uses qubits..."