Ch 3 — Prompt Templates & Output Parsers

Structured inputs and validated outputs. Control what goes into the LLM and what comes out.
A. Why Prompt Templates? Don't hardcode — parameterize.

  Hardcoded string: "Explain quantum computing"
    vs.
  Template: "Explain {topic} in {style}" (reusable)
  Many inputs: the same template, different variables.
B. Prompt Template Types. From simple to multi-turn.

  PromptTemplate: a single string with {variables}
  ChatPromptTemplate: system + human message pairs
  MessagesPlaceholder: inject conversation history dynamically
C. Why Output Parsers? LLMs return text — you need structure.

  LLM output: free-form text, e.g. "The answer is 42"
    parse into
  Structured data: {"answer": 42} (typed, validated)
    use in
  Your code: result.answer (type-safe access)
D. Output Parser Types. From StrOutputParser to Pydantic validation.

  StrOutputParser: returns a plain string (covered in Ch 2)
    stricter:
  JsonOutputParser: parses JSON into a dict
    strictest:
  PydanticOutputParser: validates against a Pydantic model
E. with_structured_output(). The preferred way: let the model enforce the schema.

  Pydantic model: define your output schema
    bind with
  model.with_structured_output(): the model enforces the schema natively
    returns
  Pydantic object: validated, typed, ready to use
F. Complete Chain with Structured Output. Template in, validated Pydantic object out.

  Template: "Analyze {text} for sentiment"
    |
  Structured model: with_structured_output(Analysis)
    .invoke()
  Analysis object: sentiment="positive", confidence=0.92