What It Was
Prompt engineering was the practice of manually crafting specific text instructions for individual LLM interactions. Techniques like chain-of-thought, few-shot examples, and role-based system prompts dominated the field from 2022 through early 2025. The focus was entirely on how you phrase the question to the model.
The Limitation
In production systems, the user prompt is a small fraction of what the model actually sees: typically 80–90% of the context window is filled by retrieved documents, conversation history, tool definitions, and system instructions. Optimizing only the prompt is like tuning the radio while ignoring the engine.
Key insight: Prompt engineering addresses only one of eight components that enter an LLM’s context window. The other seven — system prompt, history, RAG docs, tool schemas, few-shot examples, memory, and metadata — are where production quality is won or lost.
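To make the proportions concrete, here is a minimal sketch of assembling all eight components into one context string. Every function name, component format, and example value below is an illustrative assumption, not any particular framework's API.

```python
# Illustrative sketch: the user prompt is one small slice of the
# assembled context. All names and example values are hypothetical.

def assemble_context(user_prompt: str,
                     system_prompt: str,
                     history: list[str],
                     rag_docs: list[str],
                     tool_schemas: list[str],
                     few_shot: list[str],
                     memory: list[str],
                     metadata: dict[str, str]) -> str:
    """Concatenate the eight context components into one string."""
    parts = [
        system_prompt,
        "\n".join(f"{k}: {v}" for k, v in metadata.items()),
        "\n\n".join(memory),
        "\n\n".join(tool_schemas),
        "\n\n".join(few_shot),
        "\n\n".join(rag_docs),
        "\n".join(history),
        user_prompt,  # the one piece prompt engineering optimizes
    ]
    return "\n\n".join(p for p in parts if p)

context = assemble_context(
    user_prompt="Summarize the attached report.",
    system_prompt="You are a careful analyst.",
    history=["user: hi", "assistant: hello, how can I help?"],
    rag_docs=["[doc 1] Quarterly revenue rose, driven by... " * 50],
    tool_schemas=['{"name": "search", "parameters": {"query": "string"}}'],
    few_shot=["Q: example question\nA: example answer"],
    memory=["User prefers bullet-point summaries."],
    metadata={"user_tier": "pro"},
)

share = len("Summarize the attached report.") / len(context)
print(f"user prompt is {share:.1%} of the assembled context")
```

Even in this toy setup, a single moderately sized retrieved document pushes the user prompt down to a few percent of the total, which is the imbalance the key insight describes.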