How LLMs Work

From text to tokens to transformers — the complete story of large language models
Co-created by Kiran Shirol and Claude
Topics: Tokenization · Transformers · Training · Alignment · Inference
14 chapters · 5 parts
Part 1

From Text to Numbers

Tokenization, embeddings, and the attention mechanism.
Part 2

The Transformer Architecture

Blocks, scaling, and the training recipe for frontier models.
Part 3

Making LLMs Useful

Fine-tuning, alignment, and text generation mechanics.
Part 4

Under the Hood in Practice

Context windows, inference optimization, and multimodal capabilities.
Part 5

The Bigger Picture

Emergent abilities, limitations, and the LLM landscape.