Ch 8 — Training Infrastructure & Tools — Under the Hood

Axolotl configs, Unsloth setup, WandB integration, OpenAI fine-tuning API, and environment setup
A. Environment Setup — Installing the complete fine-tuning stack

1. pip install — core packages, then verify the installation
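The install step can be sketched as a pinned requirements file. The package set below is the one this chapter's tools sit on top of, but the exact pins are assumptions: match them to your CUDA and driver versions.

```text
# requirements.txt — illustrative pin-set for the fine-tuning stack;
# exact versions depend on your CUDA/driver combination.
torch>=2.2
transformers
datasets
accelerate
peft
bitsandbytes
trl
wandb
# Axolotl and Unsloth are typically installed separately, e.g.:
#   pip install axolotl[flash-attn,deepspeed]
#   pip install unsloth
```

Install with `pip install -r requirements.txt`, then verify by importing `torch` and `transformers` in a Python shell.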
2. GPU check — confirm CUDA and the NVIDIA drivers are visible
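A minimal sketch of the GPU check, using `nvidia-smi` so it degrades gracefully on CPU-only machines. Inside Python, `torch.cuda.is_available()` is the usual follow-up check once PyTorch is installed.

```python
import shutil
import subprocess

def gpu_report() -> str:
    """Report visible NVIDIA GPUs via nvidia-smi, or a CPU-only fallback string."""
    if shutil.which("nvidia-smi") is None:
        return "no nvidia-smi on PATH (CPU-only environment)"
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip() or result.stderr.strip() or "nvidia-smi gave no output"

print(gpu_report())
```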
Axolotl: define the entire training run in one YAML file.
B. Axolotl Configuration — YAML-driven fine-tuning with Axolotl

3. SFT config — LoRA + QLoRA
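A sketch of a QLoRA SFT config in Axolotl's YAML format. Key names follow recent Axolotl releases, and the base model, dataset, and hyperparameters are illustrative choices, not recommendations; compare against the example configs shipped with your installed version.

```yaml
# Illustrative Axolotl QLoRA SFT config (values are assumptions).
base_model: meta-llama/Meta-Llama-3-8B
load_in_4bit: true
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

datasets:
  - path: tatsu-lab/alpaca
    type: alpaca
val_set_size: 0.05

sequence_len: 2048
sample_packing: true
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 2.0e-4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
bf16: auto
flash_attention: true
output_dir: ./outputs/llama3-qlora
```

Launched with something like `accelerate launch -m axolotl.cli.train config.yml`; everything about the run lives in this one file.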
or
4. DPO config — alignment YAML
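For alignment, Axolotl switches into its RL pathway with a `rl: dpo` key while keeping the same YAML shape. The fragment below is a sketch: the dataset `type` names and DPO-specific keys have shifted across Axolotl versions, so verify against your version's docs.

```yaml
# Illustrative Axolotl DPO config (keys and values are assumptions).
base_model: ./outputs/llama3-qlora   # typically start from an SFT checkpoint
rl: dpo
adapter: lora
lora_r: 16
lora_alpha: 32

datasets:
  - path: Intel/orca_dpo_pairs
    type: chatml.intel               # chosen/rejected preference pairs

micro_batch_size: 1
gradient_accumulation_steps: 8
num_epochs: 1
learning_rate: 5.0e-6
output_dir: ./outputs/llama3-dpo
```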
C. Unsloth: Speed-Optimized Training — 2x faster LoRA/QLoRA with custom kernels

5. Unsloth SFT — fast LoRA training
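A sketch of an Unsloth LoRA run, following the `FastLanguageModel` pattern from Unsloth's examples paired with TRL's `SFTTrainer`. The imports are deferred into the function because this needs a CUDA GPU plus the `unsloth`, `trl`, and `datasets` packages; also note that argument names like `dataset_text_field` track trl ~0.8 and have since moved into `SFTConfig` in newer releases.

```python
def train_fast_lora(output_dir: str = "./unsloth-out"):
    """Sketch of a fast LoRA run with Unsloth's patched kernels (assumptions noted above)."""
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    # Unsloth patches the model with its fused kernels at load time.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",  # pre-quantized 4-bit weights
        max_seq_length=2048,
        load_in_4bit=True,
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
        use_gradient_checkpointing="unsloth",  # Unsloth's memory-saving variant
    )
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=load_dataset("tatsu-lab/alpaca", split="train"),
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            learning_rate=2e-4,
            output_dir=output_dir,
        ),
    )
    trainer.train()
    return model, tokenizer
```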
then export:

6. Export model — GGUF, merged weights, or the Hugging Face Hub
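The three export targets map onto Unsloth's saving helpers. The method names below come from Unsloth's documentation, but the repo names and quantization choice are placeholders; verify both against your installed version. The import-free function only runs when called, on a machine with a trained Unsloth model.

```python
def export_model(model, tokenizer, repo_id=None):
    """Sketch of the three export routes for an Unsloth-trained model."""
    # 1) Merge the LoRA adapter into the base weights, saved in 16-bit HF format.
    model.save_pretrained_merged("merged-16bit", tokenizer, save_method="merged_16bit")
    # 2) Quantized GGUF for llama.cpp / Ollama (quantization method is illustrative).
    model.save_pretrained_gguf("gguf-out", tokenizer, quantization_method="q4_k_m")
    # 3) Optionally push the merged model to the Hugging Face Hub.
    if repo_id is not None:
        model.push_to_hub_merged(repo_id, tokenizer, save_method="merged_16bit")
```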
WandB: automatic metric logging with one line of config.
D. Experiment Tracking & Managed APIs — WandB setup and the OpenAI fine-tuning API

7. WandB setup — login + config
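Login plus config is genuinely all there is: authenticate once in the shell, then point Axolotl at a project. The key names below are Axolotl's WandB keys; the project, entity, and run names are hypothetical placeholders.

```yaml
# In the shell first: `wandb login` (or set the WANDB_API_KEY env var).
# Then WandB logging is a few lines in the Axolotl config:
wandb_project: llama3-finetune   # hypothetical project name
wandb_entity: my-team            # hypothetical team/entity
wandb_name: qlora-r16-lr2e4      # run name; encode the hyperparameters you sweep
wandb_log_model: "false"         # "checkpoint" uploads checkpoints as artifacts
```

Once `wandb_project` is set, loss curves, learning-rate schedules, and system metrics stream to the dashboard with no further code.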
or
8. OpenAI API — managed fine-tuning
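The managed route replaces the whole local stack with two API calls: upload a JSONL chat dataset, then create a fine-tuning job. This sketch uses the `openai` Python client (import deferred, since it needs the package and an `OPENAI_API_KEY`); the model name shown is one example of a fine-tunable model, not a recommendation.

```python
def launch_managed_finetune(train_path, model="gpt-4o-mini-2024-07-18"):
    """Upload a training file and start a managed fine-tune; returns the job id."""
    from openai import OpenAI

    client = OpenAI()
    # Training file: one {"messages": [...]} chat example per JSONL line.
    with open(train_path, "rb") as f:
        upload = client.files.create(file=f, purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=upload.id,
        model=model,
        hyperparameters={"n_epochs": 3},
    )
    return job.id  # poll with client.fine_tuning.jobs.retrieve(job.id)
```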
E. Reproducibility & Checkpointing — Docker, seeds, and checkpoint management

9. Docker — reproducible environment
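A reproducible environment pins the image, the Python dependencies, and the seed-relevant env vars in one place. The Dockerfile below is a sketch: the base-image tag is an assumption and must match the CUDA version your host driver supports.

```dockerfile
# Illustrative Dockerfile; the base-image tag is an assumption — pick one
# compatible with your host's NVIDIA driver.
FROM pytorch/pytorch:2.3.1-cuda12.1-cudnn8-runtime

# Fix hash randomization for reproducibility; quiet the tokenizers fork warning.
ENV PYTHONHASHSEED=0 \
    TOKENIZERS_PARALLELISM=false

WORKDIR /workspace
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "-m", "axolotl.cli.train", "config.yml"]
```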
+
10. Checkpoints — resume + spot-instance recovery
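Spot recovery reduces to finding the newest checkpoint after a preemption and resuming from it. The helper below assumes the `checkpoint-N` directory layout that Hugging Face Trainer (and tools built on it, like Axolotl) write; its result is what you would pass to `trainer.train(resume_from_checkpoint=...)`.

```python
import os
import re

def latest_checkpoint(output_dir):
    """Return the highest-step `checkpoint-N` subdirectory, or None if absent.

    Useful at startup on a spot instance: if it returns a path, resume from it
    instead of training from scratch.
    """
    if not os.path.isdir(output_dir):
        return None
    steps = [
        int(m.group(1))
        for d in os.listdir(output_dir)
        if (m := re.fullmatch(r"checkpoint-(\d+)", d))
    ]
    if not steps:
        return None
    return os.path.join(output_dir, f"checkpoint-{max(steps)}")
```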