The Collapse
Expert systems proved brittle, expensive to maintain, and unable to learn. Every new scenario required manual rule authoring by scarce domain experts. The specialized Lisp machine market collapsed as cheaper general-purpose workstations caught up.
Japan’s Fifth Generation project ended in 1992 without achieving its goals. DARPA’s Strategic Computing Initiative was cancelled. The AI industry contracted sharply.
Seeds of Revival
Quietly, important work continued:
1989: Yann LeCun demonstrates CNNs for handwriting recognition (LeNet)
1992: Gerald Tesauro’s TD-Gammon masters backgammon via reinforcement learning
1997: IBM’s Deep Blue defeats Garry Kasparov in chess
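The TD-Gammon entry above rested on temporal-difference learning. As a rough illustration only: TD-Gammon itself trained a neural network with TD(lambda) through self-play, but the core update it built on is the simple tabular TD(0) rule sketched here (all state names are hypothetical).

```python
# Tabular TD(0) sketch: the core temporal-difference idea behind
# TD-Gammon (which used TD(lambda) with a neural network, not a table).

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=1.0):
    """Move V[state] toward the bootstrapped target r + gamma * V[next_state]."""
    target = reward + gamma * V.get(next_state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * (target - V.get(state, 0.0))
    return V

V = {}
V = td0_update(V, "s0", 0.0, "s1")        # V["s1"] unknown, so target is 0
V = td0_update(V, "s1", 1.0, "terminal")  # reward observed, V["s1"] moves toward 1
```

The key point is that no human authors any evaluation rules: the value estimates improve purely from observed rewards, in direct contrast to the hand-coded expert systems described above.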
// Why expert systems failed
Problem 1: Knowledge bottleneck. Every rule was hand-coded by human experts; the system could not learn from data or adapt.
Problem 2: Brittleness. It worked perfectly within its rules but failed catastrophically outside them.
Problem 3: Maintenance cost. R1/XCON grew to 17,500 rules and became nearly impossible to update.
Problem 4: No common sense. It could not handle situations outside the narrow domain of its rules.
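The knowledge bottleneck and brittleness above can be made concrete with a toy rule engine. This is a minimal sketch, not how R1/XCON was implemented; all rules and names are hypothetical.

```python
# Toy hand-coded rule system illustrating the failure modes above.
# Every rule must be authored by a human; nothing is learned from data.

RULES = [
    # (condition, conclusion) pairs, each written by hand.
    (lambda facts: facts.get("temp_c", 0.0) > 38.0, "fever"),
    (lambda facts: facts.get("cough", False) and facts.get("fatigue", False),
     "possible flu"),
]

def diagnose(facts):
    """Fire the first matching rule; there is no fallback reasoning."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    # Brittleness: any input outside the authored rules is simply unhandled.
    return "no rule applies"

print(diagnose({"temp_c": 39.2}))  # in-domain input matches a rule: "fever"
print(diagnose({"rash": True}))    # out-of-domain input: "no rule applies"
```

Scaling this approach to thousands of interacting rules is exactly the maintenance problem R1/XCON ran into: every new scenario means another hand-written entry in RULES, and no amount of data changes the system's behavior.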
Backpropagation survived: Despite the winter, Hinton, LeCun, and Bengio kept neural network research alive. LeCun’s 1989 LeNet could read handwritten zip codes — the first practical CNN. These researchers would later be called the “Godfathers of Deep Learning” and share the 2018 Turing Award.