Ch 5 — Perceptrons & Neurons — Under the Hood

Perceptron math, learning rule derivation, activation function calculus, XOR solution, and MLP forward pass
A. Perceptron Internals — Weighted sum · step function · learning rule · convergence
1. Weighted Sum: z = w·x + b (dot product)
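A minimal sketch of this step (function names are illustrative, not from the chapter): the weighted sum z = w·x + b followed by a step activation, shown here wired up as a logical AND.

```python
def weighted_sum(w, x, b):
    """Compute z = w·x + b as a plain dot product plus bias."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def step(z):
    """Heaviside step activation: fire (1) iff z > 0."""
    return 1 if z > 0 else 0

# Example: a 2-input perceptron computing logical AND.
w, b = [1.0, 1.0], -1.5
print(step(weighted_sum(w, [1, 1], b)))  # 1
print(step(weighted_sum(w, [0, 1], b)))  # 0
```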
2. Learning Rule: w += η·error·x (convergence proof)
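The update rule above can be sketched as follows — a hypothetical trainer applying w += η·error·x (and b += η·error) to the linearly separable AND problem, where the convergence theorem guarantees it settles in finitely many updates:

```python
def train_perceptron(data, eta=0.1, epochs=20):
    """Perceptron learning rule: nudge w toward misclassified inputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if z > 0 else 0
            error = target - pred  # in {-1, 0, +1}; 0 means no update
            w = [wi + eta * error * xi for wi, xi in zip(w, x)]
            b += eta * error
    return w, b

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(AND)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x, _ in AND]
print(preds)  # [0, 0, 0, 1] — converged
```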
↓ From step function to smooth activations

B. Activation Functions Deep Dive — Sigmoid · tanh · ReLU · derivatives · vanishing gradients

3. Sigmoid: σ(z) = 1/(1+e⁻ᶻ) (derivative & limits)
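A short sketch of the sigmoid and its derivative σ'(z) = σ(z)(1 − σ(z)), with a numerical look at saturation — the derivative peaks at 0.25 and collapses toward zero for large |z|, which is the vanishing-gradient problem:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    """σ'(z) = σ(z)·(1 − σ(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0.0))         # 0.5 (midpoint)
print(sigmoid_prime(0.0))   # 0.25 (maximum slope)
print(sigmoid_prime(10.0))  # ~4.5e-05 — saturated: gradients vanish
```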
4. ReLU Family: max(0,z), Leaky, GELU, Swish
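The ReLU family in one sketch (the GELU here uses the common tanh approximation of z·Φ(z), an assumption rather than the chapter's exact formulation):

```python
import math

def relu(z):
    return max(0.0, z)

def leaky_relu(z, alpha=0.01):
    """Small negative slope alpha avoids 'dead' neurons for z < 0."""
    return z if z > 0 else alpha * z

def gelu(z):
    """Tanh approximation of GELU(z) = z·Φ(z)."""
    return 0.5 * z * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (z + 0.044715 * z ** 3)))

def swish(z, beta=1.0):
    """Swish(z) = z·σ(βz); smooth, non-monotonic near zero."""
    return z / (1.0 + math.exp(-beta * z))

print(relu(-2.0), leaky_relu(-2.0))  # 0.0 -0.02
```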
↓ The XOR problem: why one layer fails

C. XOR & Linear Separability — Geometric proof · decision boundaries · the impossibility

5. XOR Proof: why no single line can separate XOR
6. XOR Solution: two hidden neurons (space transformation)
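The classic two-hidden-neuron construction, sketched with step activations: h1 fires on OR, h2 fires on AND, and the output fires on "h1 AND NOT h2" — the hidden layer transforms the inputs into a space where one line suffices.

```python
def step(z):
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)   # OR neuron
    h2 = step(x1 + x2 - 1.5)   # AND neuron
    return step(h1 - h2 - 0.5)  # h1 AND NOT h2  ==  XOR

print([xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```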
↓ MLP forward pass: matrix multiplication in action

D. MLP Forward Pass — Matrix multiplication · layer-by-layer computation · parameter counting

7. Matrix Form: h = σ(Wx + b) (vectorized ops)
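One layer of the forward pass, h = σ(Wx + b), sketched as a plain matrix–vector product (the weights below are arbitrary illustrative values, not from the chapter):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(W, x, b):
    """One MLP layer: W is a list of rows (one per output neuron)."""
    return [sigmoid(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# A 2-3-1 network: two inputs -> three hidden units -> one output.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]; b1 = [0.0, 0.1, -0.1]
W2 = [[0.7, -0.5, 0.2]];                     b2 = [0.05]

x = [1.0, 0.5]
h = layer(W1, x, b1)   # hidden activations, length 3
y = layer(W2, h, b2)   # output, length 1
print(len(h), len(y))  # 3 1
```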
8. Param Count: weights + biases per layer
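Parameter counting reduces to one formula per layer — n_in·n_out weights plus n_out biases — which a short sketch makes concrete:

```python
def param_count(layer_sizes):
    """Total trainable parameters of a fully connected MLP."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

# MNIST-shaped example: 784·128 + 128  +  128·10 + 10
print(param_count([784, 128, 10]))  # 101770
```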
↓ Weight initialization & the universal approximation theorem

E. Initialization & Theory — Xavier/He init · universal approximation · expressiveness

9. Weight Init: Xavier, He, Kaiming (variance preservation)
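A sketch of the two standard schemes — Xavier/Glorot scales variance by 2/(n_in + n_out) for tanh/sigmoid layers, He/Kaiming by 2/n_in for ReLU layers — with an empirical check that the sampled variance matches the target:

```python
import math
import random

def xavier_init(n_in, n_out):
    """Glorot: Var(w) = 2 / (n_in + n_out), suited to tanh/sigmoid."""
    std = math.sqrt(2.0 / (n_in + n_out))
    return [[random.gauss(0.0, std) for _ in range(n_in)]
            for _ in range(n_out)]

def he_init(n_in, n_out):
    """He/Kaiming: Var(w) = 2 / n_in, compensates for ReLU zeroing half."""
    std = math.sqrt(2.0 / n_in)
    return [[random.gauss(0.0, std) for _ in range(n_in)]
            for _ in range(n_out)]

W = he_init(512, 256)
flat = [w for row in W for w in row]
var = sum(w * w for w in flat) / len(flat)
print(round(var, 4))  # ~0.0039, i.e. close to 2/512
```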
10. Universal Approx: any continuous function (existence theorem)
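The constructive intuition behind the theorem can be sketched numerically: a steep sigmoid approximates a step, and the difference of two shifted steps gives a localized "bump" — enough bumps can tile any continuous target to arbitrary accuracy. (The bump construction here is an illustration of the idea, not the chapter's proof.)

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bump(x, left, right, k=100.0):
    """~1 on [left, right], ~0 elsewhere: two steep sigmoids, one hidden layer."""
    return sigmoid(k * (x - left)) - sigmoid(k * (x - right))

print(round(bump(0.5, 0.25, 0.75), 3))  # 1.0  (inside the bump)
print(round(bump(0.9, 0.25, 0.75), 3))  # 0.0  (outside)
```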