How It Works
The perceptron is the simplest possible neural network — a single artificial neuron. It takes multiple inputs, multiplies each by a learned weight, adds a bias, and passes the result through a step function. If the sum is at or above zero, output 1; otherwise, output 0. (The bias plays the role of a movable threshold: a threshold of θ is the same as a bias of −θ with the cutoff at zero.)
# Perceptron computation
Inputs:  x = [x₁, x₂, ..., xₙ]
Weights: w = [w₁, w₂, ..., wₙ]
Bias: b
# Step 1: Weighted sum
z = w₁x₁ + w₂x₂ + ... + wₙxₙ + b
# Step 2: Step function
y = 1 if z ≥ 0
y = 0 if z < 0
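The two steps above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation; the function name `perceptron` and the example weights (chosen so the neuron computes a logical AND) are assumptions for the sake of the demo.

```python
def perceptron(x, w, b):
    """Return 1 if the weighted sum plus bias is non-negative, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # Step 1: weighted sum
    return 1 if z >= 0 else 0                     # Step 2: step function

# Hand-picked weights that make the neuron compute AND on two binary inputs:
# z is non-negative only when both inputs are 1 (1 + 1 - 1.5 = 0.5).
w = [1.0, 1.0]
b = -1.5
print(perceptron([1, 1], w, b))  # 1
print(perceptron([1, 0], w, b))  # 0
```

Note that everything the neuron "knows" lives in `w` and `b`; the learning rule below exists precisely so these values don't have to be picked by hand.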
The Learning Rule
Rosenblatt’s key innovation: the perceptron learns its own weights from labeled examples. For each misclassified example, nudge the weights in the direction that reduces the error. Repeat until all training examples are classified correctly.
# Perceptron learning rule
For each training example (x, target):
    prediction = perceptron(x)
    error = target - prediction
    if error ≠ 0:
        w = w + η × error × x
        b = b + η × error
# η = learning rate (small, e.g. 0.01)
# Convergence theorem: if data is linearly
# separable, this WILL find a solution
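Here is the learning rule as a runnable sketch, training a perceptron from zero weights on the AND function (a linearly separable problem, so the convergence theorem applies). The names `predict`, `train`, `eta`, and `max_epochs` are illustrative choices, not from the original text.

```python
def predict(x, w, b):
    """Step-function perceptron: 1 if the weighted sum + bias >= 0, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else 0

def train(data, n_inputs, eta=0.1, max_epochs=100):
    """Apply the perceptron learning rule until no example is misclassified."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, target in data:
            error = target - predict(x, w, b)  # +1, -1, or 0
            if error != 0:
                # Nudge weights and bias toward the correct answer
                w = [wi + eta * error * xi for wi, xi in zip(w, x)]
                b += eta * error
                mistakes += 1
        if mistakes == 0:  # every example classified correctly: converged
            break
    return w, b

# AND is linearly separable, so convergence is guaranteed.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_data, n_inputs=2)
print([predict(x, w, b) for x, _ in and_data])  # [0, 0, 0, 1]
```

Try replacing `and_data` with the XOR truth table: XOR is not linearly separable, so the loop exhausts `max_epochs` without ever reaching zero mistakes — the famous limitation Minsky and Papert highlighted.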
1958 hype: The New York Times reported the perceptron as a machine that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” The reality was far more modest — but the learning principle was genuinely revolutionary.