The Analogy
You think there’s a 30% chance of rain (prior belief). Then you see dark clouds (evidence). Bayes’ theorem tells you how to update your belief to get the posterior: P(rain | clouds). It balances what you believed before with how likely the evidence is under each scenario.
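The rain analogy can be run as a quick calculation. The likelihoods below (how often you'd see dark clouds on rainy vs. dry days) are made-up numbers for illustration, not from the original example:

```python
# Hypothetical numbers for the rain analogy (assumed for illustration):
P_rain = 0.30               # prior: 30% chance of rain
P_clouds_given_rain = 0.80  # assumed: dark clouds on 80% of rainy days
P_clouds_given_dry = 0.20   # assumed: dark clouds on 20% of dry days

# Evidence: P(clouds) = P(clouds|rain)*P(rain) + P(clouds|dry)*P(dry)
P_clouds = P_clouds_given_rain * P_rain + P_clouds_given_dry * (1 - P_rain)

# Posterior: P(rain | clouds)
P_rain_given_clouds = P_clouds_given_rain * P_rain / P_clouds
print(round(P_rain_given_clouds, 3))  # 0.632
```

Seeing clouds roughly doubles your belief in rain, from 30% to about 63%: the evidence is four times likelier under rain than under dry weather, so the prior shifts accordingly.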
Key insight: the medical test paradox. Even with a highly accurate test (99% sensitivity, 5% false-positive rate), if only 1% of people have the disease, a positive result means only a ~17% chance of actually having it. Bayes' theorem reveals this counterintuitive truth: the base rate (prior) matters enormously.
Worked Example
# Bayes' Theorem:
# P(A|B) = P(B|A) × P(A) / P(B)
# Medical test (continuing from Step 3):
P_disease = 0.01
P_pos_given_disease = 0.99
P_pos_given_healthy = 0.05
# P(positive) = P(pos|D)×P(D) + P(pos|H)×P(H)
P_pos = 0.99*0.01 + 0.05*0.99 # 0.0594
# P(disease | positive)
P_disease_given_pos = (0.99 * 0.01) / 0.0594
# = 0.167 — only 16.7%!
# NOT 99%! The base rate matters.
Formula: P(A|B) = P(B|A) × P(A) / P(B), i.e. Posterior = Likelihood × Prior / Evidence. This is the engine behind spam filters, medical AI, and Bayesian neural networks.
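The same update can be packaged as a reusable function, which also shows the key trick behind those applications: today's posterior becomes tomorrow's prior. The helper name `bayes_update` is mine, not from the original; the numbers are the medical test from the worked example, plus a hypothetical second independent positive test:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return posterior P(H | evidence) via Bayes' theorem."""
    # Evidence: P(E) = P(E|H)*P(H) + P(E|not H)*P(not H)
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# One positive result (disease prevalence 1%, sensitivity 99%, FPR 5%):
post1 = bayes_update(0.01, 0.99, 0.05)
print(round(post1, 3))  # 0.167

# Chaining: feed the posterior back in as the new prior.
# A second, independent positive test (hypothetical) is much more convincing:
post2 = bayes_update(post1, 0.99, 0.05)
print(round(post2, 3))  # 0.798
```

One weak positive barely moves the needle against a 1% base rate, but two independent positives push the probability near 80%, which is why real systems accumulate evidence rather than trusting a single signal.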