Ch 13 — Ethics & Bias in AI
Fairness metrics, debiasing math, SHAP values, differential privacy, and audit frameworks
A. Fairness Metrics & Impossibility
Fairness Metrics: The Math
Demographic parity, equalized odds, calibration, disparate impact
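The metrics named here can be computed directly from binary decisions and group labels. A minimal sketch of demographic parity and the disparate-impact ratio; all data below is invented for illustration:

```python
# Hedged sketch: demographic parity difference and disparate-impact
# ratio from binary decisions. Predictions and groups are invented.
def selection_rate(preds, groups, g):
    sel = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(sel) / len(sel)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]              # hypothetical decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = selection_rate(preds, groups, "a")    # 0.75
rate_b = selection_rate(preds, groups, "b")    # 0.25
dp_gap   = rate_a - rate_b                     # demographic parity difference
di_ratio = rate_b / rate_a                     # disparate impact ratio
```

Equalized odds would condition the same rate comparison on the true label, and calibration would compare outcome frequencies per score bucket.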
Impossibility Theorem
Chouldechova & Kleinberg proofs, worked example
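One way to see the conflict numerically: Chouldechova's identity, FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), ties the error rates to the base rate p. A small sketch with made-up rates, showing that holding PPV and FNR equal across groups forces unequal FPRs whenever base rates differ:

```python
# Numeric instance of Chouldechova's identity (values are illustrative):
#   FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
def implied_fpr(p, ppv, fnr):
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = 0.8, 0.2                  # held equal across both groups
fpr_a = implied_fpr(0.5, ppv, fnr)   # group a: base rate 50% -> 0.20
fpr_b = implied_fpr(0.2, ppv, fnr)   # group b: base rate 20% -> 0.05
```

With both calibration-style and error-rate constraints imposed, the identity leaves no degree of freedom, which is the heart of the impossibility results.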
B. Debiasing Techniques
Pre-Processing Debiasing
Reweighting, resampling, disparate impact remover
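Reweighting in the style of Kamiran and Calders assigns each (group, label) cell the weight P(group)P(label)/P(group, label), so the weighted labels become independent of the protected attribute. A sketch on invented data:

```python
from collections import Counter

# Sketch of Kamiran-Calders-style reweighting; the data is invented.
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
n = len(groups)

pg  = Counter(groups)                 # marginal counts per group
py  = Counter(labels)                 # marginal counts per label
pgy = Counter(zip(groups, labels))    # joint counts

weights = [
    (pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
    for g, y in zip(groups, labels)
]

def weighted_positive_rate(g):
    wpos = sum(w for w, grp, y in zip(weights, groups, labels)
               if grp == g and y == 1)
    wtot = sum(w for w, grp, _y in zip(weights, groups, labels) if grp == g)
    return wpos / wtot
```

After reweighting, both groups' weighted positive rates equal the overall base rate, which is exactly the independence the weights were built to enforce.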
In-Processing & Post-Processing
Adversarial debiasing, constrained optimization, threshold tuning
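Of the techniques listed, threshold tuning is the simplest to sketch: pick a per-group cutoff on the model's scores so each group reaches the same selection rate. Scores and groups below are invented:

```python
# Sketch of post-processing threshold tuning: choose a per-group cutoff
# so every group hits the same target selection rate. Toy scores.
def threshold_for_rate(scores, target_rate):
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(ranked)))
    return ranked[k - 1]          # cutoff selecting about target_rate

scores_a = [0.9, 0.8, 0.6, 0.4]
scores_b = [0.7, 0.5, 0.3, 0.2]
t_a = threshold_for_rate(scores_a, 0.5)
t_b = threshold_for_rate(scores_b, 0.5)

rate_a = sum(s >= t_a for s in scores_a) / len(scores_a)
rate_b = sum(s >= t_b for s in scores_b) / len(scores_b)
```

The equalized selection rates come at the price of group-dependent cutoffs, which is the usual trade-off this family of methods makes explicit.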
C. Explainability Methods
SHAP: Shapley Values
Game-theoretic feature attribution and its computation
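Exact Shapley values are feasible to enumerate for a handful of features. A sketch over a toy additive model (the value function and inputs are invented; for an additive model the attributions recover the individual terms):

```python
from itertools import combinations
from math import factorial

# Exact Shapley values by enumerating all coalitions; v(S) is the model
# output with only the features in S "present". Exponential in n.
def shapley(v, n):
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = (factorial(len(S)) * factorial(n - len(S) - 1)
                     / factorial(n))
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy additive model f(x) = 2*x0 + 3*x1 at x = (1, 1), baseline 0:
def v(S):
    x = [1 if j in S else 0 for j in range(2)]
    return 2 * x[0] + 3 * x[1]

phi = shapley(v, 2)    # recovers [2.0, 3.0] for this additive model
```

The efficiency axiom holds by construction: the attributions sum to v(all features) minus v(empty set), which is what makes Shapley values attractive for auditing.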
LIME & Gradient Methods
Local surrogate models, saliency maps, integrated gradients
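The local-surrogate idea behind LIME can be sketched in one dimension: probe a black-box model around a point and fit a linear model to the probes. This simplification uses uniform weights rather than LIME's proximity kernel, and the black box is an invented toy:

```python
# Simplified LIME-style local surrogate: fit a linear model to the
# black box's outputs on a grid of nearby points (uniform weights;
# real LIME weights samples by proximity). Black box is a toy.
def black_box(x):
    return x * x

def local_slope(f, x0, radius=0.5, n=5):
    xs = [x0 - radius + 2 * radius * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    var = sum((x - xbar) ** 2 for x in xs)
    return cov / var        # the local linear "explanation"

slope = local_slope(black_box, 3.0)   # close to d/dx x^2 at 3, i.e. 6
```

Gradient methods such as saliency maps and integrated gradients reach a similar local explanation analytically instead of by sampling.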
D. Privacy & Data Protection
Differential Privacy
ε-DP definition, noise mechanisms, privacy budget
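The standard noise mechanism for ε-DP on a counting query (sensitivity 1) is Laplace noise with scale sensitivity/ε. A hedged sketch using inverse-CDF sampling; the count and ε below are illustrative:

```python
import math
import random

# Sketch of the Laplace mechanism: adding Laplace(sensitivity/epsilon)
# noise to a query of the given sensitivity satisfies epsilon-DP.
# Each released answer spends epsilon of the privacy budget.
def laplace_noise(scale, rng):
    u = rng.random() - 0.5                       # uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0, rng=None):
    rng = rng or random.Random()
    return true_count + laplace_noise(sensitivity / epsilon, rng)

noisy = private_count(100, epsilon=0.5, rng=random.Random(42))
```

Smaller ε means a larger noise scale and stronger privacy; under basic composition, the ε values of repeated releases add up against the total budget.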
Federated Learning & Data Governance
Training without centralizing data, GDPR compliance
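The core federated-averaging loop can be sketched with a one-parameter model: clients take gradient steps on private data and share only parameters, which the server averages. Model, data, and learning rate are toy choices:

```python
# Sketch of FedAvg: raw data never leaves a client; only the updated
# parameter is sent to the server, which averages the client updates.
def local_step(w, data, lr=0.1):
    # One gradient step on mean squared error for the model y = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg_round(w, clients, lr=0.1):
    return sum(local_step(w, data, lr) for data in clients) / len(clients)

clients = [[(1.0, 2.0), (2.0, 4.0)],     # client 1: roughly y = 2.0 x
           [(1.0, 2.1), (3.0, 6.3)]]     # client 2: roughly y = 2.1 x
w = 0.0
for _ in range(200):
    w = fedavg_round(w, clients)
# w settles between the two clients' local optima (2.0 and 2.1)
```

Keeping data local helps with data-minimization obligations under GDPR, though shared updates can still leak information, which is why federated setups are often combined with differential privacy.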
E. Audit & Governance Frameworks
Model Cards & Datasheets
Documentation standards, audit trails
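A model card can be made machine-readable so the audit trail is checkable. A minimal sketch loosely following the section headings of Mitchell et al.'s "Model Cards for Model Reporting"; every field value is a placeholder:

```python
# Minimal machine-readable model card sketch; all values are
# placeholders, and the field set is an illustrative subset.
model_card = {
    "model_details": {"name": "example-classifier", "version": "0.1"},
    "intended_use": "Illustration only; not for deployment decisions.",
    "factors": ["group", "instrument", "environment"],
    "metrics": ["accuracy", "false positive rate per group"],
    "evaluation_data": "Held-out split, described in the datasheet.",
    "training_data": "See the accompanying datasheet.",
    "ethical_considerations": "Known risks and affected groups.",
    "caveats": "Behavior outside evaluated conditions is unknown.",
}
```

Datasheets for datasets play the same role on the data side, documenting collection, composition, and intended uses.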
AI Audit Pipeline
End-to-end fairness audit, toolkits, compliance
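One automated step in such a pipeline can be sketched end to end: compute per-group selection rates and flag violations of the four-fifths rule. The threshold and data are illustrative, not legal advice:

```python
from collections import defaultdict

# Sketch of an automated fairness-audit check: per-group selection
# rates plus a four-fifths-rule flag. Data and threshold illustrative.
def audit(preds, groups, min_ratio=0.8):
    counts = defaultdict(lambda: [0, 0])      # group -> [selected, total]
    for p, g in zip(preds, groups):
        counts[g][0] += p
        counts[g][1] += 1
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return {"rates": rates, "ratio": ratio, "passed": ratio >= min_ratio}

report = audit([1, 1, 0, 1, 1, 0, 1, 0], ["a"] * 4 + ["b"] * 4)
# report["passed"] is False here: the rate ratio is 2/3, below 0.8
```

A full audit would chain checks like this with the metrics, explanations, and documentation from the earlier sections, typically via a toolkit such as AIF360 or Fairlearn.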