Ch 1 — Why AI Ethics Matters

Real-world harms, core principles, the cost of getting it wrong, and the case for responsible AI
High-level flow: Harms → Principles → Stakeholders → Frameworks → Regulation → Action
AI Harms Are Real
When algorithms cause measurable damage to people
Real-World Cases
AI systems are making high-stakes decisions about people’s lives right now, and they’re getting it wrong in ways that disproportionately harm marginalized groups.
Amazon’s hiring tool (2018): trained on historical resumes, the system learned to penalize resumes containing the word “women’s” and preferred candidates from all-male colleges. Amazon scrapped it.
COMPAS recidivism: a risk assessment algorithm used in US courts was found to falsely label Black defendants as high-risk at nearly twice the rate of white defendants.
Healthcare AI: UnitedHealth’s nH Predict system reportedly had a 90% error rate while denying elderly patients coverage for post-acute care.
Facial recognition: MIT’s Gender Shades study (Buolamwini & Gebru, 2018) found error rates of 0.8% for light-skinned men vs. 34.7% for dark-skinned women.
The Scope of Impact
// AI decisions affecting millions daily
Hiring:
  83% of companies use AI to screen resumes
  50% use AI for initial rejections
  // Many candidates never see a human
Criminal Justice:
  Risk scores influence bail, sentencing
  COMPAS: 2x false positive rate for Black defendants vs. white defendants
Healthcare:
  AI determines insurance coverage
  Diagnostic AI has racial disparities
  Dermatology AI: trained mostly on light skin, fails on dark skin
Finance:
  Credit scoring, loan approvals
  Apple Card: gave women lower limits than men with identical finances
Content:
  Recommendation algorithms amplify misinformation and extremism
Key insight: These aren’t hypothetical risks — they’re documented harms affecting real people today. AI ethics isn’t a philosophical exercise; it’s an engineering requirement with legal, financial, and human consequences.
Core Ethical Principles
The foundational values for responsible AI
Five Pillars
Most AI ethics frameworks converge on five core principles:
Fairness — AI systems should not discriminate against individuals or groups based on protected attributes (race, gender, age, disability).
Transparency — people affected by AI decisions should understand how and why decisions are made.
Accountability — there must be clear responsibility for AI outcomes, including mechanisms for redress when things go wrong.
Privacy — AI systems should respect data rights, minimize data collection, and protect personal information.
Safety — AI systems should be reliable, robust, and should not cause harm.
These principles appear in the EU AI Act, OECD AI Principles, IEEE Ethically Aligned Design, and corporate AI ethics guidelines from Google, Microsoft, and others.
Principles in Practice
// Five pillars of AI ethics
1. Fairness
   No discrimination on protected attributes
   Equal treatment ≠ equitable outcomes
   Measure and mitigate bias
2. Transparency
   Explainable decisions
   Disclose AI use to affected people
   Document model limitations
3. Accountability
   Clear ownership of AI outcomes
   Mechanisms for appeal and redress
   Audit trails for decisions
4. Privacy
   Data minimization
   Informed consent
   Right to be forgotten
5. Safety
   Robust to adversarial inputs
   Fail gracefully
   Human oversight for high-stakes
Key insight: These principles often conflict. A more transparent model may be less accurate. A fairer model may require collecting sensitive demographic data (privacy tension). Navigating these trade-offs is the core challenge of applied AI ethics.
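The privacy tension is easy to see in code. Below is a minimal Python sketch with made-up loan decisions and group labels (every name and number is an illustrative assumption, not a real dataset): measuring even the simplest fairness metric, the demographic parity gap, requires collecting the sensitive attribute in the first place.

from collections import defaultdict

def rates_by_group(decisions, groups):
    """Positive-decision rate per demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical loan decisions (1 = approved) and a sensitive attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = rates_by_group(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap = {gap:.2f}")  # 0.50
# Without the sensitive `groups` column, the gap cannot be measured
# at all: fairness auditing costs some privacy.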
The Business Case
Why ethics is good for the bottom line
Cost of Getting It Wrong
Unethical AI is expensive:
Regulatory fines — the EU AI Act imposes fines up to €35 million or 7% of global annual turnover for violations.
Lawsuits — class-action suits against biased AI in hiring, lending, and healthcare are increasing.
Reputational damage — Amazon’s biased hiring tool and Apple Card’s gender discrimination made global headlines.
Lost customers — users abandon products they don’t trust.
Employee attrition — engineers leave companies that deploy harmful AI (Google’s firing of Timnit Gebru led to significant talent loss).
The business case is clear: investing in responsible AI upfront is cheaper than cleaning up harms after deployment.
Cost Comparison
// Cost of unethical AI
Regulatory:
  EU AI Act: up to €35M or 7% turnover
  GDPR: up to €20M or 4% turnover
  US state laws: growing rapidly
Legal:
  Class-action lawsuits: $10M-$100M+
  Settlement costs
  Legal team overhead
Reputation:
  Customer trust: hard to rebuild
  Media coverage: amplifies harm
  Talent: engineers leave
Prevention cost:
  Ethics review board: ~$200K/year
  Bias testing tools: ~$50K/year
  Impact assessments: ~$100K/year
  // Total: ~$350K vs. $10M+ in damages
Key insight: Responsible AI isn’t just the right thing to do — it’s the smart thing to do. The cost of prevention ($350K/year) is a fraction of the cost of a single major incident ($10M+ in fines, lawsuits, and lost trust).
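As a back-of-the-envelope check on these figures, the break-even point is a one-line calculation. The sketch below uses the chapter’s illustrative numbers; the incident probabilities are assumptions, not empirical estimates.

PREVENTION_PER_YEAR = 350_000   # ethics board + bias testing + impact assessments
INCIDENT_COST = 10_000_000      # low-end estimate: fines, lawsuits, lost trust

# Prevention pays for itself once the annual probability of a major
# incident exceeds the break-even point.
break_even = PREVENTION_PER_YEAR / INCIDENT_COST
print(f"break-even incident probability: {break_even:.1%}")  # 3.5%

for p in (0.01, 0.05, 0.10):
    expected_loss = p * INCIDENT_COST
    print(f"p = {p:.0%}: expected annual loss without prevention = ${expected_loss:,.0f}")

On these assumptions, prevention dominates once the chance of a major incident exceeds roughly 3.5% per year.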
Who Is Responsible?
Stakeholders in the AI ethics ecosystem
Shared Responsibility
AI ethics is not one person’s job — it’s a shared responsibility across the entire AI lifecycle:
Data scientists & engineers — choose appropriate training data, test for bias, implement fairness constraints, document model limitations.
Product managers — define acceptable risk levels, ensure user disclosure, design appeal mechanisms.
Leadership — set organizational values, fund ethics infrastructure, establish governance.
Legal & compliance — interpret regulations, ensure compliance, manage risk.
Affected communities — should be consulted in design and have mechanisms for feedback and redress.
Regulators — set guardrails, enforce standards, protect the public interest.
The most common failure mode is diffusion of responsibility: everyone assumes someone else is handling ethics.
Responsibility Matrix
// Who is responsible for what?
Data Scientists / Engineers:
  ✓ Bias testing and mitigation
  ✓ Model documentation (model cards)
  ✓ Fairness metrics in evaluation
  ✓ Privacy-preserving techniques
Product Managers:
  ✓ Define acceptable risk thresholds
  ✓ User disclosure ("AI-generated")
  ✓ Appeal and redress mechanisms
  ✓ Stakeholder impact assessment
Leadership:
  ✓ Ethics review board
  ✓ Budget for responsible AI
  ✓ Organizational AI principles
  ✓ Incident response plan
Legal / Compliance:
  ✓ Regulatory compliance
  ✓ Data protection (GDPR, etc.)
  ✓ Liability assessment
  ✓ Audit trail requirements
Key insight: The most dangerous phrase in AI ethics is “that’s not my department.” Every person who touches the AI system — from data collection to deployment — has an ethical responsibility. Build ethics into the process, not as an afterthought.
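One concrete artifact from the matrix above is the model card. The sketch below is hypothetical in every field and value (real model cards, in the style of Mitchell et al., 2019, are far richer), but it shows how documentation can feed an automated gate.

# Hypothetical model card for a resume-screening model. Every field and
# value here is illustrative, not from a real system.
model_card = {
    "model": "resume-screener-v2",
    "intended_use": "Rank applications for human review; never auto-reject.",
    "out_of_scope": ["Final hiring decisions without human oversight"],
    "training_data": "2015-2023 applications; known skew toward male applicants",
    "evaluation": {
        "overall_accuracy": 0.87,
        "accuracy_by_gender": {"female": 0.81, "male": 0.89},  # document the gap
    },
    "limitations": ["Underperforms on resumes with career gaps"],
    "owner": "ml-platform-team",
    "last_reviewed": "2025-06-01",
}

# A reviewer (or a CI check) can then enforce a simple gate:
by_gender = model_card["evaluation"]["accuracy_by_gender"]
gap = abs(by_gender["female"] - by_gender["male"])
assert gap < 0.10, "accuracy gap exceeds the team's documented threshold"

A gate like this turns “document model limitations” from a good intention into something continuous integration can enforce.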
The AI Ethics Landscape
Frameworks, organizations, and standards
Key Frameworks
Multiple organizations have published AI ethics frameworks:
OECD AI Principles (2019) — adopted by 46 countries, emphasizing inclusive growth, human-centered values, transparency, robustness, and accountability.
EU AI Act (2024) — the world’s first comprehensive AI law, using a risk-based approach with four tiers from minimal to unacceptable risk.
IEEE Ethically Aligned Design — technical standards for ethical AI, including well-being metrics and data agency.
NIST AI Risk Management Framework (2023) — US voluntary framework for managing AI risks across the lifecycle.
Corporate frameworks — Google, Microsoft, Meta, and others publish their own AI principles, though enforcement varies.
Framework Comparison
// Major AI ethics frameworks
OECD AI Principles (2019):
  Scope: 46 countries
  Type: Voluntary guidelines
  Focus: Human-centered values
EU AI Act (2024):
  Scope: All AI touching EU citizens
  Type: Binding regulation
  Focus: Risk-based classification
  Penalty: €35M or 7% turnover
NIST AI RMF (2023):
  Scope: US (voluntary)
  Type: Risk management framework
  Focus: Govern, Map, Measure, Manage
IEEE EAD:
  Scope: Global technical standards
  Type: Best practices
  Focus: Well-being, data agency
Corporate (Google, Microsoft, etc.):
  Scope: Internal
  Type: Self-regulation
  Focus: Varies by company
Key insight: The EU AI Act is the most consequential development in AI ethics. Like GDPR for data privacy, it will set the global standard because any company whose AI touches EU citizens must comply — regardless of where the company is headquartered.
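The Act’s risk-based classification lends itself to a simple lookup table. In the sketch below, the four tier names match the Act, while the example systems and obligation summaries are simplified assumptions for illustration, not legal guidance.

# Simplified sketch of the EU AI Act's four risk tiers.
EU_AI_ACT_TIERS = {
    "unacceptable": {
        "obligation": "prohibited outright",
        "examples": ["social scoring", "cognitive manipulation"],
    },
    "high": {
        "obligation": "conformity assessment, human oversight, logging, transparency",
        "examples": ["hiring tools", "credit scoring", "medical devices"],
    },
    "limited": {
        "obligation": "transparency (disclose that users are interacting with AI)",
        "examples": ["chatbots", "deepfake generators"],
    },
    "minimal": {
        "obligation": "no mandatory obligations",
        "examples": ["spam filters", "game AI"],
    },
}

def obligations_for(tier: str) -> str:
    """Look up the compliance obligations for a risk tier."""
    return EU_AI_ACT_TIERS[tier]["obligation"]

print(obligations_for("high"))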
The Ethics Timeline
How we got here and where we’re going
Key Milestones
2016: ProPublica’s investigation of COMPAS reveals racial bias in criminal justice AI.
2018: Amazon scraps its biased hiring tool; MIT’s Gender Shades study (Buolamwini & Gebru) exposes facial recognition disparities and becomes a landmark in bias research.
2019: OECD AI Principles adopted by 46 countries.
2020: Google fires Timnit Gebru over a paper on LLM risks, sparking industry-wide debate.
2023: NIST AI RMF published; ChatGPT and generative AI raise new ethical questions about hallucination, copyright, and deepfakes.
2024: EU AI Act enters into force (August 1).
2025: Prohibited AI practices banned (February); general-purpose AI obligations apply (August).
2026: High-risk AI obligations fully enforceable (August 2).
EU AI Act Timeline
// EU AI Act implementation timeline
Aug 2024: Act enters into force
Feb 2025: Prohibited practices banned
  ✗ Cognitive manipulation
  ✗ Social scoring
  ✗ Predictive policing
  ✗ Emotion recognition at work
Aug 2025: GPAI obligations apply
  General-purpose AI models
  Transparency requirements
  Copyright compliance
Aug 2026: High-risk AI obligations
  Mandatory testing & auditing
  Human oversight requirements
  Transparency to affected persons
  Conformity assessments
// Applies to ANY company whose AI touches EU citizens, worldwide
Key insight: We are in a critical transition period. By August 2026, high-risk AI obligations will be fully enforceable. Organizations that haven’t started preparing for compliance are already behind.
Common Misconceptions
What AI ethics is NOT
Myths vs. Reality
Myth: “AI is objective.” Reality: AI reflects the biases in its training data and the choices of its designers. An algorithm trained on biased historical data will reproduce and amplify those biases.
Myth: “Ethics slows down innovation.” Reality: ethical failures slow down innovation far more — through lawsuits, regulatory action, and lost trust. Building ethics in from the start is faster than retrofitting.
Myth: “We just need more data.” Reality: more biased data produces more confident biased predictions. Data quality and representativeness matter more than quantity.
Myth: “Ethics is subjective.” Reality: while some ethical questions are debatable, many harms are measurable (disparate impact, accuracy gaps across demographics) and can be addressed with engineering rigor (see the sketch at the end of this step).
Myth Busting
// Common AI ethics misconceptions
Myth: "AI is objective"
  Reality: AI amplifies human biases in training data and design choices
Myth: "Ethics slows innovation"
  Reality: Ethical failures cost more
    Amazon hiring tool: years wasted
    EU AI Act fines: up to 7% revenue
Myth: "More data fixes bias"
  Reality: More biased data = more bias
    Need representative, balanced data
Myth: "Ethics is subjective"
  Reality: Many harms are measurable
    Disparate impact: quantifiable
    Accuracy gaps: testable
    Privacy violations: auditable
Myth: "It's the algorithm's fault"
  Reality: Humans choose the data, the objective, and the deployment
Key insight: The most dangerous misconception is that AI is neutral. Every design choice — what data to collect, what to optimize for, who to test on — embeds values. There is no value-free AI. The question is whether those values are intentional or accidental.
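“Measurable” is not hand-waving: the basic checks fit in a few lines of Python. The sketch below uses made-up screening outcomes; the 0.8 threshold is the four-fifths rule, a common disparate-impact screen in US employment practice.

def selection_rate(decisions):
    """Fraction of positive decisions (1 = selected)."""
    return sum(decisions) / len(decisions)

# Hypothetical screening outcomes per group (1 = advanced to interview).
group_a = [1, 1, 0, 1, 0, 1, 1, 1]   # rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # rate 0.25

# Disparate impact ratio: lower-rate group vs. higher-rate group.
ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio = {ratio:.2f}")  # 0.33 < 0.8 -> flag for review

# Accuracy gaps across demographics are just as testable: compute
# accuracy(preds, labels) separately per group and compare.
def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)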
Course Roadmap
What you’ll learn in this course
Course Overview
This course covers AI ethics from principles to practice:
Chapters 1–4 (Foundations): why ethics matters, how bias enters AI systems, formal fairness definitions and metrics, and practical bias mitigation techniques.
Chapters 5–7 (Transparency & Privacy): explainability and interpretability (SHAP, LIME), privacy and data rights (GDPR, differential privacy, federated learning), and LLM-specific ethical challenges (hallucination, copyright, deepfakes, alignment).
Chapters 8–10 (Governance & Regulation): the global regulatory landscape (EU AI Act, US policy), AI safety and alignment, and building ethical AI in practice (governance frameworks, impact assessments, ethics review boards).
Chapter Map
// AI Ethics & Responsible AI — 10 chapters
Section 1: Foundations
  Ch 1: Why AI Ethics Matters   ← you are here
  Ch 2: Bias in AI Systems
  Ch 3: Fairness Definitions & Metrics
  Ch 4: Bias Mitigation Techniques
Section 2: Transparency & Privacy
  Ch 5: Explainability & Interpretability
  Ch 6: Privacy & Data Rights
  Ch 7: LLM-Specific Ethics
Section 3: Governance & Regulation
  Ch 8: AI Regulation & Policy
  Ch 9: AI Safety & Alignment
  Ch 10: Building Ethical AI in Practice
Key insight: AI ethics is not a separate discipline — it’s a lens that should be applied to every stage of the ML lifecycle: data collection, model design, evaluation, deployment, and monitoring. This course gives you the tools to do that.