Ch 8 — AI Governance & Regulation

EU AI Act, NIST AI RMF, ISO 42001, corporate governance, and the global regulatory landscape
The Global Regulatory Landscape
How different regions approach AI regulation
Regional Approaches
AI regulation is developing rapidly worldwide, with fundamentally different approaches:

European Union — the EU AI Act (2024) is the world's first comprehensive AI law. Risk-based approach with binding requirements and penalties.
United States — sector-specific approach with no comprehensive federal AI law. Executive orders, NIST frameworks, and state-level legislation (California, Colorado, Illinois).
China — has enacted specific regulations for algorithmic recommendations (2022), deepfakes (2023), and generative AI (2023). Focuses on content control and social stability.
United Kingdom — "pro-innovation" approach with sector-specific regulation rather than a single AI law. Regulators (FCA, Ofcom, CMA) apply existing frameworks.
Canada — proposed AIDA (Artificial Intelligence and Data Act) as part of Bill C-27.
Japan — light-touch, voluntary guidelines favoring innovation.
Global Comparison
// Global AI regulation approaches

EU: Comprehensive, risk-based
  EU AI Act (binding law)
  Penalties: up to €35M or 7% turnover
  // World's first AI-specific law

US: Sector-specific, voluntary
  No federal AI law
  NIST AI RMF (voluntary)
  State laws (CA, CO, IL)
  // Innovation-first approach

China: Content-focused, binding
  Algorithmic recommendation rules
  Deepfake regulations
  Generative AI measures
  // Social stability focus

UK: Pro-innovation, sector-specific
  No single AI law
  Existing regulators adapt
  // Flexibility over prescription

Japan: Light-touch, voluntary
  Guidelines, not laws
  // Most permissive approach

Trend: Convergence toward risk-based frameworks with sector-specific rules
Key insight: The EU is setting the global standard through the “Brussels effect” — companies building AI for the EU market must comply with the AI Act, and many adopt these standards globally rather than maintaining separate systems. Even US companies are preparing for EU AI Act compliance.
The EU AI Act Deep Dive
Risk tiers, obligations, and timeline
Risk Tiers
The EU AI Act classifies AI systems into four risk tiers:

Unacceptable risk (banned) — social scoring, manipulative AI targeting vulnerabilities, real-time biometric identification for law enforcement, emotion recognition in workplaces/schools, untargeted biometric data scraping. Prohibitions effective Feb 2025.
High risk — AI in critical areas: medical devices, employment screening, credit scoring, education, law enforcement, migration. Must comply with data governance, documentation, transparency, human oversight, and accuracy requirements. Conformity assessment required.
Limited risk — transparency obligations only. Must disclose AI interaction (chatbots), label AI-generated content (deepfakes), and disclose emotion recognition systems.
Minimal risk — no requirements. Most AI systems fall here (spam filters, recommendation engines, video games).
EU AI Act Structure
// EU AI Act risk tiers

BANNED (Unacceptable Risk):
  ✗ Social scoring
  ✗ Manipulative AI
  ✗ Real-time biometric ID (police)
  ✗ Emotion recognition (work/school)
  ✗ Untargeted biometric scraping
  // Effective: Feb 2025

HIGH RISK (Strict Requirements):
  Medical devices, hiring AI,
  credit scoring, education,
  law enforcement
  Requirements:
    Data governance system
    Technical documentation
    Transparency to users
    Human oversight mechanism
    Accuracy/robustness testing
    Conformity assessment
  // Effective: Aug 2026

LIMITED RISK (Transparency):
  Chatbots: disclose AI interaction
  Deepfakes: label as AI-generated
  Emotion recognition: disclose use

MINIMAL RISK (No Requirements):
  Spam filters, games, recommendations

GPAI (General-Purpose AI):
  Special rules for foundation models
  Systemic risk assessment for large
  models (>10^25 FLOPs training)
  // Effective: Aug 2025
Key insight: The EU AI Act's "general-purpose AI" (GPAI) provisions are particularly significant: foundation models like GPT-4 face special obligations including technical documentation, training data summaries, and systemic risk assessments. Models trained with more than 10^25 FLOPs face the strictest requirements.
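To make the tiering concrete, here is a minimal Python sketch of an internal triage helper. The use-case tags and tier labels are illustrative assumptions, not the Act's legal definitions; real classification must follow the Act's text and annexes.

# Minimal sketch of an EU AI Act triage helper.
# Use-case tags are illustrative assumptions, not legal definitions.

PROHIBITED = {
    "social_scoring", "manipulative_ai",
    "realtime_biometric_id_police",
    "emotion_recognition_work_school",
    "untargeted_biometric_scraping",
}
HIGH_RISK = {
    "medical_device", "employment_screening", "credit_scoring",
    "education", "law_enforcement", "migration",
}
LIMITED_RISK = {"chatbot", "deepfake_generator", "emotion_recognition_other"}

def classify(use_case):
    """Map an internal use-case tag to an EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable: do not build or deploy"
    if use_case in HIGH_RISK:
        return "high: full obligations + conformity assessment"
    if use_case in LIMITED_RISK:
        return "limited: transparency obligations"
    return "minimal: no AI Act requirements"

print(classify("credit_scoring"))  # high: full obligations + conformity assessment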
NIST AI Risk Management Framework
The US voluntary framework for AI risk management
Framework Structure
The NIST AI RMF (version 1.0, January 2023) is a voluntary framework for managing AI risks, organized around four core functions:

Govern — establish policies, roles, and a culture of responsible AI. Define risk tolerances and accountability structures.
Map — identify and understand AI risks in context. Categorize AI systems, identify stakeholders, and assess potential impacts.
Measure — analyze and quantify AI risks. Use metrics, benchmarks, and testing to evaluate trustworthiness characteristics (fairness, privacy, security, transparency).
Manage — prioritize and respond to AI risks. Implement mitigations, monitor effectiveness, and communicate residual risks.

While voluntary, the NIST AI RMF is widely adopted by US companies and referenced by federal agencies. It complements the EU AI Act rather than competing with it.
NIST AI RMF Functions
// NIST AI RMF core functions

GOVERN:
  Policies and procedures
  Roles and responsibilities
  Risk tolerance definition
  Organizational culture
  // Foundation for everything else

MAP:
  Identify AI system context
  Categorize by risk level
  Identify stakeholders
  Assess potential impacts
  Document assumptions/limitations
  // Know your risks before measuring

MEASURE:
  Quantify risks with metrics
  Benchmark against standards
  Test for bias, security, privacy
  Evaluate trustworthiness
  // Data-driven risk assessment

MANAGE:
  Prioritize risks
  Implement mitigations
  Monitor effectiveness
  Communicate residual risks
  Continuous improvement
  // Ongoing, not one-time

Status:
  Voluntary but widely adopted
  Referenced by federal agencies
  Complementary to EU AI Act
Key insight: NIST AI RMF’s “Govern” function is the most important and most overlooked. Without organizational policies, culture, and accountability structures, the other functions (Map, Measure, Manage) lack a foundation. Start with governance.
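Of the four functions, Measure is the most amenable to automation. As a hedged illustration (not an official NIST artifact), the sketch below computes a demographic parity gap, one of many trustworthiness metrics a Measure step might track; the threshold for flagging is an assumption you set from your own risk tolerance.

# Illustrative Measure-step metric: demographic parity difference.
# Not an official NIST AI RMF artifact; the flagging threshold is yours to set.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Max difference in positive-prediction rate across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"])
print(rates)  # {'a': 0.75, 'b': 0.25}
print(gap)    # 0.5 -> flag for review if above your risk tolerance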
ISO/IEC 42001
The first global AI management system standard
What It Is
ISO/IEC 42001:2023 is the first international standard for AI management systems. It provides a framework for organizations to establish, implement, maintain, and improve their AI management. Key features:

Certifiable — unlike the NIST AI RMF, organizations can get ISO 42001 certification from accredited bodies. This provides third-party validation.
Management system approach — follows the familiar ISO structure (like ISO 27001 for information security) and the Plan-Do-Check-Act cycle.
Risk-based — requires AI risk assessment and treatment.
Lifecycle coverage — covers the entire AI lifecycle from design to decommissioning.
Complementary — designed to work alongside ISO 27001 (security), ISO 27701 (privacy), and sector-specific regulations.

Organizations can build a single program that satisfies ISO 42001, NIST AI RMF, and EU AI Act requirements simultaneously.
ISO 42001 vs Others
// Comparing AI governance frameworks

ISO 42001:
  Type: International standard
  Certifiable: Yes (third-party audit)
  Scope: AI management system
  Approach: Plan-Do-Check-Act
  // Best for: proving compliance

NIST AI RMF:
  Type: Voluntary framework
  Certifiable: No
  Scope: AI risk management
  Approach: Govern-Map-Measure-Manage
  // Best for: US organizations

EU AI Act:
  Type: Binding law
  Certifiable: Conformity assessment
  Scope: AI systems in EU market
  Approach: Risk-based tiers
  // Best for: legal compliance

Strategy: Build ONE program that satisfies all
  ISO 42001 as the management system
  NIST AI RMF for risk methodology
  EU AI Act for legal requirements
  // Unified, not duplicated
Key insight: Don’t build separate programs for each framework. ISO 42001, NIST AI RMF, and the EU AI Act are complementary. Build one unified AI governance program that maps to all three. ISO 42001 certification provides the strongest proof of compliance.
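One way to keep the program unified is to maintain a single control catalog with a mapping to each framework. A minimal sketch in Python; the control names and mappings are illustrative assumptions, not an official crosswalk between the three frameworks.

# Illustrative unified control catalog; mappings are assumptions,
# not an official crosswalk between the three frameworks.
CONTROLS = {
    "ai-inventory": {
        "description": "Maintain a registry of all AI systems",
        "iso_42001": "management system planning",
        "nist_ai_rmf": "MAP",
        "eu_ai_act": "risk-tier classification input",
    },
    "human-oversight": {
        "description": "Define human oversight for high-risk systems",
        "iso_42001": "operational controls",
        "nist_ai_rmf": "MANAGE",
        "eu_ai_act": "high-risk requirement",
    },
}

def controls_for(framework):
    """Return each control's mapping under one framework."""
    return {name: c[framework] for name, c in CONTROLS.items() if framework in c}

print(controls_for("nist_ai_rmf"))
# {'ai-inventory': 'MAP', 'human-oversight': 'MANAGE'}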
Corporate AI Governance
Building an AI governance program inside your organization
Governance Structure
Effective corporate AI governance requires organizational structure, not just policies:

AI Ethics Board — a cross-functional body (engineering, legal, product, ethics, domain experts) that reviews high-risk AI projects. It should have real authority, not just advisory power.
Responsible AI Lead — a dedicated role (or team) responsible for AI governance, reporting to the C-suite.
AI Risk Assessment — every AI project goes through a risk assessment before development, categorized by risk level (mirroring the EU AI Act tiers).
Model Registry — a centralized inventory of all AI models in production, tracking purpose, data, performance, and risk level.
Incident Response — a documented process for AI failures, bias incidents, and safety issues.
Training — all employees working with AI receive ethics and governance training.

Lessons from failed ethics boards: Google dissolved its AI ethics board in one week (2019), and Microsoft disbanded its ethics team (2023). Ethics boards fail when they lack authority or executive support.
Governance Components
// Corporate AI governance

AI Ethics Board:
  Cross-functional: eng, legal, product
  Include external experts
  Real authority (not just advisory)
  Review high-risk projects
  // Must have executive backing

Risk Assessment Process:
  Every AI project → risk assessment
  Low risk: proceed with monitoring
  Medium risk: additional review
  High risk: ethics board review
  Unacceptable: don't build it

Model Registry:
  All models in production tracked
  Purpose, data, performance, risk
  Owner, last audit date
  // You can't govern what you can't see

Incident Response:
  Bias incident → investigate → fix
  Safety failure → rollback → review
  Document and share learnings

Failures to Learn From:
  Google AI ethics board: 1 week
  Microsoft ethics team: disbanded
  // Ethics without authority = theater
Key insight: AI ethics boards fail when they lack real authority. Google’s board lasted one week; Microsoft disbanded its ethics team. The lesson: governance structures must have executive backing, decision-making power, and the ability to stop or modify projects — not just advise.
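A model registry needs very little machinery to be useful. A minimal sketch, assuming an in-memory store and illustrative field names; in practice this would be a database or even a shared spreadsheet.

# Minimal model registry sketch; field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    name: str
    purpose: str
    risk_tier: str            # e.g. "minimal" | "limited" | "high"
    owner: str
    last_audit: Optional[date] = None

REGISTRY = {}

def register(record):
    REGISTRY[record.name] = record

register(ModelRecord(
    name="resume-screener-v2",
    purpose="Rank job applicants",
    risk_tier="high",         # employment use -> high risk under EU AI Act tiers
    owner="talent-eng",
))

# Governance query: high-risk models that have never been audited.
overdue = [m.name for m in REGISTRY.values()
           if m.risk_tier == "high" and m.last_audit is None]
print(overdue)  # ['resume-screener-v2']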
AI Auditing & Impact Assessments
Systematic evaluation of AI systems for compliance and harm
Types of Assessments
AI auditing evaluates whether AI systems meet legal, ethical, and technical requirements:

Algorithmic Impact Assessment (AIA) — evaluates the potential impact of an AI system on individuals and society before deployment. Required by Canada's proposed AIDA and recommended by the EU AI Act for high-risk systems.
Data Protection Impact Assessment (DPIA) — required by GDPR for high-risk data processing. Evaluates privacy risks and mitigations.
Bias Audit — systematic testing for discriminatory outcomes across protected groups. NYC Local Law 144 (2023) requires annual bias audits for automated employment decision tools.
Conformity Assessment — the EU AI Act requires high-risk AI systems to undergo conformity assessment (self-assessment or third-party) before market entry.
Third-party Audit — independent evaluation by external auditors. Provides credibility, but the AI auditing profession is still maturing.
Assessment Types
// AI auditing and assessments

Algorithmic Impact Assessment:
  When: Before deployment
  What: Potential harms to individuals
  Who: Internal team + stakeholders
  Required by: Canada AIDA, EU AI Act

DPIA (Data Protection):
  When: Before high-risk processing
  What: Privacy risks and mitigations
  Who: DPO + data team
  Required by: GDPR

Bias Audit:
  When: Before deployment + annually
  What: Discriminatory outcomes
  Who: Internal or third-party
  Required by: NYC Law 144 (hiring AI)

Conformity Assessment:
  When: Before EU market entry
  What: EU AI Act compliance
  Who: Self or notified body
  Required by: EU AI Act (high-risk)

Best Practice:
  Combine into unified assessment
  Automate where possible
  Document everything
  // Audit trail is your defense
Key insight: NYC Local Law 144 (effective July 2023) is a bellwether: it requires annual bias audits for AI used in hiring. Expect similar requirements to spread to other cities, states, and domains. Build audit capability now, before it’s legally required.
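The core computation in an LL144-style bias audit is the impact ratio: each group's selection rate divided by the highest group's rate. A hedged sketch for a simple selection outcome; the actual rules also cover scoring tools and intersectional categories, so treat this as a starting point, not a compliant audit.

# Sketch of an LL144-style impact ratio for a selection outcome.
# Real audits must follow the law's rules on categories and scoring tools.

def impact_ratios(outcomes):
    """outcomes maps group -> (selected, total_applicants)."""
    rates = {g: s / t for g, (s, t) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = impact_ratios({"group_a": (40, 100), "group_b": (22, 100)})
print(ratios)  # {'group_a': 1.0, 'group_b': 0.55}
# A common screening heuristic flags ratios below 0.8 (the "four-fifths rule").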
EU AI Act Implementation Timeline
Phased rollout from 2024 to 2027
Timeline
The EU AI Act entered into force on August 1, 2024, with a phased implementation:

Feb 2, 2025 — Prohibited AI practices take effect. Social scoring, manipulative AI, and other banned uses become illegal.
Aug 2, 2025 — GPAI (General-Purpose AI) rules take effect. Foundation model providers must comply with transparency, documentation, and systemic risk requirements.
Aug 2, 2026 — High-risk AI system requirements take effect. Full compliance required for AI in healthcare, employment, credit scoring, and other high-risk areas.
Aug 2, 2027 — Remaining provisions take effect, including requirements for high-risk AI systems that are safety components of products.

The EU has launched support mechanisms: the AI Pact (voluntary early compliance), the AI Act Service Desk (guidance), and published guidelines on prohibited practices.
Implementation Phases
// EU AI Act timeline

Aug 1, 2024: Entry into force
  // Clock starts ticking

Feb 2, 2025: Prohibitions ✓
  Social scoring → BANNED
  Manipulative AI → BANNED
  Real-time biometric ID → BANNED
  // Already in effect!

Aug 2, 2025: GPAI rules
  Foundation model obligations
  Technical documentation
  Training data summaries
  Systemic risk assessment

Aug 2, 2026: High-risk rules
  Full compliance required
  Healthcare, hiring, credit AI
  Conformity assessments
  // The big compliance deadline

Aug 2, 2027: Remaining provisions
  Safety-component AI systems
  Product-integrated AI

Support:
  AI Pact: voluntary early compliance
  AI Act Service Desk: guidance
  Published guidelines available
Key insight: The prohibited practices are already in effect (Feb 2025). If your organization uses social scoring, manipulative AI, or unauthorized biometric identification, you are already non-compliant. The high-risk deadline (Aug 2026) is the next major milestone — start preparing now.
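Because the milestones are fixed dates, they are easy to track as data. A small sketch that reports days remaining for each deadline, using the dates from the Act's published timeline:

# EU AI Act milestones as data, using the dates from the published timeline.
from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Prohibited practices apply",
    date(2025, 8, 2): "GPAI obligations apply",
    date(2026, 8, 2): "High-risk system requirements apply",
    date(2027, 8, 2): "Remaining provisions (safety components)",
}

today = date.today()
for deadline, label in sorted(MILESTONES.items()):
    days = (deadline - today).days
    status = f"{days} days remaining" if days > 0 else "already in effect"
    print(f"{deadline}  {label}: {status}")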
Building a Compliance Program
Practical steps to achieve AI governance maturity
Step-by-Step
Building an AI governance program from scratch:

Step 1: Inventory — catalog all AI systems in your organization. You can't govern what you can't see.
Step 2: Classify — categorize each system by risk level (using the EU AI Act tiers as a framework).
Step 3: Gap analysis — compare current practices against requirements (EU AI Act, NIST AI RMF, ISO 42001).
Step 4: Governance structure — establish an AI ethics board, appoint a Responsible AI lead, and define roles and responsibilities.
Step 5: Policies — create AI-specific policies for development, deployment, monitoring, and incident response.
Step 6: Technical controls — implement bias testing, explainability tools, monitoring, and guardrails.
Step 7: Training — educate all AI practitioners on ethics and governance.
Step 8: Audit — conduct regular internal audits and consider ISO 42001 certification.
Compliance Roadmap
// Building AI governance

Month 1-2: Foundation
  □ Inventory all AI systems
  □ Classify by risk level
  □ Gap analysis vs requirements
  □ Executive sponsorship secured

Month 3-4: Structure
  □ AI ethics board established
  □ Responsible AI lead appointed
  □ Roles and responsibilities defined
  □ Budget allocated

Month 5-6: Policies
  □ AI development policy
  □ AI deployment policy
  □ Incident response plan
  □ Data governance policy

Month 7-9: Technical
  □ Bias testing pipeline
  □ Explainability tools deployed
  □ Monitoring dashboards
  □ Guardrails implemented

Month 10-12: Maturity
  □ Training program launched
  □ First internal audit
  □ ISO 42001 certification prep
  □ Continuous improvement cycle
Key insight: Start with inventory and classification — most organizations don’t even know how many AI systems they have in production. You can’t govern what you can’t see. A simple spreadsheet tracking every AI system, its risk level, and its owner is more valuable than a perfect policy document.
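To make Step 1 concrete: the "simple spreadsheet" can literally be a CSV. A minimal sketch that writes an inventory and pulls out the high-risk systems; the column names and example rows are illustrative assumptions.

# Minimal AI-system inventory as a CSV; column names are illustrative.
import csv
from pathlib import Path

INVENTORY = Path("ai_inventory.csv")
FIELDS = ["system", "purpose", "risk_tier", "owner"]

rows = [
    {"system": "support-chatbot", "purpose": "Customer support",
     "risk_tier": "limited", "owner": "cx-team"},
    {"system": "credit-model-v3", "purpose": "Loan decisions",
     "risk_tier": "high", "owner": "risk-eng"},
]

with INVENTORY.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

with INVENTORY.open() as f:
    high_risk = [r["system"] for r in csv.DictReader(f)
                 if r["risk_tier"] == "high"]
print(high_risk)  # ['credit-model-v3'] -> these need the full governance path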