Ch 12 — AI Governance, Compliance & Risk Management

EU AI Act, NIST AI RMF, ISO 42001, MITRE ATLAS — real regulations with real deadlines
High Level
Classify Risk → Assess → Document → Implement → Audit → Certify
EU AI Act: The World’s First AI Law
Four-tier risk framework — penalties up to 7% of global revenue
Four Risk Tiers
The EU AI Act uses a graduated, risk-based approach:

1. Prohibited AI: Absolute ban on eight practices, including social scoring by public authorities, subliminal manipulation, and real-time biometric identification in public spaces for law enforcement

2. High-Risk AI: Permitted but subject to extensive requirements — recruitment, credit scoring, medical devices, education, law enforcement

3. Limited-Risk: Transparency obligations (must disclose AI-generated content)

4. Minimal-Risk: Voluntary compliance frameworks
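The tiered triage above can be sketched in code. This is a minimal illustration, not a legal mapping: the use-case labels below are examples drawn from the Act's categories, and real classification requires legal review against Annex III.

```python
# Illustrative EU AI Act risk-tier triage. The category lists are
# examples only; actual classification needs legal review.
PROHIBITED = {"social scoring", "subliminal manipulation",
              "realtime biometric id (law enforcement)"}
HIGH_RISK = {"recruitment", "credit scoring", "medical device",
             "education", "law enforcement"}
LIMITED_RISK = {"chatbot", "content generation"}  # transparency duties

def classify(use_case: str) -> str:
    """Return the EU AI Act risk tier for a given use-case label."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk"
    if use_case in LIMITED_RISK:
        return "limited-risk"
    return "minimal-risk"  # default tier: voluntary frameworks

print(classify("recruitment"))   # high-risk
print(classify("spam filter"))   # minimal-risk
```

Anything not explicitly prohibited, high-risk, or transparency-bound falls through to minimal-risk, mirroring the Act's default.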
Enforcement Timeline
Feb 2, 2025: Prohibited AI practices banned — already in effect
Aug 2, 2025: General-purpose AI (GPAI) obligations — already in effect
Aug 2, 2026: High-risk AI requirements fully enforceable
Aug 2, 2027: All remaining provisions apply
Penalties
Prohibited AI violations: Up to €35M or 7% of global annual turnover
High-risk AI violations: Up to €15M or 3% of global annual turnover

The Act applies extraterritorially to any organization whose AI affects EU residents, regardless of where the company is based. Documented compliance efforts are recognized as formal mitigating factors.
Key requirement for high-risk AI (Aug 2026): Data governance, risk management systems, technical documentation, human oversight, robustness testing, and post-market monitoring. Start preparing now.
NIST AI RMF & AI 600-1 GenAI Profile
Govern → Map → Measure → Manage
NIST AI RMF 1.0 (Jan 2023)
The NIST AI Risk Management Framework is the US standard for AI risk management. It organizes around four core functions:

Govern: Establish policies, roles, and accountability structures
Map: Contextualize risks — identify where AI is used, who is affected, what can go wrong
Measure: Quantify risks using testing, metrics, and red teaming (Ch 11)
Manage: Prioritize, respond to, and monitor risks over time

Voluntary but increasingly referenced by regulators, procurement requirements, and industry standards.
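The four functions become concrete when they structure a risk register. The sketch below is a hypothetical shape for such an entry; the field names and example values are our own, not NIST's.

```python
# Hypothetical risk-register entry organized around the AI RMF's four
# functions (Govern / Map / Measure / Manage). Field names are illustrative.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    system: str        # Map: where AI is used
    affected: str      # Map: who is affected
    hazard: str        # Map: what can go wrong
    metric: str        # Measure: how the risk is quantified
    score: float       # Measure: current reading (higher = better here)
    response: str      # Manage: chosen treatment
    owner: str         # Govern: accountable role

register = [
    RiskEntry(system="resume screener", affected="job applicants",
              hazard="demographic bias", metric="disparate impact ratio",
              score=0.72, response="mitigate: reweigh training data",
              owner="head of HR engineering"),
]

# Manage: surface entries whose measured score breaches a threshold
flagged = [r for r in register if r.score < 0.80]
print([r.system for r in flagged])  # ['resume screener']
```

Each column traces back to exactly one RMF function, which keeps the register auditable against the framework.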
NIST AI 600-1 (Jul 2024)
The Generative AI Profile is a companion to the AI RMF, developed in response to Executive Order 14110. It identifies 12 risks specific to generative AI and provides corresponding management actions. Covers hallucination, data privacy, CBRN risks, content provenance, and more.

Source: doi.org/10.6028/NIST.AI.600-1
EO 14110 (Oct 2023)
Executive Order 14110 established federal requirements for AI safety: standardized testing, risk mitigation, content provenance, and post-deployment monitoring. Note: EO 14110 was rescinded on January 20, 2025. However, the NIST frameworks it spawned (AI RMF, AI 600-1, ARIA) continue as voluntary standards and are widely adopted.
Practical impact: Even without the EO mandate, NIST AI RMF is becoming the de facto US standard. Federal procurement, insurance underwriting, and enterprise vendor assessments increasingly require NIST alignment.
ISO/IEC 42001:2023 — AI Management Systems
The world’s first certifiable AI standard
What It Is
ISO/IEC 42001:2023 is the first international standard for AI Management Systems (AIMS). Published December 2023, it provides requirements for organizations to establish, implement, maintain, and continually improve responsible AI governance. Think of it as ISO 27001 but for AI — a certifiable management system standard.
Key Requirements
Risk management throughout the AI lifecycle
Transparency and accountability in AI systems
Data governance and system lifecycle controls
Defined roles and leadership commitment
Continual improvement through performance monitoring
Ethical considerations and legal compliance
Why It Matters
ISO 42001 is certifiable — independent auditors can verify compliance and issue certificates. This gives organizations a way to demonstrate responsible AI governance to customers, partners, and regulators. It applies to any organization that develops, provides, or uses AI, regardless of size or sector.
Relationship to Other Standards
ISO 42001 + NIST AI RMF: Complementary. ISO 42001 provides the management system; NIST provides the risk assessment methodology.
ISO 42001 + EU AI Act: ISO 42001 certification can serve as evidence of compliance with EU AI Act requirements, though it’s not a guarantee of compliance.
Certification status: Certification is voluntary, conducted by independent accredited bodies. Early adopters are using it as a competitive differentiator and a pre-emptive compliance measure for the EU AI Act August 2026 deadline.
MITRE ATLAS: AI Threat Intelligence
15 tactics, 66 techniques, 33 real-world case studies
What ATLAS Is
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a knowledge base of adversary tactics, techniques, and procedures (TTPs) targeting AI/ML systems. Modeled after MITRE ATT&CK, it fills the gap that traditional cybersecurity frameworks have for AI-specific threats.
Framework Scope (Oct 2025)
15 tactics — high-level adversary goals (reconnaissance, resource development, initial access, etc.)
66 techniques — specific methods to achieve tactics
46 sub-techniques — variations of techniques
26 mitigations — defensive countermeasures
33 case studies — real-world incidents (ShadowRay, Morris II Worm, etc.)

Four main attack categories: evasion, poisoning, privacy, and abuse.
Practical Use
ATLAS Navigator: Visual tool for mapping threats to your AI systems (like ATT&CK Navigator)
Arsenal: Red teaming tools aligned with ATLAS techniques

~70% of ATLAS mitigations map to existing security controls, making integration with current SOC workflows practical. Over 150 organizations currently use ATLAS for AI threat modeling.
Governance connection: ATLAS provides the threat model that feeds into NIST AI RMF’s “Map” and “Measure” functions. Use ATLAS to identify what threats exist, NIST to assess how to manage them, and ISO 42001 to ensure continuous management.
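A threat model built on ATLAS can be as simple as a mapping from assets to tactic/technique pairs. The asset-to-technique mapping below is a made-up example; the technique IDs are intended to be real ATLAS identifiers (AML.T0043 Craft Adversarial Data, AML.T0020 Poison Training Data) but should be verified against atlas.mitre.org before use.

```python
# Illustrative ATLAS-style threat model: assets mapped to
# (attack category, technique ID) pairs. Verify IDs at atlas.mitre.org.
from collections import Counter

threat_model = {
    "public inference API": [
        ("evasion", "AML.T0043"),    # Craft Adversarial Data
    ],
    "fine-tuning pipeline": [
        ("poisoning", "AML.T0020"),  # Poison Training Data
    ],
}

# Coverage check: how many of the four ATLAS attack categories
# (evasion, poisoning, privacy, abuse) does our model address?
categories = Counter(cat for techs in threat_model.values()
                     for cat, _ in techs)
missing = {"evasion", "poisoning", "privacy", "abuse"} - categories.keys()
print(sorted(missing))  # ['abuse', 'privacy']
```

The coverage check feeds directly into NIST's "Map" function: uncovered categories are the gaps to assess next.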
Model Cards, AI-BOM & Documentation
Transparency, traceability, and supply chain inventory
Model Cards
Model cards document what a model does, how it was trained, its limitations, and safety evaluations. OpenAI publishes system cards for GPT-5; Google uses a Govern-Map-Measure-Manage framework; Microsoft releases annual Responsible AI Transparency Reports.

The problem: a 2025 analysis found 947 unique section names across frontier model cards, with safety information under 97 different labels. Frontier labs achieve ~80% transparency compliance; most providers fall below 60%.
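Fragmentation is fixable by pinning one schema and validating against it. The skeleton below is a hypothetical minimal card whose section names loosely follow the original Model Cards proposal; the required-section check is the part that matters.

```python
# Hypothetical minimal model card as structured data, with a
# completeness check. Section names and values are illustrative.
import json

model_card = {
    "model_details": {"name": "acme-classifier-v2", "version": "2.1"},
    "intended_use": "internal document triage; not for hiring decisions",
    "training_data": {"source": "internal corpus", "cutoff": "2024-12"},
    "evaluation": {"accuracy": 0.91, "bias_audit": "passed 2025-03"},
    "limitations": ["English only", "degrades on scanned OCR input"],
    "safety_evaluations": {"red_team_report": "RT-2025-07"},
}

REQUIRED = {"model_details", "intended_use", "training_data",
            "evaluation", "limitations", "safety_evaluations"}
missing = REQUIRED - model_card.keys()
if missing:
    raise ValueError(f"incomplete model card, missing: {sorted(missing)}")
print(json.dumps(sorted(model_card), indent=0))
```

Enforcing a fixed set of required sections in CI is one cheap way to avoid contributing to the 947-section-name problem.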
AI Bill of Materials (AI-BOM)
An AI-BOM is a structured inventory of every AI component: trained models, training datasets, inference APIs, agent dependencies, MCP servers, and tool integrations. Traditional SBOM tools miss AI artifacts.

EU AI Act Article 53 (Aug 2025) requires a complete AI component inventory. Over 60% of AI usage is undocumented — “shadow AI” that bypasses security review.
# AI-BOM tools

# Trusera AI-BOM (open-source)
# 13 scanners, 9 output formats
# Detects LLM providers, agent frameworks,
# API keys, cloud AI services
$ pip install ai-bom
$ ai-bom scan ./my-project

# AIsbom (ML artifact scanner)
# Supports PyTorch, Safetensors, GGUF
# Drift detection, strict-mode policies
$ aisbom scan model.safetensors

# OWASP AIBOM project: standardizing
# the schema (CycloneDX extension)
The Transparency Gap
What organizations claim vs. what they can prove
The Problem
AI governance is only as good as the documentation behind it. Current transparency gaps are severe:

Safety-critical deficits: Deception behaviors, hallucinations, and child safety evaluations show the largest documentation gaps across evaluated models (148, 124, and 116 aggregate points lost respectively)

Fragmentation: No standard format for model cards means auditors can’t compare across providers

Shadow AI: 60%+ of enterprise AI usage is undocumented — you can’t govern what you can’t see
Emerging Solutions
AI Transparency Atlas (2025): Weighted framework with 8 sections and 23 subsections, prioritizing Safety Evaluation (25%) and Critical Risk (20%). Uses EU AI Act Annex IV and Stanford Transparency Index as baselines. Automated multi-agent pipeline evaluates model completeness.

Content provenance: C2PA (Coalition for Content Provenance and Authenticity) standard for labeling AI-generated content — required by the EU AI Act for limited-risk systems.
Audit readiness: When regulators come knocking in August 2026, they will ask for documentation: model cards, AI-BOMs, risk assessments, red team reports (Ch 11), and incident logs. Start building this documentation trail now; retroactive documentation is unreliable and legally questionable.
Building a Governance Program
From frameworks to operational compliance
The Governance Stack
1. Classify Risk (EU AI Act tiers) — Determine which of your AI systems are high-risk, limited-risk, or minimal-risk

2. Assess (NIST AI RMF + MITRE ATLAS) — Map threats, measure risks, identify gaps using ATLAS TTPs and NIST’s Govern-Map-Measure-Manage

3. Document (Model cards + AI-BOM) — Create model cards, maintain AI-BOMs, document data provenance and training decisions

4. Implement (Technical controls from Ch 1–11) — Guardrails, PII scanning, sandboxing, red teaming, MCP security

5. Audit (Continuous testing) — Automated red teaming (Ch 11), post-deployment monitoring, incident logging

6. Certify (ISO 42001) — Independent certification as evidence of responsible AI governance
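The six steps above can be sketched as an ordered readiness checklist, the kind an internal compliance tracker might compute. The step names mirror the text; the status values and scoring are our own illustration.

```python
# The governance stack as an ordered checklist; status values are
# placeholders for whatever an internal tracker records.
STACK = [
    ("classify", "EU AI Act tier assigned to every system"),
    ("assess", "ATLAS threats mapped, NIST functions applied"),
    ("document", "model cards + AI-BOM current"),
    ("implement", "technical controls deployed (Ch 1-11)"),
    ("audit", "continuous red teaming + monitoring live"),
    ("certify", "ISO 42001 audit scheduled or passed"),
]

def readiness(status: dict) -> float:
    """Fraction of governance steps complete, in stack order."""
    done = sum(bool(status.get(step, False)) for step, _ in STACK)
    return done / len(STACK)

print(readiness({"classify": True, "assess": True, "document": True}))
# 0.5
```

Steps are deliberately ordered: classification gates assessment, assessment gates documentation, and certification only makes sense once the earlier steps hold.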
Framework Alignment
MITRE ATLAS → What threats exist
NIST AI RMF → How to manage them
NIST AI 600-1 → GenAI-specific risks
ISO 42001 → Certifiable management system
EU AI Act → Legal requirements and deadlines
OWASP LLM/MCP Top 10 → Technical vulnerability checklists
The bottom line: AI governance is no longer optional. The EU AI Act has real deadlines (Aug 2026) and real penalties (7% revenue). NIST provides the methodology. ISO 42001 provides the certification. MITRE ATLAS provides the threat model. The technical controls from Chapters 1–11 provide the implementation. This chapter ties them all together into a compliance program.