Legal Landscape
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that significantly affect them, and Articles 13–14 entitle them to "meaningful information about the logic involved." The EU AI Act requires that high-risk AI systems be "sufficiently transparent to enable users to interpret the system's output and use it appropriately," with penalties of up to €35M or 7% of global annual turnover. The US has no federal AI explainability law yet, but state laws are emerging: California's SB-1001 (the B.O.T. Act) requires bots interacting with consumers to disclose that they are automated. Under the Equal Credit Opportunity Act, lenders must provide specific reasons for credit denials — "the algorithm said no" is not sufficient. The trend is clear: explainability is moving from best practice to legal requirement.
Regulatory Requirements
// Explainability regulations
GDPR (EU, 2018):
Art 22: right to human review
Art 13-14: "meaningful information
about the logic involved"
Penalty: €20M or 4% turnover
EU AI Act (2024):
High-risk AI: must be transparent
"Enable users to interpret output"
Model documentation required
Penalty: €35M or 7% turnover
US ECOA:
Credit denials: specific reasons
"Algorithm said no" = not enough
Must explain key factors
California SB-1001 (B.O.T. Act):
Bots interacting with consumers
must disclose they are automated
// Trend: explainability is becoming
// a legal requirement, not just
// a nice-to-have
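To make the ECOA requirement concrete: a lender must translate a model's denial into specific key factors. A minimal sketch of how that can work for a linear scoring model is below — the feature names, weights, and reason-code text are hypothetical illustrations, not from any real scorecard, and real adverse-action systems are considerably more involved.

```python
# Sketch: deriving ECOA-style "key factor" reasons for a credit denial.
# Assumes a simple linear model; all names and numbers are hypothetical.

# Human-readable adverse-action reasons, one per model feature
REASON_CODES = {
    "utilization": "Proportion of revolving balances to credit limits is too high",
    "history_len": "Length of credit history is too short",
    "inquiries": "Too many recent credit inquiries",
}

def denial_reasons(weights, applicant, baseline, top_n=3):
    """Return the top_n reasons that pushed the score toward denial.

    Each feature's contribution is measured relative to a baseline
    (e.g. a typical approved applicant); the most negative
    contributions are the "key factors" ECOA asks lenders to state.
    """
    contribs = {
        f: weights[f] * (applicant[f] - baseline[f]) for f in weights
    }
    # Sort ascending: most negative contribution = strongest denial factor
    worst = sorted(contribs, key=contribs.get)[:top_n]
    return [REASON_CODES[f] for f in worst]

weights = {"utilization": -0.8, "history_len": 0.5, "inquiries": -0.3}
applicant = {"utilization": 0.9, "history_len": 2, "inquiries": 5}
baseline = {"utilization": 0.3, "history_len": 10, "inquiries": 1}

print(denial_reasons(weights, applicant, baseline, top_n=2))
```

The same idea generalizes to nonlinear models by substituting per-feature attributions (e.g. SHAP values) for the linear contributions; the regulatory point is that the ranked factors, not the raw score, are what the notice must communicate.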
Key insight: The EU AI Act’s high-risk AI obligations (most of which apply from August 2026) will require explainability for AI in healthcare, hiring, credit, education, and law enforcement. Organizations should start implementing explainability now rather than retrofitting it under deadline pressure.