Framework Structure
The NIST AI RMF (version 1.0, January 2023) is a voluntary framework for managing AI risks. It is organized around four core functions:
Govern: establish policies, roles, and a culture of responsible AI. Define risk tolerances and accountability structures.
Map: identify and understand AI risks in context. Categorize AI systems, identify stakeholders, and assess potential impacts.
Measure: analyze and quantify AI risks. Use metrics, benchmarks, and testing to evaluate trustworthiness characteristics (fairness, privacy, security, transparency).
Manage: prioritize and respond to AI risks. Implement mitigations, monitor effectiveness, and communicate residual risks.
While voluntary, the NIST AI RMF is widely adopted by US companies and referenced by federal agencies. It complements the EU AI Act rather than competing with it.
NIST AI RMF Functions
// NIST AI RMF core functions
GOVERN:
Policies and procedures
Roles and responsibilities
Risk tolerance definition
Organizational culture
// Foundation for everything else
MAP:
Identify AI system context
Categorize by risk level
Identify stakeholders
Assess potential impacts
Document assumptions/limitations
// Know your risks before measuring
MEASURE:
Quantify risks with metrics
Benchmark against standards
Test for bias, security, privacy
Evaluate trustworthiness
// Data-driven risk assessment
MANAGE:
Prioritize risks
Implement mitigations
Monitor effectiveness
Communicate residual risks
Continuous improvement
// Ongoing, not one-time
Status: Voluntary but widely adopted
Referenced by federal agencies
Complementary to EU AI Act
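The four functions above can be sketched as a simple lifecycle: governance sets a risk tolerance first, then risks are mapped, measured, and managed against it. This is a purely illustrative sketch; the names (`GovernancePolicy`, `RiskRegister`, the likelihood-times-impact score, the 0.2 tolerance) are assumptions for the example, not NIST terminology.

```python
# Illustrative sketch of the NIST AI RMF cycle as a minimal risk register.
# Class names and the scoring scheme are hypothetical, not from NIST AI 100-1.
from dataclasses import dataclass


@dataclass
class Risk:
    system: str
    description: str
    likelihood: float  # 0.0 - 1.0
    impact: float      # 0.0 - 1.0
    mitigated: bool = False

    @property
    def score(self) -> float:
        # Simple likelihood x impact score for illustration
        return self.likelihood * self.impact


@dataclass
class GovernancePolicy:
    # GOVERN: risk tolerance and accountability defined before anything else
    risk_tolerance: float  # maximum acceptable residual risk score
    owner: str


class RiskRegister:
    def __init__(self, policy: GovernancePolicy):
        self.policy = policy        # GOVERN is the foundation for the rest
        self.risks: list[Risk] = []

    def map_risk(self, risk: Risk) -> None:
        # MAP: identify and record risks in context
        self.risks.append(risk)

    def measure(self) -> list[Risk]:
        # MEASURE: quantify risks and rank them by score
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

    def manage(self) -> list[Risk]:
        # MANAGE: mitigate risks above tolerance; report residual risks
        for risk in self.measure():
            if risk.score > self.policy.risk_tolerance:
                risk.mitigated = True  # stand-in for an actual mitigation
        return [r for r in self.risks if not r.mitigated]


policy = GovernancePolicy(risk_tolerance=0.2, owner="AI governance board")
register = RiskRegister(policy)
register.map_risk(Risk("chatbot", "biased outputs", likelihood=0.6, impact=0.7))
register.map_risk(Risk("chatbot", "minor latency issues", likelihood=0.3, impact=0.2))
residual = register.manage()
print([r.description for r in residual])  # risks already within tolerance
```

Note that `manage()` would in practice loop continuously (the "ongoing, not one-time" comment above), re-measuring after each mitigation rather than flipping a flag.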
Key insight: NIST AI RMF’s “Govern” function is the most important and most overlooked. Without organizational policies, culture, and accountability structures, the other functions (Map, Measure, Manage) lack a foundation. Start with governance.