NIST AI Risk Management Framework (AI RMF) Overview
The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), published by the U.S. National Institute of Standards and Technology in January 2023, provides voluntary guidance for developing and using trustworthy AI systems.
It establishes a structured approach for identifying, assessing, managing, and communicating AI-related risks across the system lifecycle. The framework promotes responsible AI practices centered on trustworthiness, emphasizing reliability, transparency, privacy, and fairness.
Purpose of the AI RMF
The AI RMF’s purpose is to improve how organizations design and manage AI systems in a way that fosters public trust and aligns with organizational and societal values.
It helps teams understand the risks inherent in AI and implement processes to manage them effectively.
Key objectives include:
Establishing governance processes for ethical and responsible AI.
Encouraging documentation and transparency throughout model development (a documentation sketch follows this list).
Mitigating risks related to bias, robustness, and unintended outcomes.
Promoting alignment with human oversight and accountability.
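To make the documentation objective concrete, here is a minimal sketch of an AI system record that captures intended use, known limitations, and a named accountable owner. The AI RMF does not prescribe a schema; the AISystemRecord class and all of its field names are illustrative assumptions.

    from dataclasses import dataclass, field

    # Hypothetical record supporting the documentation and accountability
    # objectives above; the AI RMF itself does not prescribe a schema.
    @dataclass
    class AISystemRecord:
        name: str
        intended_use: str
        accountable_owner: str  # a named human owner supports accountability
        known_limitations: list[str] = field(default_factory=list)
        oversight_notes: list[str] = field(default_factory=list)

    record = AISystemRecord(
        name="resume-screening-model",
        intended_use="Rank applications for recruiter review, never auto-reject",
        accountable_owner="Head of Talent Acquisition",
        known_limitations=["Trained on historical hires; may encode past bias"],
    )
    record.oversight_notes.append("A recruiter reviews every recommendation")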
Scope and Applicability
The AI RMF is voluntary and applies to any organization that designs, develops, deploys, or uses AI systems.
It is intended for cross-sector use, from government agencies to private enterprises, research labs, and startups seeking a structured way to operationalize trustworthy AI principles.
What the Framework Covers
The AI RMF is organized into four core functions:
Govern: Establish and oversee AI risk management policies and accountability structures.
Map: Identify AI system context, intended use, and potential impacts.
Measure: Evaluate and monitor performance, bias, and risk metrics.
Manage: Implement and improve controls, mitigation, and monitoring processes.
The framework also defines seven characteristics of trustworthy AI: valid and reliable; safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair with harmful bias managed; and accountable and transparent.
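To show how these pieces fit together, here is a minimal sketch that encodes the four functions and seven characteristics as Python enums and ties them to a simple risk-register entry. The function and characteristic names come from the framework itself; the RiskEntry class and the example values are illustrative assumptions, not a NIST-defined data model.

    from dataclasses import dataclass
    from enum import Enum

    # The four core functions, as named by the AI RMF.
    class Function(Enum):
        GOVERN = "govern"
        MAP = "map"
        MEASURE = "measure"
        MANAGE = "manage"

    # The seven characteristics of trustworthy AI, as named by the AI RMF.
    class Characteristic(Enum):
        VALID_AND_RELIABLE = "valid and reliable"
        SAFE = "safe"
        SECURE_AND_RESILIENT = "secure and resilient"
        EXPLAINABLE_AND_INTERPRETABLE = "explainable and interpretable"
        PRIVACY_ENHANCED = "privacy-enhanced"
        FAIR_WITH_HARMFUL_BIAS_MANAGED = "fair with harmful bias managed"
        ACCOUNTABLE_AND_TRANSPARENT = "accountable and transparent"

    # Hypothetical register entry linking a risk to the function that owns it
    # and the characteristics it threatens.
    @dataclass
    class RiskEntry:
        description: str
        function: Function
        characteristics_at_risk: list[Characteristic]
        mitigation: str

    entry = RiskEntry(
        description="Credit model underperforms on thin-file applicants",
        function=Function.MEASURE,
        characteristics_at_risk=[
            Characteristic.VALID_AND_RELIABLE,
            Characteristic.FAIR_WITH_HARMFUL_BIAS_MANAGED,
        ],
        mitigation="Add disaggregated performance tests to release checks",
    )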
Implementation and Continuous Compliance
Organizations should integrate AI RMF functions into existing risk and governance programs, conduct regular model impact assessments, and ensure cross-disciplinary review of data, design, and deployment processes.
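As one example of what a recurring assessment under the Measure function might compute, the sketch below derives per-group selection rates and flags a model when the gap between groups (the demographic parity difference) exceeds a threshold. The 0.20 threshold and the function names are illustrative choices, not AI RMF requirements.

    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Per-group rate of positive predictions (1 = selected)."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    def demographic_parity_gap(predictions, groups):
        """Largest difference in selection rate between any two groups."""
        rates = selection_rates(predictions, groups)
        return max(rates.values()) - min(rates.values())

    # Toy data: group "a" is selected three times as often as group "b".
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(preds, grps)
    if gap > 0.20:  # illustrative threshold; real limits are a policy call
        print(f"Flag for review: parity gap {gap:.2f} exceeds 0.20")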
The framework aligns closely with ISO/IEC 42001 and can serve as a complementary reference for organizations building an AI Management System (AIMS).
NIST AI RMF in DSALTA
DSALTA supports AI RMF alignment by enabling organizations to:
Document AI system inventories and risk assessments.
Map risk categories to DSALTA controls for governance and transparency (illustrated in the sketch after this list).
Track AI lifecycle testing, validation, and mitigation evidence.
Link AI RMF practices to ISO/IEC 42001 and EU AI Act requirements.
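A minimal sketch of what such a mapping could look like as data: the control identifiers below are hypothetical placeholders, not actual DSALTA control IDs, and the lookup helper is an illustrative assumption rather than a DSALTA API.

    # Hypothetical mapping from AI RMF practices to platform control IDs;
    # DSALTA's real identifiers and schema may differ.
    RMF_TO_CONTROLS = {
        "Govern: AI risk policy documented":       ["CTRL-AI-001"],
        "Map: system context and intended use":    ["CTRL-AI-014"],
        "Measure: bias and performance monitored": ["CTRL-AI-032", "CTRL-AI-033"],
        "Manage: mitigations tracked to closure":  ["CTRL-AI-045"],
    }

    def controls_for(practice: str) -> list[str]:
        """Return the controls that hold evidence for an AI RMF practice."""
        return RMF_TO_CONTROLS.get(practice, [])

    print(controls_for("Measure: bias and performance monitored"))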
