The NIST AI Risk Management Framework (AI RMF) provides voluntary guidance for managing risks associated with artificial intelligence systems. Developed by the National Institute of Standards and Technology (NIST), it helps organizations design, develop, deploy, and use AI systems responsibly.

Who needs NIST AI RMF?

AI developers

Organizations building AI/ML systems that need a structured approach to identifying and mitigating AI-specific risks.

AI deployers

Companies using AI systems in production that need to ensure responsible and trustworthy AI operations.

Four core functions

Govern

Establish policies and processes for AI risk management across the organization.

Map

Identify and categorize AI systems, their contexts, and associated risks.

Measure

Assess and analyze AI risks using quantitative and qualitative methods.

Manage

Prioritize and respond to AI risks with appropriate controls and mitigations.
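As a minimal illustrative sketch (the enum and helper below are hypothetical names, not an official NIST artifact), the four core functions can be represented in code so that tooling can reference them consistently:

```python
from enum import Enum

class CoreFunction(Enum):
    """The four NIST AI RMF core functions, in framework order."""
    GOVERN = "Establish AI risk-management policies and processes"
    MAP = "Identify and categorize AI systems, contexts, and risks"
    MEASURE = "Assess risks with quantitative and qualitative methods"
    MANAGE = "Prioritize and respond with controls and mitigations"

def describe(fn: CoreFunction) -> str:
    """Render a function as a short human-readable label."""
    return f"{fn.name.title()}: {fn.value}"

# Govern is cross-cutting; Map, Measure, and Manage are typically
# repeated for each AI system in scope.
for fn in CoreFunction:
    print(describe(fn))
```

In practice, tagging controls and risk entries with one of these four values keeps an AI risk program traceable back to the framework.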

Key risk categories

Category          Description
Bias & Fairness   AI systems may produce discriminatory or unfair outcomes
Transparency      Decisions made by AI should be explainable
Privacy           AI systems must protect personal data
Security          AI systems are vulnerable to adversarial attacks
Reliability       AI systems must perform consistently and accurately
Accountability    Clear ownership of AI system outcomes

How DSALTA helps

  • NIST AI RMF controls mapped to the four core functions
  • AI risk register to document and score AI-specific risks
  • Policy templates for AI governance documentation
  • Cross-framework mapping — overlaps with ISO 42001, EU AI Act, and SOC 2
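To make the risk-register idea above concrete, here is a minimal sketch of an AI risk register with likelihood-times-impact scoring. The class and field names are hypothetical, and the scoring scheme is one common convention, not prescribed by NIST or necessarily what DSALTA uses:

```python
from dataclasses import dataclass

# Risk categories from the table above.
CATEGORIES = {
    "Bias & Fairness", "Transparency", "Privacy",
    "Security", "Reliability", "Accountability",
}

@dataclass
class AIRisk:
    """One entry in a simple AI risk register (hypothetical schema)."""
    system: str        # the AI system the risk applies to (Map)
    category: str      # one of CATEGORIES
    description: str
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    def __post_init__(self) -> None:
        # Validate inputs so scores stay comparable across entries.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        for name in ("likelihood", "impact"):
            if not 1 <= getattr(self, name) <= 5:
                raise ValueError(f"{name} must be in 1..5")

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring (Measure).
        return self.likelihood * self.impact

def prioritize(register: list[AIRisk]) -> list[AIRisk]:
    """Order risks highest-score first so controls can be assigned
    to the largest exposures (Manage)."""
    return sorted(register, key=lambda r: r.score, reverse=True)
```

For example, a bias risk with likelihood 4 and impact 5 scores 20 and would be prioritized ahead of a privacy risk scoring 12.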

Frequently asked questions

Is NIST AI RMF mandatory?

No. NIST AI RMF is a voluntary framework. However, it is increasingly referenced in procurement requirements and aligns with the EU AI Act's risk-based approach.

How does NIST AI RMF relate to the EU AI Act?

The NIST AI RMF and the EU AI Act share similar risk-based approaches, so implementing the NIST AI RMF provides a strong foundation for EU AI Act compliance.