Topics

AI RMF Core Functions, NIST AI Framework
AI Societal Impact
  • This community highlights the unique influence of AI systems on society and the future as detailed in international technical reports.
  • ISO/IEC TR 24368:2022::document
AI RMF Resources
AI Policy Recommendations
Federal Standards
AI Lifecycle, Risk Measurement
Transparency and Accountability
AI Testing, Incident Response
Stakeholder Engagement
Third-Party Risk
TEVV Metrics
  • This community focuses on the effectiveness of Test, Evaluation, Verification, and Validation (TEVV) metrics in assessing AI performance.
  • MEASURE 2.13::framework_component
Risk Monitoring
  • This community focuses on the mechanisms required to track and monitor AI risks over the duration of a system's operation.
  • MEASURE 3::framework_component
AI Lifecycle Phases, TEVV
AI Expertise, Assessment Teams
End Users
  • This community focuses on the individuals who interact directly with AI systems.
  • End users::person
Affected Communities
Public Impact
  • This community focuses on the broader general public and their experience with the societal impacts of AI technologies.
  • General public::person

Content


The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023 by the U.S. Department of Commerce under Secretary Gina M. Raimondo and NIST Director Laurie E. Locascio, provides guidelines for managing AI risks. It is available for free at https://doi.org/10.6028/NIST.AI.100-1. Mention of commercial products in the document does not constitute an endorsement by NIST. The framework is a living document that will be reviewed regularly, with a formal community review expected by 2028. Updates use a versioning system where the first number indicates major revisions and the decimal indicates minor changes, all of which are tracked in a version control table.

Original text:

NIST AI 100-1
Artificial Intelligence Risk Management Framework (AI RMF 1.0)
This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.100-1
January 2023
U.S. Department of Commerce
Gina M. Raimondo, Secretary
National Institute of Standards and Technology
Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology

Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose.

Update Schedule and Versions

The Artificial Intelligence Risk Management Framework (AI RMF) is intended to be a living document. NIST will review the content and usefulness of the Framework regularly to determine if an update is appropriate; a review with formal input from the AI community is expected to take place no later than 2028. The Framework will employ a two-number versioning system to track and identify major and minor changes. The first number will represent the generation of the AI RMF and its companion documents (e.g., 1.0) and will change only with major revisions. Minor revisions will be tracked using ".n" after the generation number (e.g., 1.1). All changes will be tracked using a Version Control Table which identifies the history, including version number, date of change, and description of change.
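The Framework does not include a sample table, but the Version Control Table it describes would carry one row per release. The entries below are a sketch: the 1.0 row reflects the January 2023 release noted above, while the later rows are hypothetical illustrations of the major/minor numbering convention.

Version               Date            Description of Change
1.0                   January 2023    Initial release of the AI RMF and companion documents
1.1 (hypothetical)    future date     Minor revision, tracked with the ".n" suffix
2.0 (hypothetical)    future date     Major revision marking a new generation of the Framework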

The AI RMF Playbook will be updated frequently, with changes tracked in a Version Control Table that records version numbers, dates, and descriptions of changes; minor revisions carry an '.n' suffix after the generation number (e.g., 1.1). Feedback can be emailed to AIframework@nist.gov at any time; NIST reviews and incorporates comments semi-annually. The Framework document itself is organized into two main parts: Part 1 covers foundational information, including risk framing, the intended audience, AI trustworthiness characteristics (such as safety, security, and fairness), and the effectiveness of the AI RMF. Part 2 details the AI RMF Core, which consists of the Govern, Map, Measure, and Manage functions.

Original text:

NIST plans to update the AI RMF Playbook frequently. Comments on the AI RMF Playbook may be sent via email to AIframework@nist.gov at any time and will be reviewed and integrated on a semi-annual basis.

Table of Contents

Executive Summary
Part 1: Foundational Information
1 Framing Risk
  1.1 Understanding and Addressing Risks, Impacts, and Harms
  1.2 Challenges for AI Risk Management
    1.2.1 Risk Measurement
    1.2.2 Risk Tolerance
    1.2.3 Risk Prioritization
    1.2.4 Organizational Integration and Management of Risk
2 Audience
3 AI Risks and Trustworthiness
  3.1 Valid and Reliable
  3.2 Safe
  3.3 Secure and Resilient
  3.4 Accountable and Transparent
  3.5 Explainable and Interpretable
  3.6 Privacy-Enhanced
  3.7 Fair – with Harmful Bias Managed
4 Effectiveness of the AI RMF
Part 2: Core and Profiles
5 AI RMF Core
  5.1 Govern
  5.2 Map
  5.3 Measure

The NIST AI RMF 1.0 (AI 100-1) provides a framework for managing AI risks. The core of the framework consists of four functions: Govern (5.1), Map (5.2), Measure (5.3), and Manage (5.4), each detailed with specific categories and subcategories in Tables 1-4. The document also includes AI RMF Profiles (Section 6) and several appendices covering AI actor tasks, differences between AI and traditional software risks, human-AI interaction, and framework attributes. Key figures illustrate potential AI harms (Fig. 1), the AI system lifecycle and dimensions (Fig. 2), and the roles of AI actors throughout that lifecycle (Fig. 3).

Original text:

  5.4 Manage
6 AI RMF Profiles
Appendix A: Descriptions of AI Actor Tasks from Figures 2 and 3
Appendix B: How AI Risks Differ from Traditional Software Risks
Appendix C: AI Risk Management and Human-AI Interaction
Appendix D: Attributes of the AI RMF

List of Tables
Table 1: Categories and subcategories for the GOVERN function.
Table 2: Categories and subcategories for the MAP function.
Table 3: Categories and subcategories for the MEASURE function.
Table 4: Categories and subcategories for the MANAGE function.

List of Figures
Fig. 1: Examples of potential harms related to AI systems. Trustworthy AI systems and their responsible use can mitigate negative risks and contribute to benefits for people, organizations, and ecosystems.
Fig. 2: Lifecycle and Key Dimensions of an AI System. Modified from OECD (2022) OECD Framework for the Classification of AI systems — OECD Digital Economy Papers. The two inner circles show AI systems' key dimensions and the outer circle shows AI lifecycle stages. Ideally, risk management efforts start with the Plan and Design function in the application context and are performed throughout the AI system lifecycle. See Figure 3 for representative AI actors.
Fig. 3: AI actors across AI lifecycle stages. See Appendix A for detailed descriptions of AI actor tasks, including details about testing, evaluation, verification, and validation tasks. Note that AI actors in the AI Model dimension (Figure 2) are separated as a best practice, with those building and using the models separated from those verifying and validating the models.
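As a reading aid (not part of the NIST text), the Core structure covered by Tables 1-4 can be modeled as a nested function → category → subcategory hierarchy. The sketch below is a minimal Python illustration; the function names and the identifier scheme (e.g., MEASURE 2.13) come from the framework itself, but the placeholder descriptions are assumptions standing in for the official Table 1-4 text.

```python
# Minimal sketch of the AI RMF Core hierarchy. Identifiers follow the
# framework's "FUNCTION N.M" scheme; descriptions are placeholders, not
# the official text of Tables 1-4.
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    identifier: str          # e.g., "MEASURE 2.13"
    description: str         # official wording lives in Tables 1-4

@dataclass
class Category:
    identifier: str          # e.g., "MEASURE 2"
    subcategories: list[Subcategory] = field(default_factory=list)

@dataclass
class Function:
    name: str                # GOVERN, MAP, MEASURE, or MANAGE
    categories: list[Category] = field(default_factory=list)

core = [
    Function("GOVERN"),      # applies across all stages of risk management
    Function("MAP"),         # applied in system-specific contexts
    Function("MEASURE", categories=[
        Category("MEASURE 2", subcategories=[
            Subcategory("MEASURE 2.13", "<placeholder for official text>"),
        ]),
    ]),
    Function("MANAGE"),
]
```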

The NIST AI RMF 1.0 Executive Summary highlights that while AI offers transformative benefits for society, health, and the economy, it also introduces significant risks that vary in scope and impact. To manage these risks, the framework outlines several key components: AI actors are categorized by their lifecycle roles, with a best practice of separating those who build and use models from those who verify and validate them. Trustworthy AI systems are built on a foundation of validity and reliability, with accountability and transparency cutting across all other characteristics. Finally, AI risk management is organized into four core functions—govern, map, measure, and manage—with governance serving as a cross-cutting function that informs all other activities.

Original text:

Fig. 4: Characteristics of trustworthy AI systems. Valid & Reliable is a necessary condition of trustworthiness and is shown as the base for other trustworthiness characteristics. Accountable & Transparent is shown as a vertical box because it relates to all other characteristics.
Fig. 5: Functions organize AI risk management activities at their highest level to govern, map, measure, and manage AI risks. Governance is designed to be a cross-cutting function to inform and be infused throughout the other three functions.

Executive Summary

Artificial intelligence (AI) technologies have significant potential to transform society and people's lives – from commerce and health to transportation and cybersecurity to the environment and our planet. AI technologies can drive inclusive economic growth and support scientific advancements that improve the conditions of our world. AI technologies, however, also pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet. Like risks for other types of technology, AI risks can emerge in a variety of ways and can be characterized as long- or short-term, high- or low-probability, systemic or localized, and high- or low-impact.

AI risks vary in duration, probability, scope, and impact. The AI RMF describes an AI system as an engineered or machine-based system that operates with varying levels of autonomy and generates outputs such as predictions, recommendations, or decisions influencing real or virtual environments (adapted from OECD 2019 and ISO/IEC 22989:2022). AI risks differ from traditional software risks in part because AI systems may be trained on data that changes over time, sometimes significantly and unexpectedly, making their behavior difficult to interpret. Because AI is socio-technical, its risks and benefits depend on the complex interaction between technology, human behavior, and societal context. Left uncontrolled, AI can worsen inequitable outcomes; with proper management, it can also help mitigate them. Effective AI risk management is essential for the responsible development and use of these systems.

Original text:

The AI RMF refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (Adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).

While there are myriad standards and best practices to help organizations mitigate the risks of traditional software or information-based systems, the risks posed by AI systems are in many ways unique (see Appendix B). AI systems, for example, may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand. AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.

These risks make AI a uniquely challenging technology to deploy and utilize both for organizations and within society. Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities. With proper controls, AI systems can mitigate and manage inequitable outcomes.

AI risk management is a key component of responsible development and use of AI systems.

AI systems can cause unfair outcomes, but proper risk management helps ensure they are developed and used responsibly. Responsible AI emphasizes human centricity, social responsibility, and sustainability. By critically evaluating the potential impacts of AI, organizations can build more trustworthy systems and earn public confidence. Key standards define social responsibility as an organization's responsibility for the impacts of its decisions and activities, exercised through transparent and ethical behavior (ISO 26000:2010), and sustainability as meeting present needs without compromising the ability of future generations to meet their own (ISO/IEC TR 24368:2022). Professionals who design, develop, or deploy AI systems are likewise expected to exercise 'professional responsibility,' recognizing their unique position to influence people, society, and the future of AI (ISO/IEC TR 24368:2022). These concepts underpin the AI RMF 1.0, whose development was directed by the National Artificial Intelligence Initiative Act of 2020.

Original text:

Responsible AI practices can help align the decisions about AI system design, development, and uses with intended aim and values. Core concepts in responsible AI emphasize human centricity, social responsibility, and sustainability. AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts. Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust.

Social responsibility can refer to the organization's responsibility "for the impacts of its decisions and activities on society and the environment through transparent and ethical behavior" (ISO 26000:2010). Sustainability refers to the "state of the global system, including environmental, social, and economic aspects, in which the needs of the present are met without compromising the ability of future generations to meet their own needs" (ISO/IEC TR 24368:2022). Responsible AI is meant to result in technology that is also equitable and accountable. The expectation is that organizational practices are carried out in accord with "professional responsibility," defined by ISO as an approach that "aims to ensure that professionals who design, develop, or deploy AI systems and applications or AI-based products or systems, recognize their unique position to exert influence on people, society, and the future of AI" (ISO/IEC TR 24368:2022).

The AI Risk Management Framework (AI RMF), mandated by the National Artificial Intelligence Initiative Act of 2020, provides a voluntary, flexible guide for organizations to manage AI risks and promote responsible, trustworthy AI. It applies to all 'AI actors'—defined by the OECD as anyone involved in the AI lifecycle—across all sectors and use cases. The framework is designed to be practical and adaptable, ensuring society benefits from AI while remaining protected from its risks. NIST will continuously update the AI RMF to reflect technological advancements, global standards, and community feedback.

Original text:

As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the goal of the AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework.

The Framework is designed to equip organizations and individuals – referred to here as AI actors – with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time. AI actors are defined by the Organisation for Economic Co-operation and Development (OECD) as "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI" [OECD (2019) Artificial Intelligence in Society — OECD iLibrary] (see Appendix A).

The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.

The Framework and supporting resources will be updated, expanded, and improved based on evolving technology, the standards landscape around the world, and AI community experience and feedback. NIST will continue to align the AI RMF and related guidance with applicable international standards, guidelines, and practices.

NIST will regularly update the AI Risk Management Framework (AI RMF) based on new technology, global standards, and user feedback. The framework is divided into two parts: Part 1 defines AI risks and the characteristics of trustworthy AI, such as safety, security, reliability, transparency, and fairness. Part 2 provides the 'Core' functions—GOVERN, MAP, MEASURE, and MANAGE—to help organizations practically address AI risks throughout the system lifecycle. Additional guidance is available in the AI RMF Playbook on the NIST website. This framework was developed in collaboration with public and private sectors to align with the National AI Initiative Act of 2020 and other federal standards initiatives.

Original text:

As the AI RMF is put into use, additional lessons will be learned to inform future updates and additional resources.

The Framework is divided into two parts. Part 1 discusses how organizations can frame the risks related to AI and describes the intended audience. Next, AI risks and trustworthiness are analyzed, outlining the characteristics of trustworthy AI systems, which include valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with their harmful biases managed.

Part 2 comprises the "Core" of the Framework. It describes four specific functions to help organizations address the risks of AI systems in practice. These functions – GOVERN, MAP, MEASURE, and MANAGE – are broken down further into categories and subcategories. While GOVERN applies to all stages of organizations' AI risk management processes and procedures, the MAP, MEASURE, and MANAGE functions can be applied in AI system-specific contexts and at specific stages of the AI lifecycle.

Additional resources related to the Framework are included in the AI RMF Playbook, which is available via the NIST AI RMF website: https://www.nist.gov/itl/ai-risk-management-framework.

Development of the AI RMF by NIST in collaboration with the private and public sectors is directed and consistent with its broader AI efforts called for by the National AI Initiative Act of 2020, the National Security Commission on Artificial Intelligence recommendations, and the Plan for Federal Engagement in Developing Technical Standards and Related Tools.

The AI Risk Management Framework (AI RMF 1.0) is consistent with the National AI Initiative Act of 2020, the recommendations of the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in Developing Technical Standards and Related Tools. NIST developed this framework through extensive public engagement, including a formal Request for Information, three workshops, public comments on a concept paper and two drafts, and discussions at public forums. Priority research and future guidance will be captured in an associated AI Risk Management Framework Roadmap. The framework aims to make AI systems more trustworthy by helping organizations identify and manage risks—defined as a composite measure of an event's probability and the magnitude of its consequences—to minimize negative impacts such as threats to civil liberties and rights while maximizing positive outcomes.

Original text:

Engagement with the AI community during this Framework's development – via responses to a formal Request for Information, three widely attended workshops, public comments on a concept paper and two drafts of the Framework, discussions at multiple public forums, and many small group meetings – has informed development of the AI RMF 1.0 as well as AI research and development and evaluation conducted by NIST and others. Priority research and additional guidance that will enhance this Framework will be captured in an associated AI Risk Management Framework Roadmap to which NIST and the broader community can contribute.

Part 1: Foundational Information

1. Framing Risk

AI risk management offers a path to minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts. Addressing, documenting, and managing AI risks and potential negative impacts effectively can lead to more trustworthy AI systems.

1.1 Understanding and Addressing Risks, Impacts, and Harms

In the context of the AI RMF, risk refers to the composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event. The impacts, or consequences, of AI systems can be positive, negative, or both and can result in opportunities or threats (Adapted from: ISO 31000:2018).

AI systems can create both positive opportunities and negative threats. According to ISO 31000:2018, risk management involves coordinated activities to direct and control an organization regarding risk. When assessing negative outcomes, risk is defined by the severity of potential harm and the likelihood of it occurring (OMB Circular A-130:2016). These harms can affect individuals, groups, organizations, society, and the environment. This Framework aims to minimize these negative impacts while maximizing benefits, leading to more trustworthy and effective AI. By acknowledging the limitations and uncertainties of AI models, developers and users can improve system performance. The AI RMF is designed to be flexible, allowing it to address evolving risks and unforeseen impacts as AI technology advances.

Original text:

When considering the negative impact of a potential event, risk is a function of 1) the negative impact, or magnitude of harm, that would arise if the circumstance or event occurs and 2) the likelihood of occurrence (Adapted from: OMB Circular A-130:2016). Negative impact or harm can be experienced by individuals, groups, communities, organizations, society, the environment, and the planet.

"Risk management refers to coordinated activities to direct and control an organization with regard to risk" (Source: ISO 31000:2018).

While risk management processes generally address negative impacts, this Framework offers approaches to minimize anticipated negative impacts of AI systems and identify opportunities to maximize positive impacts. Effectively managing the risk of potential harms could lead to more trustworthy AI systems and unleash potential benefits to people (individuals, communities, and society), organizations, and systems/ecosystems. Risk management can enable AI developers and users to understand impacts and account for the inherent limitations and uncertainties in their models and systems, which in turn can improve overall system performance and trustworthiness and the likelihood that AI technologies will be used in ways that are beneficial.

The AI RMF is designed to address new risks as they emerge. This flexibility is particularly important where impacts are not easily foreseeable and applications are evolving. While some AI risks and benefits are well-known, it can be challenging to assess negative impacts and the degree of harms. Figure 1 provides examples of potential harms that can be related to AI systems.
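The RMF stops at this qualitative, two-factor definition and does not prescribe a scoring formula. As an illustration only, the two factors are often combined multiplicatively in conventional risk matrices; the ordinal scales and the multiplication in the sketch below are assumptions, not NIST guidance.

```python
# A common (non-NIST) operationalization of "risk = f(impact, likelihood)":
# score each factor on an ordinal scale and combine them multiplicatively.
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "severe": 4}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "likely": 3, "almost_certain": 4}

def risk_score(impact: str, likelihood: str) -> int:
    """Composite measure of an event's consequences and probability."""
    return IMPACT[impact] * LIKELIHOOD[likelihood]

# e.g., a severe but rare harm scores 4; a moderate, likely harm scores 9,
# so the latter would be triaged first under this (assumed) scheme.
assert risk_score("severe", "rare") == 4
assert risk_score("moderate", "likely") == 9
```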

AI applications are evolving rapidly, making it difficult to fully assess their potential risks and harms. A key challenge in AI risk management is the human tendency to overestimate AI capabilities, often assuming these systems are more objective or powerful than they actually are. According to the NIST AI RMF 1.0, managing these risks is essential for building trustworthy AI. A major obstacle is the difficulty of measuring risks that are not clearly defined. Furthermore, relying on third-party data, software, and hardware complicates risk assessment, as developers and operators may use different risk metrics or lack transparency regarding their methodologies.

Original text:

AI risk management efforts should consider that humans may assume that AI systems work – and work well – in all settings. For example, whether correct or not, AI systems are often perceived as being more objective than humans or as offering greater capabilities than general software.

[Fig. 1: Examples of potential harms related to AI systems. Trustworthy AI systems and their responsible use can mitigate negative risks and contribute to benefits for people, organizations, and ecosystems.]

1.2 Challenges for AI Risk Management

Several challenges are described below. They should be taken into account when managing risks in pursuit of AI trustworthiness.

1.2.1 Risk Measurement

AI risks or failures that are not well-defined or adequately understood are difficult to measure quantitatively or qualitatively. The inability to appropriately measure AI risks does not imply that an AI system necessarily poses either a high or low risk. Some risk measurement challenges include:

Risks related to third-party software, hardware, and data: Third-party data or systems can accelerate research and development and facilitate technology transition. They also may complicate risk measurement. Risk can emerge both from third-party data, software, or hardware itself and how it is used. Risk metrics or methodologies used by the organization developing the AI system may not align with the risk metrics or methodologies used by the organization deploying or operating the system. Also, the organization developing the AI system may not be transparent about the risk metrics or methodologies it used.

Managing AI risk is challenging because developers and deployers may use differing, non-transparent methods to measure risk. Risks can also be compounded when customers integrate third-party data or systems without sufficient internal governance and technical safeguards. Regardless, all AI actors—whether developing, deploying, or using these systems—should manage risk and track emergent threats as they appear. While impact assessments can help, there is currently no consensus on robust, verifiable methods for measuring AI risk and trustworthiness. Metrics can be oversimplified, gamed, lacking in nuance, or may inadvertently reflect institutional factors unrelated to the underlying impact. Effective risk measurement must account for specific contexts and recognize that AI harms can affect groups and sub-groups differently, including communities who are not direct users of the system.

Original text:

Risk measurement and management can be complicated by how customers use or integrate third-party data or systems into AI products or services, particularly without sufficient internal governance structures and technical safeguards. Regardless, all parties and AI actors should manage risk in the AI systems they develop, deploy, or use as standalone or integrated components.

Tracking emergent risks: Organizations' risk management efforts will be enhanced by identifying and tracking emergent risks and considering techniques for measuring them. AI system impact assessment approaches can help AI actors understand potential impacts or harms within specific contexts.

Availability of reliable metrics: The current lack of consensus on robust and verifiable measurement methods for risk and trustworthiness, and applicability to different AI use cases, is an AI risk measurement challenge. Potential pitfalls when seeking to measure negative risk or harms include the reality that development of metrics is often an institutional endeavor and may inadvertently reflect factors unrelated to the underlying impact. In addition, measurement approaches can be oversimplified, gamed, lack critical nuance, become relied upon in unexpected ways, or fail to account for differences in affected groups and contexts.

Approaches for measuring impacts on a population work best if they recognize that contexts matter, that harms may affect varied groups or sub-groups differently, and that communities or other sub-groups who may be harmed are not always direct users of a system.

Measuring AI risks requires a nuanced approach that accounts for specific contexts, diverse community impacts, and the fact that those harmed may not be direct users. Key considerations include: 1) Lifecycle stages: Risks evolve as AI systems adapt, and developers and deployers often have different perspectives on these risks. All actors share responsibility for creating trustworthy systems. 2) Real-world settings: Risks identified in controlled lab environments often differ from those that emerge during actual operation. 3) Inscrutability: AI systems are often difficult to measure due to a lack of transparency, poor documentation, or inherent complexity. 4) Human baselines: Comparing AI performance to human activity is challenging because AI systems perform tasks differently than humans, making it difficult to establish standard metrics for risk management.

Original text:

Risk at different stages of the AI lifecycle: Measuring risk at an earlier stage in the AI lifecycle may yield different results than measuring risk at a later stage; some risks may be latent at a given point in time and may increase as AI systems adapt and evolve. Furthermore, different AI actors across the AI lifecycle can have different risk perspectives. For example, an AI developer who makes AI software available, such as pre-trained models, can have a different risk perspective than an AI actor who is responsible for deploying that pre-trained model in a specific use case. Such deployers may not recognize that their particular uses could entail risks which differ from those perceived by the initial developer. All involved AI actors share responsibilities for designing, developing, and deploying a trustworthy AI system that is fit for purpose.

Risk in real-world settings: While measuring AI risks in a laboratory or a controlled environment may yield important insights pre-deployment, these measurements may differ from risks that emerge in operational, real-world settings.

Inscrutability: Inscrutable AI systems can complicate risk measurement. Inscrutability can be a result of the opaque nature of AI systems (limited explainability or interpretability), lack of transparency or documentation in AI system development or deployment, or inherent uncertainties in AI systems.

Human baseline: Risk management of AI systems that are intended to augment or replace human activity, for example decision making, requires some form of baseline metrics for comparison. This is difficult to systematize since AI systems carry out different tasks – and perform tasks differently – than humans.

Evaluating AI systems that replace or augment human decision-making is challenging because AI performs tasks differently than humans, making it hard to establish standard comparison metrics. According to the NIST AI RMF 1.0 (Section 1.2.2), the framework helps prioritize risks but does not define specific 'risk tolerance'—the level of risk an organization is willing to accept to meet its goals. Risk tolerance is highly contextual, influenced by laws, regulations, industry norms, and organizational priorities, and it will evolve as technology and policies change. Because there is ongoing debate regarding how to balance AI costs and benefits, the framework may not be applicable in every situation. Organizations should use this flexible framework to supplement their existing risk management practices, ensuring they remain compliant with all relevant legal and professional requirements.

Original text:

1.2.2 Risk Tolerance

While the AI RMF can be used to prioritize risk, it does not prescribe risk tolerance. Risk tolerance refers to the organization's or AI actor's (see Appendix A) readiness to bear the risk in order to achieve its objectives. Risk tolerance can be influenced by legal or regulatory requirements (Adapted from: ISO GUIDE 73). Risk tolerance and the level of risk that is acceptable to organizations or society are highly contextual and application and use-case specific. Risk tolerances can be influenced by policies and norms established by AI system owners, organizations, industries, communities, or policy makers. Risk tolerances are likely to change over time as AI systems, policies, and norms evolve. Different organizations may have varied risk tolerances due to their particular organizational priorities and resource considerations.

Emerging knowledge and methods to better inform harm/cost-benefit tradeoffs will continue to be developed and debated by businesses, governments, academia, and civil society. To the extent that challenges for specifying AI risk tolerances remain unresolved, there may be contexts where a risk management framework is not yet readily applicable for mitigating negative AI risks.

The Framework is intended to be flexible and to augment existing risk practices which should align with applicable laws, regulations, and norms. Organizations should follow existing regulations and guidelines for risk criteria, tolerance, and response established by organizational, domain, discipline, sector, or professional requirements.

Organizations should integrate the AI RMF into their current risk management practices, ensuring alignment with relevant laws, industry standards, and sector-specific requirements. If no formal guidelines exist, organizations must define their own reasonable risk tolerance levels. Because it is impossible to eliminate all risks, organizations should avoid unrealistic expectations that waste resources. Instead, they should foster a risk-aware culture that prioritizes resources based on the specific impact and context of each AI system. High-risk systems require the most urgent attention and the most rigorous management processes.

Original text:

Some sectors or industries may have established definitions of harm or established documentation, reporting, and disclosure requirements. Within sectors, risk management may depend on existing guidelines for specific applications and use case settings. Where established guidelines do not exist, organizations should define reasonable risk tolerance. Once tolerance is defined, this AI RMF can be used to manage risks and to document risk management processes.

1.2.3 Risk Prioritization

Attempting to eliminate negative risk entirely can be counterproductive in practice because not all incidents and failures can be eliminated. Unrealistic expectations about risk may lead organizations to allocate resources in a manner that makes risk triage inefficient or impractical or wastes scarce resources. A risk management culture can help organizations recognize that not all AI risks are the same, and resources can be allocated purposefully. Actionable risk management efforts lay out clear guidelines for assessing trustworthiness of each AI system an organization develops or deploys. Policies and resources should be prioritized based on the assessed risk level and potential impact of an AI system. The extent to which an AI system may be customized or tailored to the specific context of use by the AI deployer can be a contributing factor.

When applying the AI RMF, risks which the organization determines to be highest for the AI systems within a given context of use call for the most urgent prioritization and most thorough risk management process.

The AI RMF 1.0 framework requires organizations to prioritize risks based on their severity and context. Systems posing unacceptable risks—such as those causing severe or imminent harm—must be paused until they can be safely managed. Conversely, low-risk systems may receive lower priority. Prioritization should be higher for systems that interact with humans or use sensitive data, while systems interacting only with other computers using non-sensitive data may be lower priority. However, all systems require regular assessment because even non-human-facing AI can have downstream consequences. Providers must document 'residual risk' (the risk remaining after mitigation) to ensure transparency for end users. Finally, AI risk management should not be handled in isolation; it requires collaboration, as different actors have distinct responsibilities throughout the AI lifecycle.

Original text:

In cases where an AI system presents unacceptable negative risk levels – such as where significant negative impacts are imminent, severe harms are actually occurring, or catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently managed. If an AI system's development, deployment, and use cases are found to be low-risk in a specific context, that may suggest potentially lower prioritization.

Risk prioritization may differ between AI systems that are designed or deployed to directly interact with humans as compared to AI systems that are not. Higher initial prioritization may be called for in settings where the AI system is trained on large datasets comprised of sensitive or protected data such as personally identifiable information, or where the outputs of the AI systems have direct or indirect impact on humans. AI systems designed to interact only with computational systems and trained on non-sensitive datasets (for example, data collected from the physical environment) may call for lower initial prioritization. Nonetheless, regularly assessing and prioritizing risk based on context remains important because non-human-facing AI systems can have downstream safety or social implications.

Residual risk – defined as risk remaining after risk treatment (Source: ISO GUIDE 73) – directly impacts end users or affected individuals and communities. Documenting residual risks will call for the system provider to fully consider the risks of deploying the AI product and will inform end users about potential negative impacts of interacting with the system.

1.2.4 Organizational Integration and Management of Risk

AI risks should not be considered in isolation. Different AI actors have different responsibilities and awareness depending on their roles in the lifecycle.
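Picking up the hypothetical scoring sketch from Section 1.1 above, the prioritization rules described here could be encoded roughly as follows. The thresholds, function names, and the additive mitigation model are all assumptions; the RMF leaves triage criteria and tolerances to each organization.

```python
# Hypothetical triage rules reflecting Section 1.2.3 (not prescribed by NIST):
# unacceptable risk halts work; human-facing or sensitive-data systems get a
# higher starting priority; residual risk (risk left after treatment) is
# documented for end users.

def initial_priority(human_facing: bool, sensitive_data: bool) -> str:
    """Assign a starting triage tier from two contextual factors."""
    return "high" if (human_facing or sensitive_data) else "low"

def triage(risk_score: int, unacceptable_threshold: int = 12) -> str:
    """Map a composite risk score to an action; the threshold is an assumption."""
    if risk_score >= unacceptable_threshold:
        return "cease development/deployment safely until risks are managed"
    return "prioritize per assessed risk level and context"

def residual_risk(initial_score: int, mitigation_effect: int) -> int:
    """Risk remaining after risk treatment (cf. ISO GUIDE 73); document it."""
    return max(initial_score - mitigation_effect, 0)
```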

AI risk management should not be handled in isolation; instead, it must be integrated into an organization's broader enterprise risk management strategy alongside concerns like cybersecurity and privacy. Because different actors in the AI lifecycle have varying levels of information and responsibility, organizations should use the NIST AI RMF (AI Risk Management Framework) in conjunction with other existing frameworks. Many AI risks—such as data privacy, environmental impact, and system security—overlap with standard software development challenges. Ultimately, the AI RMF is only effective if supported by senior leadership, clear accountability, and a strong organizational culture. Organizations of all sizes must adapt these practices to fit their specific resources and capabilities.

Original text:

For example, organizations developing an AI system often will not have information about how the system may be used. AI risk management should be integrated and incorporated into broader enterprise risk management strategies and processes. Treating AI risks along with other critical risks, such as cybersecurity and privacy, will yield a more integrated outcome and organizational efficiencies.

The AI RMF may be utilized along with related guidance and frameworks for managing AI system risks or broader enterprise risks. Some risks related to AI systems are common across other types of software development and deployment. Examples of overlapping risks include: privacy concerns related to the use of underlying data to train AI systems; the energy and environmental implications associated with resource-heavy computing demands; security concerns related to the confidentiality, integrity, and availability of the system and its training and output data; and general security of the underlying software and hardware for AI systems.

Organizations need to establish and maintain the appropriate accountability mechanisms, roles and responsibilities, culture, and incentive structures for risk management to be effective. Use of the AI RMF alone will not lead to these changes or provide the appropriate incentives. Effective risk management is realized through organizational commitment at senior levels and may require cultural change within an organization or industry. In addition, small to medium-sized organizations managing AI risks or implementing the AI RMF may face different challenges than large organizations, depending on their capabilities and resources.

Implementing the AI RMF may require cultural shifts, and the challenges faced vary with an organization's size and resources. The framework is intended for a diverse range of 'AI actors': individuals and organizations with varied expertise and backgrounds who perform or manage the design, development, deployment, evaluation, and use of AI systems. It draws on the OECD's classification of AI lifecycle activities along five socio-technical dimensions; NIST's adaptation shows the Application Context, Data and Input, AI Model, and Task and Output dimensions and emphasizes Test, Evaluation, Verification, and Validation (TEVV) processes throughout the lifecycle. All AI actors, particularly those with TEVV expertise, should collaborate using this framework to ensure AI systems are trustworthy and responsible.

Original text:

2. Audience

Identifying and managing AI risks and potential impacts – both positive and negative – requires a broad set of perspectives and actors across the AI lifecycle. Ideally, AI actors will represent a diversity of experience, expertise, and backgrounds and comprise demographically and disciplinarily diverse teams. The AI RMF is intended to be used by AI actors across the AI lifecycle and dimensions.

The OECD has developed a framework for classifying AI lifecycle activities according to five key socio-technical dimensions, each with properties relevant for AI policy and governance, including risk management [OECD (2022) OECD Framework for the Classification of AI systems — OECD Digital Economy Papers]. Figure 2 shows these dimensions, slightly modified by NIST for purposes of this framework. The NIST modification highlights the importance of test, evaluation, verification, and validation (TEVV) processes throughout an AI lifecycle and generalizes the operational context of an AI system.

AI dimensions displayed in Figure 2 are the Application Context, Data and Input, AI Model, and Task and Output. AI actors involved in these dimensions who perform or manage the design, development, deployment, evaluation, and use of AI systems and drive AI risk management efforts are the primary AI RMF audience.

Representative AI actors across the lifecycle dimensions are listed in Figure 3 and described in detail in Appendix A. Within the AI RMF, all AI actors work together to manage risks and achieve the goals of trustworthy and responsible AI. AI actors with TEVV-specific expertise are integrated throughout the AI lifecycle and are especially likely to benefit from the Framework.

The NIST AI RMF 1.0 encourages all AI stakeholders to collaborate to ensure AI is trustworthy and responsible. Experts in Testing, Evaluation, Verification, and Validation (TEVV) play a critical role throughout the AI lifecycle. By performing TEVV tasks regularly, teams can identify technical, legal, and ethical risks, allowing for both immediate adjustments and long-term risk management. The 'People & Planet' dimension, central to the framework, focuses on human rights and societal well-being. This area involves a secondary group of stakeholders—including trade associations, researchers, advocacy groups, and impacted communities—who provide essential input to the primary AI actors. Risk management should begin during the initial planning and design phases and continue throughout the entire lifecycle of the AI system.

Original text:

Performed regularly, TEVV tasks can provide insights relative to technical, societal, legal, and ethical standards or norms, and can assist with anticipating impacts and assessing and tracking emergent risks. As a regular process within an AI lifecycle, TEVV allows for both mid-course remediation and post-hoc risk management.

The People & Planet dimension at the center of Figure 2 represents human rights and the broader well-being of society and the planet. The AI actors in this dimension comprise a separate AI RMF audience who informs the primary audience. These AI actors may include trade associations, standards developing organizations, researchers, advocacy groups, environmental groups, civil society organizations, end users, and potentially impacted individuals and communities.

[Fig. 2: Lifecycle and Key Dimensions of an AI System. Modified from OECD (2022) OECD Framework for the Classification of AI systems — OECD Digital Economy Papers.]

Risk management ideally begins with the Plan and Design phase and continues throughout the AI system lifecycle. It requires collaboration among diverse AI actors, including environmental groups, civil society organizations, end users, and impacted communities. These groups help provide context on potential impacts, serve as a source of norms and guidance, designate boundaries for AI operation, and promote discussion of the tradeoffs needed to balance societal priorities such as civil liberties and rights, equity, the environment, and the economy. Success depends on collective responsibility and diverse teams, which help surface hidden assumptions and risks. As outlined in the NIST AI RMF 1.0, it is a best practice to separate those who build and use AI models from those who verify and validate them. Ultimately, to be considered trustworthy, AI systems must be responsive to the diverse values and needs of interested parties.

Original text:

These actors can:

• assist in providing context and understanding potential and actual impacts;
• be a source of formal or quasi-formal norms and guidance for AI risk management;
• designate boundaries for AI operation (technical, societal, legal, and ethical); and
• promote discussion of the tradeoffs needed to balance societal values and priorities related to civil liberties and rights, equity, the environment and the planet, and the economy.

Successful risk management depends upon a sense of collective responsibility among AI actors shown in Figure 3. The AI RMF functions, described in Section 5, require diverse perspectives, disciplines, professions, and experiences. Diverse teams contribute to more open sharing of ideas and assumptions about the purposes and functions of technology – making these implicit aspects more explicit. This broader collective perspective creates opportunities for surfacing problems and identifying existing and emergent risks.

[Fig. 3: AI actors across AI lifecycle stages. See Appendix A for detailed descriptions of AI actor tasks.]

3. AI Risks and Trustworthiness

For AI systems to be trustworthy, they often need to be responsive to a multiplicity of criteria that are of value to interested parties.

The AI RMF 1.0 framework outlines how to build trustworthy AI by managing risks and balancing key characteristics. These characteristics include being valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with managed bias. Because these traits are socio-technical, they depend on organizational behavior, data quality, model selection, and human oversight. Trustworthiness is not a one-size-fits-all approach; developers must use human judgment to balance these factors based on the specific context of the AI system, as prioritizing one trait may require trade-offs with others.

Approaches which enhance AI trustworthiness can reduce negative AI risks. This Framework articulates the following characteristics of trustworthy AI and offers guidance for addressing them. Characteristics of trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. Creating trustworthy AI requires balancing each of these characteristics based on the AI system's context of use. While all characteristics are socio-technical system attributes, accountability and transparency also relate to the processes and activities internal to an AI system and its external setting. Neglecting these characteristics can increase the probability and magnitude of negative consequences.

[Fig. 4. Characteristics of trustworthy AI systems. Valid & Reliable is a necessary condition of trustworthiness and is shown as the base for other trustworthiness characteristics. Accountable & Transparent is shown as a vertical box because it relates to all other characteristics.]

Trustworthiness characteristics (shown in Figure 4) are inextricably tied to social and organizational behavior, the datasets used by AI systems, selection of AI models and algorithms and the decisions made by those who build them, and the interactions with the humans who provide insight from and oversight of such systems. Human judgment should be employed when deciding on the specific metrics related to AI trustworthiness characteristics and the precise threshold values for those metrics.

Addressing AI trustworthiness characteristics individually will not ensure AI system trustworthiness; tradeoffs are usually involved, rarely do all characteristics apply in every setting, and some will be more or less important in any given situation.

Achieving AI trustworthiness is not as simple as addressing individual characteristics, as they often conflict and vary in importance depending on the situation. Trustworthiness is a social concept, and a system is only as trustworthy as its weakest characteristic. Organizations must navigate difficult tradeoffs, such as balancing predictive accuracy against interpretability or privacy. Because these tradeoffs depend on specific contexts and values, they must be resolved through transparent and justifiable decision-making. To improve contextual awareness, organizations should involve subject matter experts in evaluations and engage a diverse range of interested parties throughout the AI lifecycle. These practices increase the likelihood that AI risks arising in social contexts are managed appropriately and that the system's benefits and positive impacts are identified.

Ultimately, trustworthiness is a social concept that ranges across a spectrum and is only as strong as its weakest characteristics.

When managing AI risks, organizations can face difficult decisions in balancing these characteristics. For example, in certain scenarios tradeoffs may emerge between optimizing for interpretability and achieving privacy. In other cases, organizations might face a tradeoff between predictive accuracy and interpretability. Or, under certain conditions such as data sparsity, privacy-enhancing techniques can result in a loss in accuracy, affecting decisions about fairness and other values in certain domains. Dealing with tradeoffs requires taking into account the decision-making context. These analyses can highlight the existence and extent of tradeoffs between different measures, but they do not answer questions about how to navigate the tradeoff. Those depend on the values at play in the relevant context and should be resolved in a manner that is both transparent and appropriately justifiable.

There are multiple approaches for enhancing contextual awareness in the AI lifecycle. For example, subject matter experts can assist in the evaluation of TEVV findings and work with product and deployment teams to align TEVV parameters to requirements and deployment conditions. When properly resourced, increasing the breadth and diversity of input from interested parties and relevant AI actors throughout the AI lifecycle can enhance opportunities for informing contextually sensitive evaluations, and for identifying AI system benefits and positive impacts. These practices can increase the likelihood that risks arising in social contexts are managed appropriately.

Involving relevant AI actors throughout the AI lifecycle helps identify system benefits and manage risks arising in social contexts. Because different roles, such as designers, developers, and deployers, perceive trustworthiness characteristics differently, these actors must collaborate. Trustworthiness characteristics are interconnected; for example, a system that is secure but unfair, or accurate but opaque, is undesirable. Therefore, all AI actors share the responsibility of balancing these tradeoffs and determining whether an AI tool is appropriate for a specific context. Decisions to commission or deploy AI should be based on a contextual assessment of risks, impacts, costs, and benefits, informed by a broad set of interested parties. A key trustworthiness characteristic is that systems be valid and reliable. Validation, as defined by ISO 9000:2015, is confirmation that the requirements for a specific intended use or application have been fulfilled. Reliability, defined in ISO/IEC TS 5723:2022, is the ability of an item to perform as required, without failure, for a given time interval under given conditions. Systems that are inaccurate, unreliable, or fail to generalize beyond their training data and settings increase risks and undermine trust.

Understanding and treatment of trustworthiness characteristics depends on an AI actor's particular role within the AI lifecycle. For any given AI system, an AI designer or developer may have a different perception of the characteristics than the deployer.

Trustworthiness characteristics explained in this document influence each other. Highly secure but unfair systems, accurate but opaque and uninterpretable systems, and inaccurate but secure, privacy-enhanced, and transparent systems are all undesirable. A comprehensive approach to risk management calls for balancing tradeoffs among the trustworthiness characteristics. It is the joint responsibility of all AI actors to determine whether AI technology is an appropriate or necessary tool for a given context or purpose, and how to use it responsibly. The decision to commission or deploy an AI system should be based on a contextual assessment of trustworthiness characteristics and the relative risks, impacts, costs, and benefits, and informed by a broad set of interested parties.

3.1 Valid and Reliable

Validation is the "confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled" (Source: ISO 9000:2015). Deployment of AI systems which are inaccurate, unreliable, or poorly generalized to data and settings beyond their training creates and increases negative AI risks and reduces trustworthiness.

Reliability is defined as the "ability of an item to perform as required, without failure, for a given time interval, under given conditions" (Source: ISO/IEC TS 5723:2022).

Within the NIST AI RMF 1.0, three related concepts, drawn from ISO/IEC TS 5723:2022, underpin the valid-and-reliable characteristic: reliability, accuracy, and robustness. Reliability is the ability of an AI system to perform as required, without failure, over a given time interval and under expected conditions. Accuracy is how close an AI's results are to the true values or the values accepted as true; measuring it requires clearly defined and realistic test sets, documented methodology, and attention to both computational metrics and human-AI teaming. Robustness, or generalizability, is the ability of a system to maintain its level of performance across a variety of circumstances, including unexpected settings, while minimizing potential harm to people. Accuracy and robustness both contribute to validity and trustworthiness, but they can be in tension with one another.

Reliability is a goal for overall correctness of AI system operation under the conditions of expected use and over a given period of time, including the entire lifetime of the system.

Accuracy and robustness contribute to the validity and trustworthiness of AI systems, and can be in tension with one another in AI systems.

Accuracy is defined by ISO/IEC TS 5723:2022 as "closeness of results of observations, computations, or estimates to the true values or the values accepted as being true." Measures of accuracy should consider computational-centric measures (e.g., false positive and false negative rates), human-AI teaming, and demonstrate external validity (generalizable beyond the training conditions). Accuracy measurements should always be paired with clearly defined and realistic test sets – that are representative of conditions of expected use – and details about test methodology; these should be included in associated documentation. Accuracy measurements may include disaggregation of results for different data segments.

Robustness or generalizability is defined as the "ability of a system to maintain its level of performance under a variety of circumstances" (Source: ISO/IEC TS 5723:2022). Robustness is a goal for appropriate system functionality in a broad set of conditions and circumstances, including uses of AI systems not initially anticipated. Robustness requires not only that the system perform exactly as it does under expected uses, but also that it should perform in ways that minimize potential harms to people if it is operating in an unexpected setting.
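The computational-centric measures named above (false positive and false negative rates) and the disaggregation of results across data segments can be made concrete with a short sketch. This is a minimal illustration, not part of the framework itself; the labels, predictions, and segment attribute below are hypothetical.

```python
from collections import defaultdict

def error_rates(y_true, y_pred):
    """False positive and false negative rates for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return (fp / negatives if negatives else float("nan"),
            fn / positives if positives else float("nan"))

def disaggregated_error_rates(y_true, y_pred, segments):
    """Report (FPR, FNR) separately for each data segment, the kind of
    disaggregated accuracy documentation described above."""
    grouped = defaultdict(lambda: ([], []))
    for t, p, s in zip(y_true, y_pred, segments):
        grouped[s][0].append(t)
        grouped[s][1].append(p)
    return {s: error_rates(ts, ps) for s, (ts, ps) in grouped.items()}

# Hypothetical test-set results, tagged with an arbitrary segment attribute.
y_true   = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred   = [1, 0, 0, 1, 1, 0, 1, 0]
segments = ["A", "A", "A", "B", "B", "B", "B", "A"]
print(disaggregated_error_rates(y_true, y_pred, segments))
```

Pairing such numbers with a clearly documented, representative test set is what the text above calls for; the code only shows the arithmetic.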

According to the NIST AI RMF 1.0, AI systems must be robust, reliable, and valid to ensure they function as intended and minimize harm, especially in unexpected situations. Safety is defined by ISO/IEC TS 5723:2022 as the prevention of danger to human life, health, property, or the environment. To achieve this, organizations should prioritize safety from the earliest design stages through deployment. Key strategies include responsible development, clear user documentation, and human intervention when systems fail. Risk management must be tailored to the severity of potential harm, with the highest priority given to risks that could cause serious injury or death.

Validity and reliability for deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended. Measurement of validity, accuracy, robustness, and reliability contribute to trustworthiness and should take into consideration that certain types of failures can cause greater harm. AI risk management efforts should prioritize the minimization of potential negative impacts, and may need to include human intervention in cases where the AI system cannot detect or correct errors.

3.2 Safe

AI systems should "not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered" (Source: ISO/IEC TS 5723:2022). Safe operation of AI systems is improved through:
• responsible design, development, and deployment practices;
• clear information to deployers on responsible use of the system;
• responsible decision-making by deployers and end users; and
• explanations and documentation of risks based on empirical evidence of incidents.

Different types of safety risks may require tailored AI risk management approaches based on context and the severity of potential risks presented. Safety risks that pose a potential risk of serious injury or death call for the most urgent prioritization and most thorough risk management process.

Employing safety considerations during the lifecycle and starting as early as possible with planning and design can prevent failures or conditions that can render a system dangerous.
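The ongoing-monitoring and human-intervention points above can be read as a runtime guard: periodic checks confirm the system performs as intended, and deviations trigger escalation to a human or a safe stop. The sketch below illustrates that pattern under assumed thresholds; nothing here is a NIST-prescribed mechanism, and real thresholds would follow from context and the severity of potential harms.

```python
import statistics

# Hypothetical thresholds; in practice these come from the organization's
# risk tolerance and the severity of the harms a failure could cause.
ACCURACY_FLOOR = 0.90   # below this, escalate decisions to human review
HARD_FLOOR = 0.75       # below this, stop serving predictions entirely

def monitor(window_of_outcomes):
    """Check a recent window of labeled outcomes (1 = correct, 0 = error)
    and decide whether the system may continue operating."""
    accuracy = statistics.mean(window_of_outcomes)
    if accuracy < HARD_FLOOR:
        return "shutdown"      # system deviates badly: halt and investigate
    if accuracy < ACCURACY_FLOOR:
        return "human_review"  # degraded: route decisions to a human
    return "continue"

print(monitor([1, 1, 0, 1, 1, 1, 1, 1]))  # continue
print(monitor([1, 0, 0, 1, 0, 1, 0, 1]))  # shutdown (accuracy 0.5)
```

For safety risks at the serious-injury-or-death end of the spectrum, the escalation paths would be correspondingly stricter and more thoroughly tested.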

According to the NIST AI RMF 1.0, AI safety should be integrated early in the design phase to prevent dangerous failures. Effective safety strategies include rigorous simulation and in-domain testing, real-time monitoring, and the ability for human intervention or system shutdown. These practices should align with established safety standards from fields like healthcare and transportation. Regarding security and resilience, AI systems must be able to withstand unexpected changes or attacks while maintaining core functions. Security involves maintaining confidentiality, integrity, and availability against threats such as data poisoning, adversarial examples, and model or data exfiltration; guidance from the NIST Cybersecurity Framework and Risk Management Framework applies here. Resilience focuses on the system's ability to recover from adverse events and degrade safely when necessary.

Other practical approaches for AI safety often relate to rigorous simulation and in-domain testing, real-time monitoring, and the ability to shut down, modify, or have human intervention into systems that deviate from intended or expected functionality.

AI safety risk management approaches should take cues from efforts and guidelines for safety in fields such as transportation and healthcare, and align with existing sector- or application-specific guidelines or standards.

3.3 Secure and Resilient

AI systems, as well as the ecosystems in which they are deployed, may be said to be resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary (Adapted from: ISO/IEC TS 5723:2022). Common security concerns relate to adversarial examples, data poisoning, and the exfiltration of models, training data, or other intellectual property through AI system endpoints. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure. Guidelines in the NIST Cybersecurity Framework and Risk Management Framework are among those which are applicable here.

Security and resilience are related but distinct characteristics. While resilience is the ability to return to normal function after an unexpected adverse event, security includes resilience but also encompasses protocols to avoid, protect against, respond to, or recover from attacks.

Resilience and security are related but distinct. Resilience is the ability to recover from unexpected adverse events, while security includes resilience plus the protocols needed to avoid, respond to, and recover from attacks. Resilience also covers how models handle misuse or adversarial use. Trustworthy AI requires accountability, which presupposes transparency. Transparency means providing appropriate information about an AI system (such as its design decisions, training data, model structure, and intended use) to the right people at the right time. This openness builds confidence and supports redress when the system's outputs are incorrect or harmful. While transparency alone does not guarantee that a system is accurate, secure, or fair, it is essential for evaluating those qualities as complex systems evolve.

Resilience relates to robustness and goes beyond the provenance of the data to encompass unexpected or adversarial use (or abuse or misuse) of the model or data.

3.4 Accountable and Transparent

Trustworthy AI depends upon accountability. Accountability presupposes transparency. Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system – regardless of whether they are even aware that they are doing so. Meaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system. By promoting higher levels of understanding, transparency increases confidence in the AI system.

This characteristic's scope spans from design decisions and training data to model training, the structure of the model, its intended use cases, and how and when deployment, post-deployment, or end user decisions were made and by whom. Transparency is often necessary for actionable redress related to AI system outputs that are incorrect or otherwise lead to negative impacts. Transparency should consider human-AI interaction: for example, how a human operator or user is notified when a potential or actual adverse outcome caused by an AI system is detected. A transparent system is not necessarily an accurate, privacy-enhanced, secure, or fair system. However, it is difficult to determine whether an opaque system possesses such characteristics, and to do so over time as complex systems evolve.

Transparency does not automatically guarantee that an AI system is accurate, secure, or fair, but it is much harder to evaluate these qualities in opaque systems. Accountability also depends on the roles of AI actors, and the relationship between risk and responsibility varies across cultural, legal, sectoral, and societal contexts. When AI decisions affect life or liberty, developers and deployers should proportionally and proactively increase their transparency and accountability practices. Organizations should maintain risk management and harm-reduction structures to improve accountability, while balancing these efforts against resource constraints and the protection of proprietary information. Maintaining clear records of training data provenance and respecting intellectual property rights are likewise essential for accountability. Developers are encouraged to test various transparency tools in collaboration with deployers to ensure systems are used as intended. Finally, explainability (a representation of how an AI system operates) and interpretability (what its outputs mean in the context of their designed purpose) are both critical for helping users and overseers evaluate and trust AI systems.

The role of AI actors should be considered when seeking accountability for the outcomes of AI systems. The relationship between risk and accountability associated with AI and technological systems more broadly differs across cultural, legal, sectoral, and societal contexts. When consequences are severe, such as when life and liberty are at stake, AI developers and deployers should consider proportionally and proactively adjusting their transparency and accountability practices. Maintaining organizational practices and governing structures for harm reduction, like risk management, can help lead to more accountable systems. Measures to enhance transparency and accountability should also consider the impact of these efforts on the implementing entity, including the level of necessary resources and the need to safeguard proprietary information.

Maintaining the provenance of training data and supporting attribution of the AI system's decisions to subsets of training data can assist with both transparency and accountability. Training data may also be subject to copyright and should follow applicable intellectual property rights laws.

As transparency tools for AI systems and related documentation continue to evolve, developers of AI systems are encouraged to test different types of transparency tools in cooperation with AI deployers to ensure that AI systems are used as intended.

3.5 Explainable and Interpretable

Explainability refers to a representation of the mechanisms underlying AI systems' operation, whereas interpretability refers to the meaning of AI systems' output in the context of their designed functional purposes. Together, explainability and interpretability assist those operating or overseeing an AI system, as well as users of an AI system, to gain deeper insights into the functionality and trustworthiness of the system, including its outputs.

Explainability and interpretability help users, operators, and overseers understand how AI systems function and why their outputs can be trusted. Many perceived risks stem from an inability to make sense of, or contextualize, system output. By providing explanations tailored to a user's role, knowledge, and skill level, organizations can better manage these risks, improve debugging and monitoring, and strengthen documentation, audit, and governance. While transparency answers "what happened," explainability answers "how" a decision was made, and interpretability answers "why" it was made and what it means to the user. Privacy practices, in turn, safeguard human autonomy, identity, and dignity by limiting intrusion and observation and preserving individuals' agency over facets of their identities such as their data and reputation.

The underlying assumption is that perceptions of negative risk stem from a lack of ability to make sense of, or contextualize, system output appropriately. Explainable and interpretable AI systems offer information that will help end users understand the purposes and potential impact of an AI system.

Risk from lack of explainability may be managed by describing how AI systems function, with descriptions tailored to individual differences such as the user's role, knowledge, and skill level. Explainable systems can be debugged and monitored more easily, and they lend themselves to more thorough documentation, audit, and governance.

Risks to interpretability often can be addressed by communicating a description of why an AI system made a particular prediction or recommendation. (See "Four Principles of Explainable Artificial Intelligence" and "Psychological Foundations of Explainability and Interpretability in Artificial Intelligence.")

Transparency, explainability, and interpretability are distinct characteristics that support each other. Transparency can answer the question of "what happened" in the system. Explainability can answer the question of "how" a decision was made in the system. Interpretability can answer the question of "why" a decision was made by the system and its meaning or context to the user.

3.6 Privacy-Enhanced

Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals' agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation).

Privacy in AI protects human autonomy, identity, and dignity by limiting observation and ensuring individuals control their personal data and reputation, as outlined in the NIST Privacy Framework. Designers should prioritize anonymity, confidentiality, and control, while balancing privacy against security, bias, and transparency. AI systems can create new privacy risks, such as identifying individuals through data inference. While Privacy-Enhancing Technologies (PETs) and data minimization methods like de-identification help, they may sometimes reduce model accuracy. Regarding fairness, AI systems must address harmful bias and discrimination. Fairness is complex and culturally subjective, so organizations must consider diverse perspectives. Even if a system balances predictions across demographic groups, it may still be unfair if it excludes people with disabilities or worsens systemic inequalities. Ultimately, bias involves more than just data representativeness; it is a broad issue that requires ongoing management.

(See The NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management.)

Privacy values such as anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment. Privacy-related risks may influence security, bias, and transparency and come with tradeoffs with these other characteristics. Like safety and security, specific technical features of an AI system may promote or reduce privacy. AI systems can also present new risks to privacy by allowing inference to identify individuals or previously private information about individuals.

Privacy-enhancing technologies ("PETs") for AI, as well as data minimizing methods such as de-identification and aggregation for certain model outputs, can support design for privacy-enhanced AI systems. Under certain conditions such as data sparsity, privacy-enhancing techniques can result in a loss in accuracy, affecting decisions about fairness and other values in certain domains.

3.7 Fair – with Harmful Bias Managed

Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination. Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures and may shift depending on application. Organizations' risk management efforts will be enhanced by recognizing and considering these differences. Systems in which harmful biases are mitigated are not necessarily fair. For example, systems in which predictions are somewhat balanced across demographic groups may still be inaccessible to individuals with disabilities or affected by the digital divide or may exacerbate existing disparities or systemic biases.

Bias is broader than demographic balance and data representativeness.
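As one illustration of the data-minimizing methods mentioned above, the sketch below aggregates record-level outputs into group counts and suppresses small cells, a common de-identification-style step. Under sparse data the suppression discards information, which is one way the accuracy loss noted above can arise. The field name, threshold, and data are assumptions made for the example.

```python
from collections import Counter

def aggregate_with_suppression(records, key, min_cell_size=5):
    """Aggregate record-level data into counts per group, suppressing any
    group smaller than min_cell_size so individuals are harder to single out."""
    counts = Counter(r[key] for r in records)
    return {group: (n if n >= min_cell_size else None)  # None = suppressed
            for group, n in counts.items()}

# Hypothetical model outputs tagged with a coarse region attribute.
records = [{"region": "north"}] * 12 + [{"region": "south"}] * 3
print(aggregate_with_suppression(records, "region"))
# {'north': 12, 'south': None} -- the sparse cell is withheld, trading
# completeness (and downstream accuracy) for privacy.
```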

According to the NIST AI RMF 1.0, AI bias is a complex issue that goes beyond just data representation. NIST identifies three main categories of bias: systemic (found in datasets, organizational practices, and society), computational and statistical (caused by errors or non-representative data), and human-cognitive (how people perceive information or make decisions). These biases can occur even without intentional discrimination. Because AI can accelerate and scale these biases, they can cause significant harm to individuals and society. Managing these risks is essential for ensuring fairness and transparency in AI systems. For further details, refer to NIST Special Publication 1270.

NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive. Each of these can occur in the absence of prejudice, partiality, or discriminatory intent. Systemic bias can be present in AI datasets, the organizational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems. Computational and statistical biases can be present in AI datasets and algorithmic processes, and often stem from systematic errors due to non-representative samples. Human-cognitive biases relate to how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about purposes and functions of an AI system. Human-cognitive biases are omnipresent in decision-making processes across the AI lifecycle and system use, including the design, implementation, operation, and maintenance of AI.

Bias exists in many forms and can become ingrained in the automated systems that help make decisions about our lives. While bias is not always a negative phenomenon, AI systems can potentially increase the speed and scale of biases and perpetuate and amplify harms to individuals, groups, communities, organizations, and society. Bias is tightly associated with the concepts of transparency as well as fairness in society. (For more information about bias, including the three categories, see NIST Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.)
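Computational and statistical bias stemming from non-representative samples, as described above, can be probed with simple checks. The sketch below compares a dataset's group shares against a reference population; it is illustrative only, and the group labels, reference shares, and tolerance are invented.

```python
def representativeness_gaps(sample_groups, reference_shares, tolerance=0.05):
    """Flag groups whose share in the sample deviates from the reference
    population by more than the tolerance."""
    total = len(sample_groups)
    flags = {}
    for group, expected in reference_shares.items():
        observed = sample_groups.count(group) / total
        if abs(observed - expected) > tolerance:
            flags[group] = {"observed": round(observed, 3), "expected": expected}
    return flags

# Hypothetical training sample and census-style reference shares.
sample = ["x"] * 80 + ["y"] * 15 + ["z"] * 5
reference = {"x": 0.60, "y": 0.30, "z": 0.10}
print(representativeness_gaps(sample, reference))
# {'x': {'observed': 0.8, 'expected': 0.6}, 'y': {'observed': 0.15, 'expected': 0.3}}
```

A check like this addresses only one narrow slice of one bias category; systemic and human-cognitive biases call for organizational and process measures rather than code.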

NIST plans to work with the AI community to evaluate the effectiveness of the AI Risk Management Framework (AI RMF 1.0). Organizations using the framework should regularly assess if it helps them manage AI risks, including their policies, processes, and outcomes. NIST will collaborate with users to create metrics and share findings. By using the AI RMF, organizations can expect several benefits: improved risk governance and documentation; a better understanding of AI trustworthiness and trade-offs; clearer decision-making for system deployment; stronger organizational accountability; a culture that prioritizes risk management; better information sharing; increased awareness of downstream risks; improved engagement with stakeholders; and enhanced capacity for testing, evaluation, verification, and validation (TEVV).

4. Effectiveness of the AI RMF

Evaluations of AI RMF effectiveness – including ways to measure bottom-line improvements in the trustworthiness of AI systems – will be part of future NIST activities, in conjunction with the AI community.

Organizations and other users of the Framework are encouraged to periodically evaluate whether the AI RMF has improved their ability to manage AI risks, including but not limited to their policies, processes, practices, implementation plans, indicators, measurements, and expected outcomes. NIST intends to work collaboratively with others to develop metrics, methodologies, and goals for evaluating the AI RMF's effectiveness, and to broadly share results and supporting information. Framework users are expected to benefit from:
• enhanced processes for governing, mapping, measuring, and managing AI risk, and clearly documenting outcomes;
• improved awareness of the relationships and tradeoffs among trustworthiness characteristics, socio-technical approaches, and AI risks;
• explicit processes for making go/no-go system commissioning and deployment decisions;
• established policies, processes, practices, and procedures for improving organizational accountability efforts related to AI system risks;
• enhanced organizational culture which prioritizes the identification and management of AI system risks and potential impacts to individuals, communities, organizations, and society;
• better information sharing within and across organizations about risks, decision-making processes, responsibilities, common pitfalls, TEVV practices, and approaches for continuous improvement;
• greater contextual knowledge for increased awareness of downstream risks;
• strengthened engagement with interested parties and relevant AI actors; and
• augmented capacity for TEVV of AI systems and associated risks.

The NIST AI RMF 1.0 (AI Risk Management Framework) provides a structured approach to building trustworthy AI. Its Core consists of four primary functions: GOVERN, MAP, MEASURE, and MANAGE. These functions are organized into categories and subcategories that outline specific outcomes and actions, which are not intended as a rigid checklist. Governance acts as a foundational element that informs the other three functions. Effective risk management requires a continuous, multidisciplinary process throughout the entire AI lifecycle. Engaging diverse perspectives—including those from outside the organization—is essential to identifying potential risks and improving the overall quality of AI systems.

Part 2: Core and Profiles

5. AI RMF Core

The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. As illustrated in Figure 5, the Core is composed of four functions: GOVERN, MAP, MEASURE, and MANAGE. Each of these high-level functions is broken down into categories and subcategories. Categories and subcategories are subdivided into specific actions and outcomes. Actions do not constitute a checklist, nor are they necessarily an ordered set of steps.

[Fig. 5. Functions organize AI risk management activities at their highest level to govern, map, measure, and manage AI risks. Governance is designed to be a cross-cutting function to inform and be infused throughout the other three functions.]

Risk management should be continuous, timely, and performed throughout the AI system lifecycle dimensions. AI RMF Core functions should be carried out in a way that reflects diverse and multidisciplinary perspectives, potentially including the views of AI actors outside the organization. Having a diverse team contributes to more open sharing of ideas and assumptions about purposes and functions of the technology being designed, developed, deployed, or evaluated – which can create opportunities to surface problems and identify existing and emergent risks.
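Because the Core is a function, category, and subcategory hierarchy rather than an ordered checklist, some organizations mirror it as plain data in their tracking tooling. Below is a minimal sketch of that idea, abbreviating two real GOVERN entries from Table 1; the dictionary layout itself is an assumption, not a NIST-published format.

```python
# Minimal nested representation of the AI RMF Core hierarchy:
# function -> category -> subcategory -> outcome statement.
# Entries are abbreviated from Table 1; the structure is illustrative.
ai_rmf_core = {
    "GOVERN": {
        "GOVERN 1": {
            "statement": "Policies, processes, procedures, and practices "
                         "for AI risk are in place, transparent, and "
                         "implemented effectively.",
            "subcategories": {
                "GOVERN 1.1": "Legal and regulatory requirements involving "
                              "AI are understood, managed, and documented.",
                "GOVERN 1.2": "The characteristics of trustworthy AI are "
                              "integrated into organizational policies.",
            },
        },
    },
    "MAP": {}, "MEASURE": {}, "MANAGE": {},  # filled in the same shape
}

# Outcomes are not an ordered set of steps, so tooling can iterate freely.
for function, categories in ai_rmf_core.items():
    for category, detail in categories.items():
        print(function, category, "->", len(detail["subcategories"]), "subcategories")
```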

The NIST AI RMF 1.0 helps organizations identify and manage risks throughout the AI lifecycle. To assist with implementation, NIST provides the AI RMF Playbook, a voluntary online resource offering tactical guidance that organizations can customize to fit their specific needs. Both the AI RMF and the Playbook are part of the NIST Trustworthy and Responsible AI Resource Center. Organizations can apply the framework's functions—GOVERN, MAP, MEASURE, and MANAGE—in any order that suits their capabilities, though most users begin with GOVERN and MAP. The process is intended to be iterative, allowing users to cross-reference categories and subcategories as they manage AI risks.

An online companion resource to the AI RMF, the NIST AI RMF Playbook, is available to help organizations navigate the AI RMF and achieve its outcomes through suggested tactical actions they can apply within their own contexts. Like the AI RMF, the Playbook is voluntary and organizations can utilize the suggestions according to their needs and interests. Playbook users can create tailored guidance selected from suggested material for their own use and contribute their suggestions for sharing with the broader community. Along with the AI RMF, the Playbook is part of the NIST Trustworthy and Responsible AI Resource Center.

Framework users may apply these functions as best suits their needs for managing AI risks based on their resources and capabilities. Some organizations may choose to select from among the categories and subcategories; others may choose and have the capacity to apply all categories and subcategories. Assuming a governance structure is in place, functions may be performed in any order across the AI lifecycle as deemed to add value by a user of the framework. After instituting the outcomes in GOVERN, most users of the AI RMF would start with the MAP function and continue to MEASURE or MANAGE. However users integrate the functions, the process should be iterative, with cross-referencing between functions as necessary. Similarly, there are categories and subcategories with elements that apply to multiple functions, or that logically should take place before certain subcategory decisions.

The GOVERN function is a foundational, cross-cutting element of the NIST AI RMF 1.0 that must be integrated into all stages of an AI system's lifecycle. It establishes a culture of risk management by aligning technical AI development with organizational values, policies, and strategic goals. Key responsibilities of GOVERN include: creating processes to identify and manage risks to users and society; assessing potential impacts; ensuring compliance with legal and third-party requirements; and providing a framework for individuals involved in acquiring, training, and monitoring AI. Effective governance requires continuous oversight from senior leadership, who set the tone for the organization's risk tolerance and ethical standards.

5.1 Govern

The GOVERN function:
• cultivates and implements a culture of risk management within organizations designing, developing, deploying, evaluating, or acquiring AI systems;
• outlines processes, documents, and organizational schemes that anticipate, identify, and manage the risks a system can pose, including to users and others across society – and procedures to achieve those outcomes;
• incorporates processes to assess potential impacts;
• provides a structure by which AI risk management functions can align with organizational principles, policies, and strategic priorities;
• connects technical aspects of AI system design and development to organizational values and principles, and enables organizational practices and competencies for the individuals involved in acquiring, training, deploying, and monitoring such systems; and
• addresses full product lifecycle and associated processes, including legal and other issues concerning use of third-party software or hardware systems and data.

GOVERN is a cross-cutting function that is infused throughout AI risk management and enables the other functions of the process. Aspects of GOVERN, especially those related to compliance or evaluation, should be integrated into each of the other functions. Attention to governance is a continual and intrinsic requirement for effective AI risk management over an AI system's lifespan and the organization's hierarchy.

Strong governance can drive and enhance internal practices and norms to facilitate organizational risk culture. Governing authorities can determine the overarching policies that direct an organization's mission, goals, values, culture, and risk tolerance. Senior leadership sets the tone for risk management within an organization, and with it, organizational culture.

Effective AI risk management requires clear leadership and structure. Governing authorities define the organization's mission, values, and risk tolerance, while senior leadership sets the tone for the organizational culture. Management is responsible for aligning technical AI operations with these policies. Maintaining thorough documentation improves transparency, accountability, and human oversight. Organizations should foster a culture focused on understanding and managing risks, continuously updating these practices as AI technology and expectations evolve. Detailed guidance on these governance practices can be found in the NIST AI RMF Playbook. The GOVERN function focuses on ensuring that policies and procedures for mapping, measuring, and managing AI risks are transparent and effective. This includes understanding legal requirements (GOVERN 1.1), integrating trustworthy AI characteristics (GOVERN 1.2), tailoring risk management to the organization's risk tolerance (GOVERN 1.3), and establishing clear, transparent processes based on organizational priorities (GOVERN 1.4).

Management aligns the technical aspects of AI risk management to policies and operations. Documentation can enhance transparency, improve human review processes, and bolster accountability in AI system teams.

After putting in place the structures, systems, processes, and teams described in the GOVERN function, organizations should benefit from a purpose-driven culture focused on risk understanding and management. It is incumbent on Framework users to continue to execute the GOVERN function as knowledge, cultures, and needs or expectations from AI actors evolve over time.

Practices related to governing AI risks are described in the NIST AI RMF Playbook. Table 1 lists the GOVERN function's categories and subcategories.

Table 1: Categories and subcategories for the GOVERN function.

GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
• GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.
• GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.
• GOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.
• GOVERN 1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.

The NIST AI RMF 1.0 GOVERN function outlines how organizations should manage AI risks. GOVERN 1.4 requires transparent policies and controls aligned with risk priorities. GOVERN 1.5 mandates ongoing monitoring, periodic reviews, and clearly defined roles. GOVERN 1.6 requires an inventory of AI systems, while GOVERN 1.7 ensures safe decommissioning of AI systems. GOVERN 2 focuses on accountability, requiring that teams are empowered and trained. Specifically, GOVERN 2.1 mandates clear documentation of roles and communication lines. GOVERN 2.2 requires AI risk management training for personnel and partners. Finally, GOVERN 2.3 establishes that executive leadership is responsible for all AI development and deployment risk decisions.

Table 1 (continued):

• GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned and organizational roles and responsibilities clearly defined, including determining the frequency of periodic review.
• GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
• GOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.

GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.
• GOVERN 2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.
• GOVERN 2.2: The organization's personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
• GOVERN 2.3: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.
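GOVERN 1.6's inventory mechanism is often realized as a structured record per AI system. The sketch below shows one hypothetical shape for such a record; the fields, risk tiers, and example systems are assumptions for illustration, not framework requirements.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in a hypothetical AI system inventory (GOVERN 1.6)."""
    name: str
    owner: str                 # accountable team or role (cf. GOVERN 2.1)
    lifecycle_stage: str       # e.g., "design", "deployed", "decommissioned"
    risk_tier: str             # organization-defined, e.g., "high"/"low"
    third_party_components: list[str] = field(default_factory=list)  # cf. GOVERN 6

inventory = [
    AISystemRecord("resume-screener", "hr-platform-team", "deployed", "high",
                   ["vendor-embedding-model"]),
    AISystemRecord("warehouse-forecaster", "ops-analytics", "design", "low"),
]

# Resourcing according to risk priorities: surface high-risk systems first.
for record in sorted(inventory, key=lambda r: r.risk_tier != "high"):
    print(record.risk_tier, record.name, record.lifecycle_stage)
```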

Continuing Table 1, the GOVERN function adds further requirements: GOVERN 2.3 requires executive leadership to take responsibility for AI-related risk decisions. GOVERN 3 mandates that workforce diversity, equity, inclusion, and accessibility are prioritized throughout the AI lifecycle, supported by diverse decision-making teams (3.1) and clear definitions of human-AI roles and oversight responsibilities (3.2). GOVERN 4 focuses on fostering a safety-first culture by implementing policies that encourage critical thinking (4.1), documenting and communicating AI risks and impacts (4.2), and establishing practices for testing, incident identification, and information sharing (4.3). Finally, GOVERN 5 requires processes for robust engagement with relevant AI actors.

GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.
• GOVERN 3.1: Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).
• GOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.

GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.
• GOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts.
• GOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and they communicate about the impacts more broadly.
• GOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.

GOVERN 5: Processes are in place for robust engagement with relevant AI actors.

The governance framework outlines several key requirements for AI management: GOVERN 4.3 requires systems for AI testing, incident reporting, and information sharing. GOVERN 5 focuses on engaging with external stakeholders, with 5.1 requiring the collection of feedback on societal impacts and 5.2 requiring that this feedback be integrated into system design. GOVERN 6 addresses supply chain risks, with 6.1 covering third-party intellectual property rights and 6.2 requiring contingency plans for high-risk third-party failures. Additionally, the MAP function is used to define the context of AI risks. Because the AI lifecycle involves many interdependent activities and actors who often lack full visibility into the entire process, mapping is essential to anticipate potential impacts.

• GOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.
• GOVERN 5.2: Mechanisms are established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.

GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.
• GOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party's intellectual property or other rights.
• GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.

5.2 Map

The MAP function establishes the context to frame risks related to an AI system. The AI lifecycle consists of many interdependent activities involving a diverse set of actors (see Figure 3). In practice, AI actors in charge of one part of the process often do not have full visibility or control over other parts and their associated contexts. The interdependencies between these activities, and among the relevant AI actors, can make it difficult to reliably anticipate impacts of AI systems.

In the NIST AI RMF 1.0, managing AI risk is challenging because different teams often lack visibility into the entire AI lifecycle. Decisions made early on, such as defining an AI's purpose, can significantly change how the system behaves later, meaning good intentions at one stage can be undone by actions in another. To address this uncertainty, the 'MAP' function is essential. It helps organizations identify risks and context, which serves as the foundation for the 'MEASURE' and 'MANAGE' functions. Effective risk management requires a deep understanding of these contexts and should be strengthened by gathering diverse perspectives from internal teams, external collaborators, end users, and impacted communities.

For example, early decisions in identifying purposes and objectives of an AI system can alter its behavior and capabilities, and the dynamics of deployment setting (such as end users or impacted individuals) can shape the impacts of AI system decisions. As a result, the best intentions within one dimension of the AI lifecycle can be undermined via interactions with decisions and conditions in other, later activities.

This complexity and varying levels of visibility can introduce uncertainty into risk management practices. Anticipating, assessing, and otherwise addressing potential sources of negative risk can mitigate this uncertainty and enhance the integrity of the decision process. The information gathered while carrying out the MAP function enables negative risk prevention and informs decisions for processes such as model management, as well as an initial decision about appropriateness or the need for an AI solution. Outcomes in the MAP function are the basis for the MEASURE and MANAGE functions. Without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform. The MAP function is intended to enhance an organization's ability to identify risks and broader contributing factors.

Implementation of this function is enhanced by incorporating perspectives from a diverse internal team and engagement with those external to the team that developed or deployed the AI system. Engagement with external collaborators, end users, potentially impacted communities, and others may vary based on the risk level of a particular AI system, the makeup of the internal team, and organizational policies.

To build trustworthy AI, organizations should engage with end users, impacted communities, and external experts. This collaborative approach helps teams better understand the AI's context, challenge their own assumptions, identify potential risks or limitations, and discover beneficial uses. By completing the 'MAP' function of the NIST AI RMF, organizations gain the necessary insights to decide whether to proceed with an AI project. If they move forward, they should use the 'MEASURE,' 'MANAGE,' and 'GOVERN' functions to handle risks, while continuously re-evaluating the system as it evolves. Detailed guidance on these mapping practices can be found in the NIST AI RMF Playbook.

Gathering such broad perspectives can help organizations proactively prevent negative risks and develop more trustworthy AI systems by:

• improving their capacity for understanding contexts;
• checking their assumptions about context of use;
• enabling recognition of when systems are not functional within or out of their intended context;
• identifying positive and beneficial uses of their existing AI systems;
• improving understanding of limitations in AI and ML processes;
• identifying constraints in real-world applications that may lead to negative impacts;
• identifying known and foreseeable negative impacts related to intended use of AI systems; and
• anticipating risks of the use of AI systems beyond intended use.

After completing the MAP function, Framework users should have sufficient contextual knowledge about AI system impacts to inform an initial go/no-go decision about whether to design, develop, or deploy an AI system. If a decision is made to proceed, organizations should utilize the MEASURE and MANAGE functions along with policies and procedures put into place in the GOVERN function to assist in AI risk management efforts. It is incumbent on Framework users to continue applying the MAP function to AI systems as context, capabilities, risks, benefits, and potential impacts evolve over time.

Practices related to mapping AI risks are described in the NIST AI RMF Playbook. Table 2 lists the MAP function’s categories and subcategories.

The NIST AI RMF 1.0 (Table 2) outlines the 'MAP' function for AI systems. MAP 1 focuses on establishing and understanding context: MAP 1.1 requires documenting the AI's purpose, intended users, potential impacts, and limitations. MAP 1.2 emphasizes using diverse, interdisciplinary teams. MAP 1.3 and 1.4 require documenting organizational goals and business value. MAP 1.5 involves setting risk tolerance levels. MAP 1.6 requires defining system requirements that address socio-technical risks. MAP 2 focuses on the formal categorization of the AI system.

Table 2: Categories and subcategories for the MAP function.

MAP 1: Context is established and understood.
MAP 1.1: Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related TEVV and system metrics.
MAP 1.2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.
MAP 1.3: The organization’s mission and relevant goals for AI technology are understood and documented.
MAP 1.4: The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.
MAP 1.5: Organizational risk tolerances are determined and documented.
MAP 1.6: System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
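Teams sometimes find it useful to capture these MAP 1 outcomes in a machine-readable record alongside the prose documentation. Below is a minimal illustrative sketch in Python; the `SystemContext` structure and its field names are hypothetical conveniences, not anything the framework prescribes.

```python
# Hypothetical illustration: recording MAP 1 outcomes as structured data.
from dataclasses import dataclass, field

@dataclass
class SystemContext:  # hypothetical name, not defined by the AI RMF
    intended_purpose: str
    user_types: list[str]
    deployment_setting: str
    risk_tolerance: str                                     # MAP 1.5
    assumptions: list[str] = field(default_factory=list)    # MAP 1.1
    requirements: list[str] = field(default_factory=list)   # MAP 1.6

ctx = SystemContext(
    intended_purpose="Rank job applications for recruiter review",
    user_types=["recruiters", "applicants (affected individuals)"],
    deployment_setting="Internal HR platform",
    risk_tolerance="low",
    assumptions=["Historical hiring data reflects current job requirements"],
    requirements=["The system shall respect the privacy of its users"],
)
```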

The NIST AI RMF 1.0 continues the MAP function with two further categories. MAP 2 covers categorization of the AI system: MAP 2.1 defines the specific tasks and implementation methods (e.g., classifiers, generative models, recommenders); MAP 2.2 documents the system's knowledge limits and how humans may use and oversee its output; and MAP 2.3 documents scientific integrity and TEVV considerations such as experimental design and data quality. MAP 3 covers AI capabilities, targeted usage, goals, and expected benefits and costs: it requires documenting potential benefits, assessing monetary and non-monetary costs tied to AI errors and organizational risk tolerance, and specifying the targeted application scope based on the system's capability, context, and categorization.

MAP 2: Categorization of the AI system is performed.
MAP 2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).
MAP 2.2: Information about the AI system’s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making decisions and taking subsequent actions.
MAP 2.3: Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.

MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.
MAP 3.1: Potential benefits of intended AI system functionality and performance are examined and documented.
MAP 3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.
MAP 3.3: Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.

The NIST AI RMF 1.0 MAP function outlines the following requirements for AI systems: MAP 3.3 requires defining the system's scope based on its capabilities and categorization. MAP 3.4 mandates that staff proficiency, technical standards, and certifications are documented and assessed. MAP 3.5 requires formal processes for human oversight aligned with organizational policies. MAP 4 focuses on mapping risks and benefits for all system components, including third-party software and data. MAP 4.1 requires documenting legal and technology risks, including potential intellectual property infringement. MAP 4.2 requires identifying and documenting internal risk controls for all components. MAP 5 requires characterizing the impacts of the AI system on individuals, groups, and society. MAP 5.1 mandates documenting the likelihood and magnitude of both beneficial and harmful impacts based on past performance, public reports, and external feedback.

MAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented.
MAP 3.5: Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.

MAP 4: Risks and benefits are mapped for all components of the AI system including third-party software and data.
MAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party’s intellectual property or other rights.
MAP 4.2: Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.

MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.
MAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.

According to NIST AI 100-1 (AI RMF 1.0), the MEASURE function is used to analyze, assess, and monitor AI risks and impacts using quantitative or qualitative methods. This function builds on the MAP function and provides data to the MANAGE function. Key requirements include: 1) Testing AI systems before and during deployment to document functionality and trustworthiness. 2) Tracking metrics related to social impact, human-AI interaction, and trustworthy characteristics. 3) Using rigorous software testing, performance benchmarks, and uncertainty measurements, supported by formal documentation. 4) Utilizing independent reviews to reduce bias and conflicts of interest. 5) Using measurement data to make informed decisions when trade-offs are necessary, such as recalibrating, mitigating impacts, or removing a system. Ultimately, organizations must establish and document objective, repeatable, and scalable test, evaluation, verification, and validation (TEVV) processes.

MAP 5.2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.

5.3 Measure

The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the MAP function and informs the MANAGE function. AI systems should be tested before their deployment and regularly while in operation. AI risk measurements include documenting aspects of systems’ functionality and trustworthiness.

Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI configurations. Processes developed or adopted in the MEASURE function should include rigorous software testing and performance assessment methodologies with associated measures of uncertainty, comparisons to performance benchmarks, and formalized reporting and documentation of results. Processes for independent review can improve the effectiveness of testing and can mitigate internal biases and potential conflicts of interest.

Where tradeoffs among the trustworthy characteristics arise, measurement provides a traceable basis to inform management decisions. Options may include recalibration, impact mitigation, or removal of the system from design, development, production, or use, as well as a range of compensating, detective, deterrent, directive, and recovery controls.

After completing the MEASURE function, objective, repeatable, or scalable test, evaluation, verification, and validation (TEVV) processes including metrics, methods, and methodologies are in place, followed, and documented.
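The call for "associated measures of uncertainty" and "comparisons to performance benchmarks" can be made concrete with standard statistical tooling. The following is a small illustrative sketch, not a framework requirement: a bootstrap confidence interval around an accuracy estimate, reported next to a benchmark value.

```python
# Hypothetical sketch: report a performance measure with an associated
# measure of uncertainty (bootstrap CI) and compare it to a benchmark.
import random

def bootstrap_accuracy_ci(correct, n_boot=1000, alpha=0.05, seed=0):
    """correct: list of 0/1 outcomes, one per test example."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(correct, k=len(correct))) / len(correct)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

outcomes = [1] * 88 + [0] * 12            # 88% observed accuracy
lo, hi = bootstrap_accuracy_ci(outcomes)
print(f"accuracy 0.88, 95% CI [{lo:.2f}, {hi:.2f}], benchmark 0.85")
```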

The MEASURE function ensures that AI systems are evaluated using objective, repeatable, and transparent test, evaluation, verification, and validation (TEVV) processes. Users should develop both qualitative and quantitative metrics that align with scientific, legal, and ethical standards. These measurements help track AI risks and system trustworthiness, with results informing the MANAGE function for ongoing risk response. Because AI risks evolve, this measurement process must be continuous. Detailed guidance is available in the NIST AI RMF Playbook. The first MEASURE category, MEASURE 1 (appropriate methods and metrics are identified and applied), opens with two subcategories: MEASURE 1.1 involves selecting metrics for the most significant risks identified in the MAP function and documenting any risks that cannot be measured, while MEASURE 1.2 requires the regular assessment and updating of these metrics and controls, including reporting on errors and their impacts on affected communities.

Metrics and measurement methodologies should adhere to scientific, legal, and ethical norms and be carried out in an open and transparent process. New types of measurement, qualitative and quantitative, may need to be developed. The degree to which each measurement type provides unique and meaningful information to the assessment of AI risks should be considered. Framework users will enhance their capacity to comprehensively evaluate system trustworthiness, identify and track existing and emergent risks, and verify efficacy of the metrics. Measurement outcomes will be utilized in the MANAGE function to assist risk monitoring and response efforts. It is incumbent on Framework users to continue applying the MEASURE function to AI systems as knowledge, methodologies, risks, and impacts evolve over time.

Practices related to measuring AI risks are described in the NIST AI RMF Playbook. Table 3 lists the MEASURE function’s categories and subcategories.

Table 3: Categories and subcategories for the MEASURE function.

MEASURE 1: Appropriate methods and metrics are identified and applied.
MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.
MEASURE 1.2: Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and potential impacts on affected communities.
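MEASURE 1.1's ordering rule, start with the most significant mapped risks and document what cannot be measured, lends itself to a simple sketch. The function below is hypothetical; the risk and catalog shapes are assumptions for illustration only.

```python
# Hypothetical sketch of MEASURE 1.1: pick metrics for the most significant
# mapped risks first, and document risks that cannot be measured.
def select_metrics(mapped_risks, metric_catalog):
    """mapped_risks: list of dicts with 'name' and 'significance' (0-1);
    metric_catalog: dict mapping risk name -> list of candidate metrics."""
    prioritized = sorted(mapped_risks, key=lambda r: r["significance"],
                         reverse=True)
    selected, unmeasured = {}, []
    for risk in prioritized:
        metrics = metric_catalog.get(risk["name"])
        if metrics:
            selected[risk["name"]] = metrics
        else:
            unmeasured.append(risk["name"])  # must be properly documented
    return selected, unmeasured
```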

The NIST AI RMF 1.0 'MEASURE' function outlines requirements for evaluating AI systems. Key actions include: 1.2) Regularly updating AI metrics and controls while tracking errors and community impacts. 1.3) Involving independent internal experts, external stakeholders, and affected communities in assessments. 2.1) Documenting all testing tools, metrics, and data. 2.2) Ensuring human-subject evaluations are ethical and representative. 2.3) Measuring performance against conditions similar to the deployment setting. 2.4) Monitoring system behavior during production. 2.5) Demonstrating that the system is valid and reliable, while clearly documenting its limitations. 2.6) Conducting ongoing safety risk assessments.

MEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.

MEASURE 2: AI systems are evaluated for trustworthy characteristics.
MEASURE 2.1: Test sets, metrics, and details about the tools used during TEVV are documented.
MEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.
MEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.
MEASURE 2.4: The functionality and behavior of the AI system and its components – as identified in the MAP function – are monitored when in production.
MEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.
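One possible reading of MEASURE 2.4 in code: compare a live production metric against the value demonstrated before deployment and flag deviations for review. This is an illustrative sketch with made-up numbers and thresholds, not a prescribed mechanism.

```python
# Hypothetical sketch of MEASURE 2.4-style production monitoring: compare a
# live performance metric with its pre-deployment baseline and flag when it
# falls outside an agreed tolerance band.
def check_production_metric(live_value, baseline_value, tolerance=0.05):
    drift = abs(live_value - baseline_value)
    return {"live": live_value, "baseline": baseline_value,
            "drift": drift, "within_tolerance": drift <= tolerance}

status = check_production_metric(live_value=0.87, baseline_value=0.93)
if not status["within_tolerance"]:
    print("Escalate for review:", status)  # e.g., trigger MANAGE responses
```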

Table 3 outlines the MEASURE function categories for AI systems. MEASURE 2.6 requires regular safety evaluations to ensure the system is reliable, robust, and fails safely within risk tolerance limits. MEASURE 2.7 through 2.12 mandate that risks related to security, transparency, accountability, model explanation, privacy, fairness, bias, and environmental impact are assessed and documented based on the MAP function. MEASURE 2.13 requires documenting the effectiveness of TEVV metrics. Finally, MEASURE 3 requires implementing mechanisms to track identified AI risks over time.

MEASURE 2.6: The AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
MEASURE 2.7: AI system security and resilience – as identified in the MAP function – are evaluated and documented.
MEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and documented.
MEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the MAP function – to inform responsible use and governance.
MEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented.
MEASURE 2.11: Fairness and bias – as identified in the MAP function – are evaluated and results are documented.
MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP function – are assessed and documented.
MEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and documented.

The MEASURE function of the NIST AI RMF 1.0 ensures AI systems are evaluated effectively. MEASURE 3 focuses on tracking AI risks: organizations must establish personnel and processes to monitor known and emerging risks (3.1), develop tracking methods for complex scenarios where standard metrics fail (3.2), and create feedback channels for users to report issues or appeal outcomes (3.3). MEASURE 4 focuses on improving measurement efficacy: organizations must tailor measurement approaches to specific deployment contexts using expert and user input (4.1), validate system trustworthiness through expert consultation (4.2), and document performance changes based on feedback from affected communities and field data (4.3).

MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.
MEASURE 3.1: Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.
MEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.
MEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.

MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.
MEASURE 4.1: Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.
MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.
MEASURE 4.3: Measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context-relevant risks and trustworthiness characteristics are identified and documented.
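A MEASURE 3.3 feedback process could be backed by something as simple as an append-only log whose aggregates feed evaluation metrics. The sketch below is hypothetical; roles, field names, and the rollup are illustrative assumptions.

```python
# Hypothetical sketch of a MEASURE 3.3-style feedback channel: end users and
# impacted community members file reports, which roll up into metrics.
from collections import Counter

feedback_log = []

def report_problem(reporter_role, subcategory, description, appeal=False):
    entry = {"role": reporter_role, "subcategory": subcategory,
             "description": description, "appeal": appeal}
    feedback_log.append(entry)
    return entry

report_problem("end user", "MEASURE 2.11", "Output differs across user groups")
report_problem("affected individual", "MANAGE 4.1",
               "Requesting outcome appeal", appeal=True)
# Fold feedback into evaluation metrics, e.g. report counts per subcategory:
print(Counter(e["subcategory"] for e in feedback_log))
```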

The MANAGE function focuses on allocating resources to address AI risks identified during the MAP and MEASURE phases. Key activities include creating plans to respond to, recover from, and communicate about incidents. By using expert consultations and feedback from affected communities, organizations can reduce system failures and negative impacts. This function requires systematic documentation to ensure transparency and accountability, as well as processes for monitoring new risks and making continuous improvements. Users are expected to prioritize risks and regularly update their management strategies as AI systems, contexts, and stakeholder needs evolve. Detailed guidance for these practices can be found in the NIST AI RMF Playbook.

5.4 Manage

The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the GOVERN function. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.

Contextual information gleaned from expert consultation and input from relevant AI actors – established in GOVERN and carried out in MAP – is utilized in this function to decrease the likelihood of system failures and negative impacts. Systematic documentation practices established in GOVERN and utilized in MAP and MEASURE bolster AI risk management efforts and increase transparency and accountability. Processes for assessing emergent risks are in place, along with mechanisms for continual improvement.

After completing the MANAGE function, plans for prioritizing risk and regular monitoring and improvement will be in place. Framework users will have enhanced capacity to manage the risks of deployed AI systems and to allocate risk management resources based on assessed and prioritized risks. It is incumbent on Framework users to continue to apply the MANAGE function to deployed AI systems as methods, contexts, risks, and needs or expectations from relevant AI actors evolve over time.

Practices related to managing AI risks are described in the NIST AI RMF Playbook. Table 4 lists the MANAGE function’s categories and subcategories.

Table 4: Categories and subcategories for the MANAGE function.
MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.
MANAGE 1.1: A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed.
MANAGE 1.2: Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.
MANAGE 1.3: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.
MANAGE 1.4: Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.

MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.
MANAGE 2.1: Resources required to manage AI risks are taken into account – along with viable non-AI alternative systems, approaches, or methods – to reduce the magnitude or likelihood of potential impacts.
MANAGE 2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems.
MANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified.
MANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.
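MANAGE 1.2's prioritization by impact, likelihood, and available resources can be sketched as a simple scoring pass. The scoring formula and field names below are illustrative assumptions; real programs will weigh these factors according to their own risk management processes.

```python
# Hypothetical sketch of MANAGE 1.2: order documented risks by a simple
# impact x likelihood score, breaking ties by the cheaper treatment.
risks = [
    {"id": "R1", "impact": 4, "likelihood": 0.6, "cost_to_treat": 2},
    {"id": "R2", "impact": 5, "likelihood": 0.2, "cost_to_treat": 5},
    {"id": "R3", "impact": 2, "likelihood": 0.9, "cost_to_treat": 1},
]
for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

treatment_order = sorted(risks, key=lambda r: (-r["score"], r["cost_to_treat"]))
print([r["id"] for r in treatment_order])  # ['R1', 'R3', 'R2']
```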

The NIST AI RMF 1.0 guidelines for managing AI systems include the following requirements: 1) Systems must have clear procedures to deactivate or override AI that performs unexpectedly. 2) Risks and benefits from third-party entities and pre-trained models must be actively monitored, controlled, and documented. 3) Organizations must maintain documented plans for risk response, recovery, and communication. This includes implementing post-deployment monitoring, gathering user feedback, providing appeal mechanisms, and managing system updates. Finally, all incidents and errors must be tracked, documented, and communicated to affected parties and stakeholders.

MANAGE 3: AI risks and benefits from third-party entities are managed.
MANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.
MANAGE 3.2: Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.

MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.
MANAGE 4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.
MANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.
MANAGE 4.3: Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.
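MANAGE 4.3's tracking obligation might be met in part by a structured incident record like the hypothetical one below; the fields are illustrative, not mandated by the framework.

```python
# Hypothetical sketch of a MANAGE 4.3-style incident record: a minimal,
# append-only log capturing what was tracked and who was told.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:  # illustrative fields, not prescribed by the AI RMF
    system: str
    description: str
    affected_parties: list[str]
    response: str
    communicated_to: list[str]
    opened_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

incident_log: list[IncidentRecord] = []
incident_log.append(IncidentRecord(
    system="loan-screening-v2",
    description="Elevated false-denial rate for one applicant segment",
    affected_parties=["applicants in affected segment"],
    response="Model rolled back; recalibration scheduled",
    communicated_to=["risk committee", "affected community liaison"],
))
```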

MANAGE 4.3 requires that all AI-related incidents and errors be documented and communicated to relevant stakeholders, including affected communities. Organizations must follow established procedures to track, address, and recover from these issues. Section 6 introduces 'AI RMF Profiles,' which are customized implementations of the AI Risk Management Framework tailored to specific sectors or applications, such as hiring or fair housing. These profiles help organizations align risk management with their goals, legal requirements, and resources. 'Temporal Profiles' are used to assess progress: a 'Current Profile' describes existing risk management practices, while a 'Target Profile' outlines desired future outcomes. By comparing these two, organizations can identify gaps, create action plans, and prioritize resources to manage AI risks effectively and cost-efficiently.

6. AI RMF Profiles

AI RMF use-case profiles are implementations of the AI RMF functions, categories, and subcategories for a specific setting or application based on the requirements, risk tolerance, and resources of the Framework user: for example, an AI RMF hiring profile or an AI RMF fair housing profile. Profiles may illustrate and offer insights into how risk can be managed at various stages of the AI lifecycle or in specific sector, technology, or end-use applications. AI RMF profiles assist organizations in deciding how they might best manage AI risk that is well-aligned with their goals, considers legal/regulatory requirements and best practices, and reflects risk management priorities.

AI RMF temporal profiles are descriptions of either the current state or the desired, target state of specific AI risk management activities within a given sector, industry, organization, or application context. An AI RMF Current Profile indicates how AI is currently being managed and the related risks in terms of current outcomes. A Target Profile indicates the outcomes needed to achieve the desired or target AI risk management goals.

Comparing Current and Target Profiles likely reveals gaps to be addressed to meet AI risk management objectives. Action plans can be developed to address these gaps to fulfill outcomes in a given category or subcategory. Prioritization of gap mitigation is driven by the user’s needs and risk management processes. This risk-based approach also enables Framework users to compare their approaches with other approaches and to gauge the resources needed (e.g., staffing, funding) to achieve AI risk management goals in a cost-effective, prioritized manner.
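The Current-versus-Target comparison is essentially a gap analysis, which is easy to sketch once each profile assigns an attainment level per subcategory. The levels, IDs, and scoring below are illustrative assumptions; the framework does not prescribe a scale.

```python
# Hypothetical sketch of a Current-vs-Target Profile comparison: each profile
# maps subcategory IDs to an attainment level, and the gap list drives the
# action plan.
current = {"GOVERN 5.1": 2, "MAP 1.5": 1, "MEASURE 3.3": 0, "MANAGE 4.1": 3}
target  = {"GOVERN 5.1": 3, "MAP 1.5": 3, "MEASURE 3.3": 2, "MANAGE 4.1": 3}

gaps = {sub: target[sub] - current.get(sub, 0)
        for sub in target if target[sub] > current.get(sub, 0)}
# Largest gaps first; actual prioritization is driven by user needs.
action_plan = sorted(gaps, key=gaps.get, reverse=True)
print(action_plan)  # ['MAP 1.5', 'MEASURE 3.3', 'GOVERN 5.1']
```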

The NIST AI RMF 1.0 helps organizations determine the resources—such as staff and funding—needed to manage AI risks efficiently. It introduces 'cross-sectoral profiles,' which address risks common to various industries or applications, such as large language models, cloud services, or procurement. These profiles are flexible and do not require a specific template. Additionally, the framework defines AI actor tasks: 'AI Design' involves planning, setting objectives, and data preparation to ensure systems are lawful and effective. This phase involves a wide range of contributors, including data scientists, domain experts, legal teams, and community representatives. 'AI Development' tasks follow during the model creation phase.

AI RMF cross-sectoral profiles cover risks of models or applications that can be used across use cases or sectors. Cross-sectoral profiles can also cover how to govern, map, measure, and manage risks for activities or business processes common across sectors, such as the use of large language models, cloud-based services, or acquisition.

This Framework does not prescribe profile templates, allowing for flexibility in implementation.

Appendix A: Descriptions of AI Actor Tasks from Figures 2 and 3

AI Design tasks are performed during the Application Context and Data and Input phases of the AI lifecycle in Figure 2. AI Design actors create the concept and objectives of AI systems and are responsible for the planning, design, and data collection and processing tasks of the AI system so that the AI system is lawful and fit-for-purpose. Tasks include articulating and documenting the system’s concept and objectives, underlying assumptions, context, and requirements; gathering and cleaning data; and documenting the metadata and characteristics of the dataset. AI actors in this category include data scientists, domain experts, socio-cultural analysts, experts in the field of diversity, equity, inclusion, and accessibility, members of impacted communities, human factors experts (e.g., UX/UI design), governance experts, data engineers, data providers, system funders, product managers, third-party entities, evaluators, and legal and privacy governance.

Appendix A continues with further AI actor task categories across the lifecycle: 1. AI Development: experts such as machine learning specialists, data scientists, and developers build, train, and test models. 2. AI Deployment: teams including system integrators, software developers, and end users manage the system's integration into production, ensuring regulatory compliance and compatibility with legacy systems. 3. Operation and Monitoring: operators, auditors, and management oversee the system's ongoing performance and impact. 4. Test, Evaluation, Verification, and Validation (TEVV): rather than a single phase, these assessment tasks occur continuously throughout the entire lifecycle. Various stakeholders, including governance experts, data engineers, and third-party entities, contribute across these phases.

AI Development tasks are performed during the AI Model phase of the lifecycle in Figure 2. AI Development actors provide the initial infrastructure of AI systems and are responsible for model building and interpretation tasks, which involve the creation, selection, calibration, training, and/or testing of models or algorithms. AI actors in this category include machine learning experts, data scientists, developers, third-party entities, legal and privacy governance experts, and experts in the socio-cultural and contextual factors associated with the deployment setting.

AI Deployment tasks are performed during the Task and Output phase of the lifecycle in Figure 2. AI Deployment actors are responsible for contextual decisions relating to how the AI system is used to assure deployment of the system into production. Related tasks include piloting the system, checking compatibility with legacy systems, ensuring regulatory compliance, managing organizational change, and evaluating user experience. AI actors in this category include system integrators, software developers, end users, operators and practitioners, evaluators, and domain experts with expertise in human factors, socio-cultural analysis, and governance.

Operation and Monitoring tasks are performed in the Application Context/Operate and Monitor phase of the lifecycle in Figure 2. These tasks are carried out by AI actors who are responsible for operating the AI system and working with others to regularly assess system output and impacts. AI actors in this category include system operators, domain experts, AI designers, users who interpret or incorporate the output of AI systems, product developers, evaluators and auditors, compliance experts, organizational management, and members of the research community.

The NIST AI RMF 1.0 outlines Test, Evaluation, Verification, and Validation (TEVV) processes performed by AI actors—such as developers, auditors, and researchers—throughout the AI lifecycle. Ideally, those verifying and validating systems should be separate from those testing and evaluating them. TEVV tasks occur in four key phases: 1) Design and Data: Validating assumptions and requirements. 2) Development: Validating and assessing models. 3) Deployment: Integrating systems, testing performance, and ensuring legal and ethical compliance. 4) Operations: Monitoring, recalibrating, tracking errors, and managing incident responses. Additionally, Human Factors are integrated across all phases to ensure human-centered design, active user involvement, and the consideration of social norms and values.

Test, Evaluation, Verification, and Validation (TEVV) tasks are performed throughout the AI lifecycle. They are carried out by AI actors who examine the AI system or its components, or detect and remediate problems. Ideally, AI actors carrying out verification and validation tasks are distinct from those who perform test and evaluation actions. Tasks can be incorporated into a phase as early as design, where tests are planned in accordance with the design requirement.

• TEVV tasks for design, planning, and data may center on internal and external validation of assumptions for system design, data collection, and measurements relative to the intended context of deployment or application.
• TEVV tasks for development (i.e., model building) include model validation and assessment.
• TEVV tasks for deployment include system validation and integration in production, with testing, and recalibration for systems and process integration, user experience, and compliance with existing legal, regulatory, and ethical specifications.
• TEVV tasks for operations involve ongoing monitoring for periodic updates, testing, and subject matter expert (SME) recalibration of models, the tracking of incidents or errors reported and their management, the detection of emergent properties and related impacts, and processes for redress and response.

Human Factors tasks and activities are found throughout the dimensions of the AI lifecycle. They include human-centered design practices and methodologies, promoting the active involvement of end users and other interested parties and relevant AI actors, incorporating context-specific norms and values in system design, evaluating and adapting end user experiences, and broad integration of humans and human dynamics in all phases of the AI lifecycle.

The NIST AI RMF 1.0 outlines key roles and responsibilities for managing AI systems throughout their lifecycle: 1) Human Factors Professionals ensure AI is designed with human needs in mind by evaluating user experiences and integrating human dynamics. 2) Domain Experts provide specialized knowledge of specific industries or sectors to guide system design and help interpret AI outputs. 3) AI Impact Assessors evaluate systems for accountability, bias, safety, liability, and security using technical, legal, and socio-cultural expertise. 4) Procurement teams manage the financial and legal aspects of acquiring AI products from third-party vendors. 5) Governance and Oversight teams, including senior leadership and Boards of Directors, hold the ultimate authority and responsibility for the organization's AI strategy, impact, and sustainability.

Human factors professionals provide multidisciplinary skills and perspectives to understand context of use, inform interdisciplinary and demographic diversity, engage in consultative processes, design and evaluate user experience, perform human-centered evaluation and testing, and inform impact assessments.

Domain Expert tasks involve input from multidisciplinary practitioners or scholars who provide knowledge or expertise in – and about – an industry sector, economic sector, context, or application area where an AI system is being used. AI actors who are domain experts can provide essential guidance for AI system design and development, and interpret outputs in support of work performed by TEVV and AI impact assessment teams.

AI Impact Assessment tasks include assessing and evaluating requirements for AI system accountability, combating harmful bias, examining impacts of AI systems, product safety, liability, and security, among others. AI actors such as impact assessors and evaluators provide technical, human factor, socio-cultural, and legal expertise.

Procurement tasks are conducted by AI actors with financial, legal, or policy management authority for acquisition of AI models, products, or services from a third-party developer, vendor, or contractor.

Governance and Oversight tasks are assumed by AI actors with management, fiduciary, and legal authority and responsibility for the organization in which an AI system is designed, developed, and/or deployed. Key AI actors responsible for AI governance include organizational management, senior leadership, and the Board of Directors. These actors are parties that are concerned with the impact and sustainability of the organization as a whole.

Effective AI governance involves several key groups. Internal leadership, including management, senior executives, and the Board of Directors, oversees the organization's overall impact and sustainability. External third-party entities—such as vendors, developers, and evaluators—provide AI tools and services, though their risk standards may differ from the organization using them. End users are the individuals who directly interact with AI systems, regardless of their technical expertise. Affected individuals and communities include anyone impacted by AI decisions, even if they do not interact with the system directly. Additionally, groups like trade associations, researchers, and advocacy organizations provide guidance and standards for managing AI risks. Finally, the general public experiences the primary impacts of AI and often drives the demand for responsible AI development.

Additional AI Actors

Third-party entities include providers, developers, vendors, and evaluators of data, algorithms, models, and/or systems and related services for another organization or the organization’s customers or clients. Third-party entities are responsible for AI design and development tasks, in whole or in part. By definition, they are external to the design, development, or deployment team of the organization that acquires its technologies or services. The technologies acquired from third-party entities may be complex or opaque, and risk tolerances may not align with the deploying or operating organization.

End users of an AI system are the individuals or groups that use the system for specific purposes. These individuals or groups interact with an AI system in a specific context. End users can range in competency from AI experts to first-time technology end users.

Affected individuals/communities encompass all individuals, groups, communities, or organizations directly or indirectly affected by AI systems or decisions based on the output of AI systems. These individuals do not necessarily interact with the deployed system or application.

Other AI actors may provide formal or quasi-formal norms or guidance for specifying and managing AI risks. They can include trade associations, standards developing organizations, advocacy groups, researchers, environmental groups, and civil society organizations.

The general public is most likely to directly experience positive and negative impacts of AI technologies. They may provide the motivation for actions taken by the AI actors. This group can include individuals, communities, and consumers associated with the context in which an AI system is developed or deployed.

According to the NIST AI RMF 1.0 (Appendix B), AI systems share some risks with traditional software, such as broad societal impacts, but also introduce unique challenges that current risk frameworks do not fully cover. While features like pre-trained models can improve accuracy, they also require careful management. Key AI-specific risks include: data that may be biased, poor quality, or unrepresentative of the intended use; a heavy reliance on complex, large-scale training data; the risk that training processes may unintentionally alter system performance; the potential for datasets to become outdated or disconnected from their original context; and the immense complexity of AI systems, which can involve trillions of decision points. AI actors—including individuals, communities, and consumers—should use the MAP function to identify these contextual factors and determine appropriate risk management strategies.

Appendix B: How AI Risks Differ from Traditional Software Risks

As with traditional software, risks from AI-based technology can be bigger than an enterprise, span organizations, and lead to societal impacts. AI systems also bring a set of risks that are not comprehensively addressed by current risk frameworks and approaches. Some AI system features that present risks also can be beneficial. For example, pre-trained models and transfer learning can advance research and increase accuracy and resilience when compared to other models and approaches. Identifying contextual factors in the MAP function will assist AI actors in determining the level of risk and potential management efforts.

Compared to traditional software, AI-specific risks that are new or increased include the following:

• The data used for building an AI system may not be a true or appropriate representation of the context or intended use of the AI system, and the ground truth may either not exist or not be available. Additionally, harmful bias and other data quality issues can affect AI system trustworthiness, which could lead to negative impacts.
• AI system dependency and reliance on data for training tasks, combined with increased volume and complexity typically associated with such data.
• Intentional or unintentional changes during training may fundamentally alter AI system performance.
• Datasets used to train AI systems may become detached from their original and intended context or may become stale or outdated relative to deployment context.
• AI system scale and complexity (many systems contain billions or even trillions of decision points) housed within more traditional software applications.

According to the NIST AI RMF 1.0 (Page 38), AI systems present unique challenges compared to traditional software. Key issues include: 1) Data and models can become outdated or detached from their original context. 2) The massive scale and complexity of AI make it difficult to manage. 3) Pre-trained models introduce statistical uncertainty, bias, and reproducibility concerns. 4) It is hard to predict failure modes or side effects in large-scale models. 5) AI increases privacy risks through enhanced data aggregation. 6) Systems require frequent maintenance due to data or concept drift. 7) AI is often opaque, making it difficult to test or document compared to traditional software standards. 8) High computational costs negatively impact the environment. Finally, privacy and cybersecurity risk management must be integrated into the entire AI lifecycle and broader enterprise risk strategies.

• Use of pre-trained models that can advance research and improve performance can also increase levels of statistical uncertainty and cause issues with bias management, scientific validity, and reproducibility.
• Higher degree of difficulty in predicting failure modes for emergent properties of large-scale pre-trained models.
• Privacy risk due to enhanced data aggregation capability for AI systems.
• AI systems may require more frequent maintenance and triggers for conducting corrective maintenance due to data, model, or concept drift.
• Increased opacity and concerns about reproducibility.
• Underdeveloped software testing standards and inability to document AI-based practices to the standard expected of traditionally engineered software for all but the simplest of cases.
• Difficulty in performing regular AI-based software testing, or determining what to test, since AI systems are not subject to the same controls as traditional code development.
• Computational costs for developing AI systems and their impact on the environment and planet.
• Inability to predict or detect the side effects of AI-based systems beyond statistical measures.

Privacy and cybersecurity risk management considerations and approaches are applicable in the design, development, deployment, evaluation, and use of AI systems. Privacy and cybersecurity risks are also considered as part of broader enterprise risk management considerations, which may incorporate AI risks.
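The maintenance-trigger bullet about data, model, or concept drift can be illustrated with a minimal check: compare a feature's distribution in recent production data against its training baseline. The z-score-style test and threshold below are one simple choice among many, shown only as a sketch.

```python
# Hypothetical sketch of a data-drift trigger for corrective maintenance:
# flag when the mean of a feature in recent production data shifts from its
# training baseline by more than `threshold` standard deviations.
import statistics

def drift_trigger(train_values, live_values, threshold=2.0):
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    shift = abs(statistics.mean(live_values) - mu) / sigma if sigma else 0.0
    return shift > threshold  # True -> schedule corrective maintenance

print(drift_trigger([10, 11, 9, 10, 12], [15, 16, 14, 15, 17]))  # True
```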

Organizations should integrate AI risk management into their broader enterprise risk strategies. To improve AI trustworthiness—specifically regarding security and privacy—organizations can use established standards like the NIST Cybersecurity Framework, NIST Privacy Framework, NIST Risk Management Framework, and the Secure Software Development Framework. These frameworks, which are outcome-based and structured around core functions, can help inform the MAP, MEASURE, and MANAGE functions of the AI RMF. However, existing frameworks are insufficient for addressing unique AI challenges, such as harmful bias, generative AI risks, specific machine learning attacks (like model extraction or evasion), complex AI attack surfaces, and risks involving third-party technologies or off-label use.

As part of the effort to address AI trustworthiness characteristics such as "Secure and Resilient" and "Privacy-Enhanced," organizations may consider leveraging available standards and guidance that provide broad guidance to organizations to reduce security and privacy risks, such as, but not limited to, the NIST Cybersecurity Framework, the NIST Privacy Framework, the NIST Risk Management Framework, and the Secure Software Development Framework. These frameworks have some features in common with the AI RMF. Like most risk management approaches, they are outcome-based rather than prescriptive and are often structured around a Core set of functions, categories, and subcategories. While there are significant differences between these frameworks based on the domain addressed – and because AI risk management calls for addressing many other types of risks – frameworks like those mentioned above may inform security and privacy considerations in the MAP, MEASURE, and MANAGE functions of the AI RMF.

At the same time, guidance available before publication of this AI RMF does not comprehensively address many AI system risks. For example, existing frameworks and guidance are unable to:
• adequately manage the problem of harmful bias in AI systems;
• confront the challenging risks related to generative AI;
• comprehensively address security concerns related to evasion, model extraction, membership inference, availability, or other machine learning attacks;
• account for the complex attack surface of AI systems or other security abuses enabled by AI systems; and
• consider risks associated with third-party AI technologies, transfer learning, and off-label use where AI systems may be trained for decision-making outside an organization's security controls or trained in one domain and then "fine-tuned" for another.
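As a sketch of what "informing" the MAP, MEASURE, and MANAGE functions could look like operationally, an organization might maintain a crosswalk from source-framework categories to AI RMF functions. The category identifiers below come from the respective frameworks, but the specific mappings are illustrative assumptions, not an official NIST crosswalk.

```python
# Crosswalk from existing framework categories to the AI RMF functions they
# can inform. The mapping choices are illustrative, not NIST guidance.
CROSSWALK: dict[tuple[str, str], list[str]] = {
    ("NIST Cybersecurity Framework", "ID.RA"): ["MAP", "MEASURE"],  # Risk Assessment
    ("NIST Cybersecurity Framework", "RS.MI"): ["MANAGE"],          # Mitigation
    ("NIST Privacy Framework", "CT.PO-P"): ["MAP"],                 # Data Processing Policies
    ("Secure Software Development Framework", "RV"): ["MEASURE", "MANAGE"],  # Respond to Vulnerabilities
}

def rmf_functions_informed_by(framework: str, category: str) -> list[str]:
    """Look up which AI RMF functions a source-framework category can inform."""
    return CROSSWALK.get((framework, category), [])

# Example: a Cybersecurity Framework risk-assessment outcome feeds both the
# MAP and MEASURE functions of the AI RMF.
assert rmf_functions_informed_by("NIST Cybersecurity Framework", "ID.RA") == ["MAP", "MEASURE"]
```

A table like this also makes the gaps in the bulleted list above visible: rows for harmful bias or machine learning attacks simply have no source-framework entry to point to.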

Organizations should address security risks related to AI, including third-party technologies, transfer learning, and off-label use, where systems may be trained outside an organization's security controls or trained in one domain and fine-tuned for another. Because both AI and traditional software evolve rapidly, organizations should monitor technology advances to keep AI trustworthy and responsible. According to the NIST AI RMF 1.0 (Appendix C), managing AI risks also requires understanding the current limitations of human-AI interaction. A major challenge is that data-driven AI approaches convert complex human behavior and decision-making into measurable quantities, which can strip away necessary context and obscure individual and societal impacts. To manage these risks, organizations must clearly define and differentiate human roles in overseeing AI. Configurations span from fully autonomous to fully manual: some systems, such as video-compression models, may need no human oversight, while others serve as decision-support tools or specifically require it.

Both AI and traditional software technologies and systems are subject to rapid innovation. Technology advances should be monitored and deployed to take advantage of those developments and work towards a future of AI that is both trustworthy and responsible.

Appendix C: AI Risk Management and Human-AI Interaction

Organizations that design, develop, or deploy AI systems for use in operational settings may enhance their AI risk management by understanding current limitations of human-AI interaction. The AI RMF provides opportunities to clearly define and differentiate the various human roles and responsibilities when using, interacting with, or managing AI systems.

Many of the data-driven approaches that AI systems rely on attempt to convert or represent individual and social observational and decision-making practices into measurable quantities. Representing complex human phenomena with mathematical models can come at the cost of removing necessary context. This loss of context may in turn make it difficult to understand individual and societal impacts that are key to AI risk management efforts.

Issues that merit further consideration and research include:

1. Human roles and responsibilities in decision making and overseeing AI systems need to be clearly defined and differentiated. Human-AI configurations can span from fully autonomous to fully manual. AI systems can autonomously make decisions, defer decision making to a human expert, or be used by a human decision maker as an additional opinion. Some AI systems may not require human oversight, such as models used to improve video compression. Other systems may specifically require human oversight.
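Item 1's spectrum of human-AI configurations can be made concrete in code. The sketch below assumes a hypothetical routing layer in front of a deployed model; the enum names, the confidence floor, and the return values are illustrative, not AI RMF terminology.

```python
from dataclasses import dataclass
from enum import Enum, auto

class OversightMode(Enum):
    """Illustrative human-AI configurations, from fully autonomous to fully manual."""
    FULLY_AUTONOMOUS = auto()   # e.g., a model used to improve video compression
    HUMAN_ON_THE_LOOP = auto()  # AI decides; low-confidence outputs go to a human
    DECISION_SUPPORT = auto()   # AI supplies an additional opinion to a human
    FULLY_MANUAL = auto()       # humans decide without acting on AI output

@dataclass
class ModelOutput:
    prediction: str
    confidence: float

def route(output: ModelOutput, mode: OversightMode, confidence_floor: float = 0.9) -> str:
    """Decide whether a single AI output is accepted or deferred to a human."""
    if mode is OversightMode.FULLY_AUTONOMOUS:
        return "accept"
    if mode in (OversightMode.DECISION_SUPPORT, OversightMode.FULLY_MANUAL):
        return "defer_to_human"
    # HUMAN_ON_THE_LOOP: accept confident outputs, defer the rest for review.
    return "accept" if output.confidence >= confidence_floor else "defer_to_human"
```

Writing the configuration down as an explicit, testable artifact is one way an organization might act on the appendix's call for clearly defined and differentiated human roles.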

According to the NIST AI RMF 1.0, AI systems can make decisions autonomously, defer to a human expert, or serve as an additional opinion for a human decision maker; some, such as video-compression models, need no oversight at all. Human oversight matters because decisions across AI design, development, deployment, and use reflect systemic and human cognitive biases. These biases, which can be exacerbated by AI system opacity and the resulting lack of transparency, shape how teams are structured and can influence downstream decisions by end users, decision makers, and policymakers, potentially leading to negative impacts. Human-AI interaction results vary: in perceptual-based judgment tasks, for example, AI can amplify human bias, yet judiciously organized human-AI teams can achieve complementarity and improved overall performance. Because humans interpret AI outputs differently depending on individual preferences, traits, and skills, the GOVERN function gives organizations the opportunity to define roles and responsibilities for human-AI teams and for those overseeing AI system performance.

2. Decisions that go into the design, development, deployment, evaluation, and use of AI systems reflect systemic and human cognitive biases. AI actors bring their cognitive biases, both individual and group, into the process. Biases can stem from end-user decision-making tasks and be introduced across the AI lifecycle via human assumptions, expectations, and decisions during design and modeling tasks. These biases, which are not necessarily always harmful, may be exacerbated by AI system opacity and the resulting lack of transparency. Systemic biases at the organizational level can influence how teams are structured and who controls the decision-making processes throughout the AI lifecycle. These biases can also influence downstream decisions by end users, decision makers, and policy makers and may lead to negative impacts.

3. Human-AI interaction results vary. Under certain conditions – for example, in perceptual-based judgment tasks – the AI part of the human-AI interaction can amplify human biases, leading to more biased decisions than the AI or human alone. When these variations are judiciously taken into account in organizing human-AI teams, however, they can result in complementarity and improved overall performance.

4. Presenting AI system information to humans is complex. Humans perceive and derive meaning from AI system output and explanations in different ways, reflecting different individual preferences, traits, and skills.

The GOVERN function provides organizations with the opportunity to clarify and define the roles and responsibilities for the humans in the Human-AI team configurations and those who are overseeing the AI system performance.

The NIST AI RMF 1.0 outlines two key functions for managing human-AI teams: GOVERN and MAP. The GOVERN function helps organizations define clear roles and responsibilities for human team members and for those overseeing AI system performance, and it creates mechanisms for making decision-making processes explicit to counter systemic bias. The MAP function focuses on building internal competency: documenting operator and practitioner proficiency, relevant technical standards and certifications, and processes for analyzing context, identifying system limitations, and examining real-world impacts throughout the AI lifecycle. Both functions emphasize interdisciplinary, demographically diverse teams and feedback from potentially impacted individuals and communities, so that AI systems reflect societal values and user intentions. Finally, the framework notes that ongoing research is needed on human-AI configurations, specifically how often and why people challenge or overrule AI-generated outputs.

The GOVERN function also creates mechanisms for organizations to make their decision-making processes more explicit, to help counter systemic biases.

The MAP function suggests opportunities to define and document processes for operator and practitioner proficiency with AI system performance and trustworthiness concepts, and to define relevant technical standards and certifications. Implementing MAP function categories and subcategories may help organizations improve their internal competency for analyzing context, identifying procedural and system limitations, exploring and examining impacts of AI-based systems in the real world, and evaluating decision-making processes throughout the AI lifecycle.

The GOVERN and MAP functions describe the importance of interdisciplinarity and demographically diverse teams and of utilizing feedback from potentially impacted individuals and communities. AI actors called out in the AI RMF who perform human factors tasks and activities can assist technical teams by anchoring design and development practices to user intentions, to representatives of the broader AI community, and to societal values. These actors further help to incorporate context-specific norms and values in system design and to evaluate end user experiences in conjunction with AI systems.

AI risk management approaches for human-AI configurations will be augmented by ongoing research and evaluation. For example, the degree to which humans are empowered and incentivized to challenge AI system output requires further study. Data about the frequency and rationale with which humans overrule AI system output in deployed systems may be useful to collect and analyze.
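The closing suggestion, collecting data about how often and why humans overrule AI output, maps naturally onto a small logging structure. The sketch below is a minimal illustration; the class, fields, and example rationale strings are all hypothetical.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class OverrideLog:
    """Records human-reviewed AI outputs and the rationale for any overrides."""
    total_outputs: int = 0
    overrides: list[tuple[str, str, str | None]] = field(default_factory=list)

    def record(self, ai_output: str, human_decision: str, rationale: str | None = None) -> None:
        """Log one reviewed output; disagreement counts as an override."""
        self.total_outputs += 1
        if human_decision != ai_output:
            self.overrides.append((ai_output, human_decision, rationale))

    def override_rate(self) -> float:
        """Fraction of reviewed outputs the human overruled."""
        return len(self.overrides) / self.total_outputs if self.total_outputs else 0.0

    def rationale_counts(self) -> Counter:
        """Tally stated reasons, e.g., to spot systematic distrust of the model."""
        return Counter(r for _, _, r in self.overrides if r is not None)

log = OverrideLog()
log.record("approve", "approve")
log.record("approve", "deny", rationale="applicant context missing from input data")
print(log.override_rate())      # 0.5
print(log.rationale_counts())   # Counter({'applicant context missing from input data': 1})
```

Aggregates like these would feed the MEASURE function's documentation practices and give GOVERN-level reviewers evidence about whether humans are actually empowered to challenge the system.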

Future research should examine how and why humans choose to override AI system decisions. The NIST AI RMF 1.0 (Appendix D) lists the attributes that guided the Framework's development; the first seven are: 1) It is risk-based, resource-efficient, pro-innovation, and voluntary. 2) It is consensus-driven and developed through an open, transparent process in which all stakeholders can participate. 3) It uses clear, plain language accessible to a broad audience, including non-experts, while retaining enough technical depth for practitioners, so that AI risks can be communicated within and between organizations and to the public. 4) It provides a common language for managing AI risk, including taxonomy, terminology, definitions, and metrics. 5) It is intuitive to use and readily adaptable into an organization's broader risk management strategy. 6) It is useful across perspectives, sectors, and technology domains and applicable to any AI technology. 7) It is outcome-focused and non-prescriptive.

Appendix D: Attributes of the AI RMF

NIST described several key attributes of the AI RMF when work on the Framework first began. These attributes have remained intact and were used to guide the AI RMF's development. They are provided here as a reference.

The AI RMF strives to:

1. Be risk-based, resource-efficient, pro-innovation, and voluntary.

2. Be consensus-driven and developed and regularly updated through an open, transparent process. All stakeholders should have the opportunity to contribute to the AI RMF's development.

3. Use clear and plain language that is understandable by a broad audience, including senior executives, government officials, non-governmental organization leadership, and those who are not AI professionals – while still of sufficient technical depth to be useful to practitioners. The AI RMF should allow for communication of AI risks across an organization, between organizations, with customers, and to the public at large.

4. Provide common language and understanding to manage AI risks. The AI RMF should offer taxonomy, terminology, definitions, metrics, and characterizations for AI risk.

5. Be easily usable and fit well with other aspects of risk management. Use of the Framework should be intuitive and readily adaptable as part of an organization's broader risk management strategy and processes. It should be consistent or aligned with other approaches to managing AI risks.

6. Be useful to a wide range of perspectives, sectors, and technology domains. The AI RMF should be universally applicable to any AI technology and to context-specific use cases.

7. Be outcome-focused and non-prescriptive. The Framework should provide a catalog of outcomes and approaches rather than prescribe one-size-fits-all requirements.

The AI RMF is designed to be a flexible, evolving resource that helps organizations manage AI risks. The remaining attributes (6-10) are: 6) universal applicability across AI technologies, sectors, and context-specific use cases; 7) a focus on a catalog of outcomes and approaches rather than one-size-fits-all requirements; 8) building on existing standards, guidelines, best practices, and tools while illustrating the need for additional, improved resources; 9) law- and regulation-agnosticism, supporting organizations operating under applicable domestic and international legal or regulatory regimes; and 10) status as a living document, readily updated as technology, understanding, and uses of AI evolve.

8. Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks – as well as illustrate the need for additional, improved resources.

9. Be law- and regulation-agnostic. The Framework should support organizations' abilities to operate under applicable domestic and international legal or regulatory regimes.

10. Be a living document. The AI RMF should be readily updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change and as stakeholders learn from implementing AI risk management generally and this framework in particular.

Entities

Accountable and Transparent framework_component

A characteristic of trustworthy AI that relates to processes and activities internal and external to the system.
  • AI RMF 1.0: The framework defines accountable and transparent as a core characteristic.

Affected individuals/communities person

Groups or individuals directly or indirectly impacted by AI systems or their outputs.

AI actors person

Individuals or groups involved in the design, development, deployment, evaluation, and use of AI systems.
  • AI RMF: AI actors are the primary audience and users of the AI RMF.

AI actors organization

Diverse groups including civil society, end users, and communities involved in the AI system lifecycle.
  • AI RMF 1.0: AI actors are responsible for performing functions and managing risks within the AI RMF 1.0.

AI Deployer person

An individual or entity responsible for implementing pre-trained AI models into specific real-world use cases.
  • AI Lifecycle: Deployers operate within the AI lifecycle to implement models.

AI deployers organization

Entities that implement and use AI systems, often in cooperation with developers.
  • AI systems: Deployers are responsible for the implementation and operational use of AI systems.

AI Deployment framework_component

A phase in the AI lifecycle focused on contextual decisions, system piloting, regulatory compliance, and production integration.

AI Design framework_component

A category of tasks within the AI lifecycle focused on planning, design, and data collection.
  • AI RMF 1.0: AI Design is defined as a task category within the AI RMF lifecycle.

AI Developer person

An individual or entity responsible for creating AI software, such as pre-trained models.
  • AI Lifecycle: Developers operate within the AI lifecycle to create software.

AI developers organization

Entities responsible for creating AI systems and adjusting transparency and accountability practices.
  • AI systems: Developers are responsible for the creation and maintenance of AI systems.

AI Development framework_component

A phase in the AI lifecycle encompassing tasks such as model building, training, testing, and calibration.

AI Impact Assessment Teams organization

Groups responsible for evaluating AI system accountability, bias, safety, and security.
  • Human Factors Professionals: Human factors professionals provide expertise to AI impact assessment teams.
  • Domain Experts: Domain experts interpret outputs and provide guidance to AI impact assessment teams.

AI Lifecycle framework_component

The various stages of AI development, deployment, and operation where risks can be measured and managed.
  • AI Developer: Developers operate within the AI lifecycle to create software.
  • AI Deployer: Deployers operate within the AI lifecycle to implement models.
  • Inscrutability: Inscrutability is a characteristic that can manifest during the AI lifecycle.
  • Human Baseline: Human baselines are used as a component for risk management within the AI lifecycle.

AI Risk Management Document document

A document discussing trustworthiness characteristics, risk management, and the roles of AI actors in the AI lifecycle.
  • Valid and Reliable: The document outlines the importance of validity and reliability as trustworthiness characteristics.
  • ISO 9000:2015: The document cites ISO 9000:2015 to define the concept of validation.
  • ISO/IEC TS 5723:2022: The document cites ISO/IEC TS 5723:2022 to define the concept of reliability.

AI Risk Management Framework Roadmap document

An associated document that captures priority research and guidance for the AI RMF.
  • AI RMF 1.0: The roadmap provides additional guidance for the AI RMF.

AI RMF document

A framework providing guidance on managing risks associated with AI systems.

AI RMF framework_component

A voluntary, outcome-focused, and non-prescriptive framework designed to help organizations manage AI risks and promote the development of trustworthy AI systems.

AI RMF 1.0 document

The NIST AI Risk Management Framework version 1.0 is a guidance document designed to help organizations manage risks and improve the trustworthiness of AI systems.
  • GOVERN: The AI RMF 1.0 document includes the GOVERN function as a core component.
  • MAP: The AI RMF 1.0 document includes the MAP function as a core component.
  • NIST AI 100-1: NIST AI 100-1 is the identifier for the AI RMF 1.0 document.
  • MEASURE function: The document details the categories and subcategories of the MEASURE function.

AI RMF 1.0 framework_component

The first version of the NIST AI Risk Management Framework, which provides guidelines, core functions, and strategies to help organizations manage AI risks and ensure system trustworthiness.
  • NIST AI 100-1: The NIST AI 100-1 document defines, contains, and outlines the AI RMF 1.0 framework.
  • NIST: The AI RMF 1.0 framework was developed and published by NIST.
  • ISO 31000:2018: The AI RMF adapts risk definitions from ISO 31000:2018.
  • AI Risk Management Framework Roadmap: The roadmap provides additional guidance for the AI RMF.
  • ISO GUIDE 73: The AI RMF 1.0 references ISO GUIDE 73 for definitions regarding risk tolerance.
  • NIST AI 100-1: The AI RMF 1.0 is the core framework component contained within and identified by the NIST AI 100-1 document.
  • Organizations: Organizations are advised to use the AI RMF to manage risks and document processes.
  • OECD Framework for the Classification of AI systems: The AI RMF references the OECD framework for its lifecycle and dimension diagrams.
  • AI actors: AI actors are responsible for performing functions and managing risks within the AI RMF 1.0.
  • ISO/IEC TS 5723:2022: The AI RMF 1.0 framework utilizes definitions from the ISO/IEC TS 5723:2022 standard.
  • ISO/IEC TS 5723:2022: The AI RMF 1.0 references the ISO/IEC TS 5723:2022 standard to define safe AI systems.
  • AI RMF Core: The AI RMF 1.0 includes the AI RMF Core as a primary component.
  • NIST AI RMF Playbook: The Playbook serves as an online companion resource to the AI RMF.
  • NIST Trustworthy and Responsible AI Resource Center: The AI RMF is part of the NIST Trustworthy and Responsible AI Resource Center.
  • GOVERN: The GOVERN function is a primary component of the AI RMF 1.0 framework.
  • MAP function: The MAP function is a core component of the AI RMF 1.0 used for risk assessment.
  • MEASURE function: The MEASURE function is a core component of the AI RMF 1.0 used for risk evaluation.
  • MANAGE function: The MANAGE function is a core component of the AI RMF 1.0 used for risk oversight and mitigation.
  • MAP function: The MAP function is a core component of the AI RMF 1.0 framework.
  • MAP: MAP is a core function within the AI RMF.
  • MEASURE: MEASURE is a core function and essential component of the AI RMF 1.0 framework.
  • AI RMF cross-sectoral profiles: AI RMF 1.0 includes cross-sectoral profiles for risk management.
  • AI Design: AI Design is defined as a task category within the AI RMF lifecycle.
  • AI Development: AI Development is defined as a task category within the AI RMF lifecycle.
  • TEVV: The AI RMF 1.0 framework incorporates TEVV tasks for AI lifecycle management.
  • Human Factors: The AI RMF 1.0 framework includes Human Factors tasks and activities.
  • Board of Directors: The Board of Directors is responsible for governance tasks outlined in the AI RMF.
  • Appendix C: AI Risk Management and Human-AI Interaction: Appendix C is a component of the AI RMF 1.0 framework.
  • Organizational Management: Organizational management is responsible for implementing AI governance frameworks.
  • Third-party entities: Third-party entities are involved in the development tasks governed by AI frameworks.

AI RMF Core framework_component

The central component of the AI RMF that provides outcomes and actions for managing AI risks through four core functions: Govern, Map, Measure, and Manage.
  • AI RMF Playbook: The AI RMF Core is a structural part of the AI RMF Playbook document.
  • AI RMF 1.0: The AI RMF document contains the Core component.
  • AI RMF 1.0: The AI RMF 1.0 includes the AI RMF Core as a primary component.

AI RMF cross-sectoral profiles framework_component

Components of the AI RMF that address risks common across various use cases or sectors.
  • AI RMF 1.0: AI RMF 1.0 includes cross-sectoral profiles for risk management.

AI RMF Current Profile framework_component

A description of the current state of AI risk management activities.

AI RMF Playbook document

A document providing guidance on AI risk management, featuring a version control system and frequent updates.
  • NIST: NIST is the organization that plans, maintains, and updates the AI RMF Playbook.
  • AIframework@nist.gov: The email address provided for public feedback on the document.
  • AI RMF Core: The AI RMF Core is a structural part of the AI RMF Playbook document.

AI RMF Playbook resource

A collection of additional resources related to the AI RMF.

AI RMF Profiles framework_component

A component of the AI RMF that provides specific implementations of functions, categories, and subcategories tailored for particular settings or applications.

AI RMF Target Profile framework_component

A description of the desired or target state of AI risk management activities.

AI systems framework_component

Complex technological systems whose operation, transparency, and accountability are the subject of the document.
  • AI developers: Developers are responsible for the creation and maintenance of AI systems.
  • AI deployers: Deployers are responsible for the implementation and operational use of AI systems.
  • Transparency tools: Transparency tools are used to provide documentation and insights into AI systems.
  • Explainability: Explainability is a core component for understanding AI system mechanisms.
  • Interpretability: Interpretability is a core component for understanding AI system outputs.

AIframework@nist.gov url

The official email address for submitting comments and feedback regarding the AI RMF Playbook.
  • AI RMF Playbook: The email address provided for public feedback on the document.

Appendix C: AI Risk Management and Human-AI Interaction framework_component

A specific section within the AI RMF 1.0 focusing on the intersection of human roles and AI systems.
  • AI RMF 1.0: Appendix C is a component of the AI RMF 1.0 framework.

Artificial Intelligence in Society—OECD iLibrary resource

A 2019 publication by the OECD defining AI actors.
  • OECD: The OECD published this resource defining AI actors.
  • AI RMF: The AI RMF references the OECD definition of AI actors.

Board of Directors organization

The governing body responsible for the oversight and sustainability of an organization.
  • AI RMF 1.0: The Board of Directors is responsible for governance tasks outlined in the AI RMF.

Domain Experts person

Practitioners or scholars providing specialized knowledge about specific industry sectors or application areas.

End users person

Individuals or groups that interact with an AI system for specific purposes.

Executive leadership person

The individuals responsible for organizational decision-making regarding AI risks.
  • GOVERN: Executive leadership is responsible for the governance decisions outlined in the GOVERN function.

Explainability framework_component

A representation of the mechanisms underlying the operation of AI systems.
  • AI systems: Explainability is a core component for understanding AI system mechanisms.

Explainable and Interpretable framework_component

A characteristic of trustworthy AI systems.
  • AI RMF 1.0: The framework defines explainable and interpretable as a core characteristic.

Fair with harmful bias managed framework_component

A characteristic of trustworthy AI systems.
  • AI RMF 1.0: The framework defines fair with harmful bias managed as a core characteristic.

Four Principles of Explainable Artificial Intelligence resource

A reference document detailing core principles for explainable AI.
  • NIST AI 100-1: The document references the principles of explainable AI.

General public person

The broader population that experiences the impacts of AI technologies.

Gina M. Raimondo person

The Secretary of the U.S. Department of Commerce.

GOVERN framework_component

A core function of the AI RMF that establishes the foundation for risk management by defining organizational culture, roles, responsibilities, and oversight processes.
  • AI RMF 1.0: The AI RMF includes the GOVERN function.
  • AI RMF 1.0: The GOVERN function is a primary component of the AI RMF 1.0 framework.
  • Executive leadership: Executive leadership is responsible for the governance decisions outlined in the GOVERN function.
  • MANAGE: The MANAGE function utilizes risk definitions and processes established in the GOVERN function.

GOVERN 4.3 framework_component

A framework component focused on organizational practices for AI testing, incident identification, and information sharing.

GOVERN 5 framework_component

A framework component regarding processes for robust engagement with relevant AI actors.
  • GOVERN 5.1: GOVERN 5.1 is a subcategory of GOVERN 5.
  • GOVERN 5.2: GOVERN 5.2 is a subcategory of GOVERN 5.

GOVERN 5.1 framework_component

A subcomponent of GOVERN 5 focused on collecting and integrating feedback from external stakeholders regarding AI risks.
  • GOVERN 5: GOVERN 5.1 is a subcategory of GOVERN 5.

GOVERN 5.2 framework_component

A subcomponent of GOVERN 5 focused on incorporating adjudicated feedback into AI system design.
  • GOVERN 5: GOVERN 5.2 is a subcategory of GOVERN 5.

GOVERN 6 framework_component

A framework component addressing AI risks and benefits arising from third-party software, data, and supply chain issues.
  • GOVERN 6.1: GOVERN 6.1 is a subcategory of GOVERN 6.
  • GOVERN 6.2: GOVERN 6.2 is a subcategory of GOVERN 6.

GOVERN 6.1 framework_component

A subcomponent of GOVERN 6 addressing risks associated with third-party entities and intellectual property.
  • GOVERN 6: GOVERN 6.1 is a subcategory of GOVERN 6.

GOVERN 6.2 framework_component

A subcomponent of GOVERN 6 focused on contingency processes for high-risk third-party failures.
  • GOVERN 6: GOVERN 6.2 is a subcategory of GOVERN 6.

GOVERN function framework_component

A core function of the NIST AI RMF that establishes the organizational structures, policies, and procedures necessary for effective risk management.
  • NIST AI RMF Playbook: The playbook describes the categories and subcategories of the GOVERN function.
  • NIST AI RMF 1.0: The GOVERN function is a core component defined within the AI RMF 1.0.

https://doi.org/10.6028/NIST.AI.100-1 url

The official digital object identifier URL where the NIST AI RMF 100-1 publication is hosted.
  • NIST AI 100-1: The document is accessible via this specific URL.

Human Baseline framework_component

Metric standards used to compare AI performance against human activity in tasks like decision-making.
  • AI Lifecycle: Human baselines are used as a component for risk management within the AI lifecycle.

Human Factors framework_component

Tasks and activities focused on human-centered design and the integration of human dynamics in AI systems.
  • AI RMF 1.0: The AI RMF 1.0 framework includes Human Factors tasks and activities.

Human Factors Professionals person

Multidisciplinary experts who focus on user experience, human-centered evaluation, and context of use in AI systems.

Inscrutability framework_component

The opaque nature of AI systems, including limited explainability and lack of documentation, which complicates risk measurement.
  • AI Lifecycle: Inscrutability is a characteristic that can manifest during the AI lifecycle.

Interpretability framework_component

The meaning of AI systems' output in the context of their designed functional purposes.
  • AI systems: Interpretability is a core component for understanding AI system outputs.

ISO 26000:2010 resource

An international standard providing guidance on social responsibility.
  • NIST AI 100-1: The document references ISO 26000:2010 to define social responsibility.

ISO 31000:2018 resource

An international standard for risk management used to adapt definitions in the AI RMF.
  • AI RMF 1.0: The AI RMF adapts risk definitions from ISO 31000:2018.

ISO 31000:2018 document

An international standard providing guidelines for risk management.
  • AI RMF: The AI RMF incorporates concepts and definitions from ISO 31000:2018.

ISO 9000:2015 resource

An international standard providing definitions for validation and quality management.

ISO GUIDE 73 resource

An international standard that provides essential vocabulary and guidelines for general risk management, including definitions for terms like residual risk.
  • AI RMF 1.0: The AI RMF 1.0 references ISO GUIDE 73 for definitions regarding risk tolerance.
  • AI RMF 1.0: The AI RMF 1.0 references ISO GUIDE 73 to define the term residual risk.

ISO/IEC organization

A joint technical committee of the International Organization for Standardization and the International Electrotechnical Commission.

ISO/IEC 22989:2022 document

An international standard defining terminology and concepts for artificial intelligence.
  • AI RMF: The AI RMF adapts definitions from the ISO/IEC 22989 standard.
  • ISO/IEC: ISO/IEC is the publishing body for the standard.

ISO/IEC TR 24368:2022 resource

An international technical report concerning AI sustainability and professional responsibility.
  • NIST AI 100-1: The document references ISO/IEC TR 24368:2022 for definitions of sustainability and professional responsibility.

ISO/IEC TR 24368:2022 document

A technical report that discusses the unique position of AI to influence people, society, and the future.

ISO/IEC TS 5723:2022 resource

An international technical specification defining reliability in the context of system performance.

ISO/IEC TS 5723:2022 document

An international technical specification that provides standardized definitions for key concepts such as reliability, accuracy, robustness, and system resilience.
  • AI RMF 1.0: The AI RMF 1.0 framework utilizes definitions from the ISO/IEC TS 5723:2022 standard.
  • AI RMF 1.0: The AI RMF 1.0 references the ISO/IEC TS 5723:2022 standard to define safe AI systems.
  • NIST AI 100-1: The document references ISO/IEC TS 5723:2022 for definitions of resilience.

January 2023 date

The publication date of the AI RMF 1.0.

Laurie E. Locascio person

The Director of NIST and Under Secretary of Commerce for Standards and Technology.

MANAGE framework_component

A core function of the AI RMF focused on allocating resources to prioritize, respond to, and recover from identified AI risks.
  • AI RMF 1.0: The AI RMF includes the MANAGE function.
  • MEASURE: MEASURE informs the MANAGE function.
  • NIST AI RMF 1.0: The MANAGE function is a core component defined within the AI RMF.
  • GOVERN: The MANAGE function utilizes risk definitions and processes established in the GOVERN function.
  • MAP: The MANAGE function utilizes contextual information and risk assessments carried out in the MAP function.
  • MEASURE: The MANAGE function utilizes analytical output and documentation practices from the MEASURE function.

MANAGE 1 framework_component

A sub-component of the MANAGE function focused on AI risk assessment and prioritization.

MANAGE 2 framework_component

A sub-component of the MANAGE function focused on strategies to maximize AI benefits and minimize negative impacts.

MANAGE 3 framework_component

A sub-category of the MANAGE function focusing on third-party AI risks and benefits.
  • MANAGE function: MANAGE 3 is a subcategory under the MANAGE function.

MANAGE 4 framework_component

A sub-category of the MANAGE function focusing on risk treatments, response, and recovery.
  • MANAGE function: MANAGE 4 is a subcategory under the MANAGE function.

MANAGE function framework_component

A core function of the AI RMF focused on prioritizing, responding to, and mitigating AI risks and benefits.
  • AI RMF 1.0: The MANAGE function is a core component of the AI RMF 1.0 used for risk oversight and mitigation.
  • MAP function: The MANAGE function uses outcomes from the MAP function.
  • NIST AI RMF 1.0: The MANAGE function is a core component defined within the AI RMF 1.0.
  • MANAGE 1: MANAGE 1 is a sub-category of the MANAGE function.
  • MANAGE 2: MANAGE 2 is a sub-category of the MANAGE function.
  • MANAGE 3: MANAGE 3 is a subcategory under the MANAGE function.
  • MANAGE 4: MANAGE 4 is a subcategory under the MANAGE function.

MAP framework_component

A core function of the AI RMF focused on identifying, contextualizing, and framing AI risks and impacts within specific operational environments.
  • AI RMF 1.0: The AI RMF includes the MAP function.
  • AI RMF 1.0: MAP is a core function within the AI RMF.
  • MEASURE: MEASURE uses knowledge identified in the MAP function.
  • MANAGE: The MANAGE function utilizes contextual information and risk assessments carried out in the MAP function.

MAP 2 framework_component

A sub-component of the MAP function focused on the categorization of AI systems.
  • MAP function: MAP 2 is a specific category within the MAP function.

MAP 3 framework_component

A sub-component of the MAP function focused on understanding AI capabilities, usage, goals, and costs.
  • MAP function: MAP 3 is a specific category within the MAP function.

MAP function framework_component

A core function of the AI RMF designed to establish the context of AI systems and identify, categorize, and understand potential risks and contributing factors.
  • AI RMF 1.0: The MAP function is a core component of the AI RMF 1.0 used for risk assessment.
  • MEASURE function: The MEASURE function uses outcomes from the MAP function.
  • MANAGE function: The MANAGE function uses outcomes from the MAP function.
  • NIST AI RMF 1.0: The MAP function is a core component defined within the AI RMF 1.0.
  • NIST AI RMF Playbook: The Playbook describes practices related to mapping AI risks using the MAP function.
  • AI RMF 1.0: The MAP function is a core component of the AI RMF 1.0 framework.
  • MAP 2: MAP 2 is a specific category within the MAP function.
  • MAP 3: MAP 3 is a specific category within the MAP function.
  • AI RMF 1.0: The MAP function is a defined category/subcategory structure within the AI RMF 1.0.
  • MEASURE 2.6: Safety risks evaluated in MEASURE 2.6 are identified in the MAP function.
  • MEASURE 2.7: Security and resilience evaluated in MEASURE 2.7 are identified in the MAP function.
  • MEASURE 2.8: Transparency and accountability risks evaluated in MEASURE 2.8 are identified in the MAP function.
  • MEASURE 2.9: Model explanation and validation in MEASURE 2.9 are informed by the MAP function.
  • MEASURE 2.10: Privacy risks evaluated in MEASURE 2.10 are identified in the MAP function.
  • MEASURE 2.11: Fairness and bias evaluated in MEASURE 2.11 are identified in the MAP function.
  • MEASURE 2.12: Environmental impact evaluated in MEASURE 2.12 is identified in the MAP function.

MEASURE framework_component

A core function of the AI RMF focused on the quantitative and qualitative assessment, analysis, and documentation of AI risks.
  • AI RMF 1.0: The AI RMF includes the MEASURE function.
  • AI RMF 1.0: MEASURE is a core function and essential component of the AI RMF 1.0 framework.
  • MAP: MEASURE uses knowledge identified in the MAP function.
  • MANAGE: MEASURE informs the MANAGE function.
  • TEVV: The MEASURE function includes TEVV processes.
  • NIST AI RMF Playbook: The Playbook describes practices related to the MEASURE function.
  • MANAGE: The MANAGE function utilizes analytical output and documentation practices from the MEASURE function.

MEASURE 2.10 framework_component

A subcategory of the MEASURE function focusing on privacy risk examination.
  • MAP function: Privacy risks evaluated in MEASURE 2.10 are identified in the MAP function.

MEASURE 2.11 framework_component

A subcategory of the MEASURE function focusing on fairness and bias evaluation.
  • MAP function: Fairness and bias evaluated in MEASURE 2.11 are identified in the MAP function.

MEASURE 2.12 framework_component

A subcategory of the MEASURE function focusing on environmental impact and sustainability.
  • MAP function: Environmental impact evaluated in MEASURE 2.12 is identified in the MAP function.

MEASURE 2.13 framework_component

A subcategory of the MEASURE function focusing on the effectiveness of TEVV metrics and processes.

MEASURE 2.6 framework_component

A subcategory of the MEASURE function focusing on regular safety risk evaluation and system reliability.
  • MAP function: Safety risks evaluated in MEASURE 2.6 are identified in the MAP function.

MEASURE 2.7 framework_component

A subcategory of the MEASURE function focusing on AI system security and resilience.
  • MAP function: Security and resilience evaluated in MEASURE 2.7 are identified in the MAP function.

MEASURE 2.8 framework_component

A subcategory of the MEASURE function focusing on transparency and accountability risks.
  • MAP function: Transparency and accountability risks evaluated in MEASURE 2.8 are identified in the MAP function.

MEASURE 2.9 framework_component

A subcategory of the MEASURE function focusing on AI model explanation, validation, and documentation.
  • MAP function: Model explanation and validation in MEASURE 2.9 are informed by the MAP function.

MEASURE 3 framework_component

A subcategory of the MEASURE function focusing on mechanisms for tracking AI risks over time.

MEASURE function framework_component

A core function of the AI RMF focused on assessing, documenting, and monitoring AI system performance, safety, and risks based on outcomes from the MAP function.
  • AI RMF 1.0: The MEASURE function is a core component of the AI RMF 1.0 used for risk evaluation.
  • MAP function: The MEASURE function uses outcomes from the MAP function.
  • NIST AI RMF 1.0: The MEASURE function is a core component defined within the AI RMF 1.0.
  • AI RMF 1.0: The AI RMF 1.0 document details the categories and subcategories of the MEASURE function.

National AI Initiative Act of 2020 document

A legislative act that directs and informs the development of national AI efforts and standards by NIST.
  • AI RMF 1.0: Development of the AI RMF is consistent with the National AI Initiative Act of 2020.

National Artificial Intelligence Initiative Act of 2020 document

Legislation (P.L. 116-283) that directs national AI efforts and mandated the creation of the NIST AI RMF.
  • AI RMF: The Act directed the creation and goal of the AI RMF.

National Institute of Standards and Technology organization

A U.S. federal agency that develops standards and technology, responsible for the AI RMF.

National Security Commission on Artificial Intelligence organization

A commission that provided recommendations for AI efforts.

NIST organization

The National Institute of Standards and Technology (NIST) is a U.S. government agency responsible for developing and maintaining standards and frameworks, including the AI Risk Management Framework.
  • AI RMF Playbook: NIST is the organization that plans, maintains, and updates the AI RMF Playbook.
  • AI RMF 1.0: NIST is the publisher of the AI RMF 1.0 document.
  • AI RMF 1.0: The AI RMF 1.0 framework was developed and published by NIST.
  • AI RMF: NIST manages the development and alignment of the AI RMF.
  • AI RMF 1.0: NIST developed and authored the AI RMF 1.0.
  • AI RMF 1.0: The AI RMF 1.0 framework was developed and published by NIST.
  • NIST AI 100-1: The document was published by the National Institute of Standards and Technology (NIST).
  • AI RMF: NIST developed and modified the AI RMF.
  • NIST Special Publication 1270: NIST is the publisher of the Special Publication 1270.
  • NIST AI RMF Playbook: The playbook is published by NIST.

NIST AI 100-1 document

The official NIST publication identifier for the AI Risk Management Framework (AI RMF 1.0), providing technical guidance and standards for managing AI risks.

NIST AI RMF 1.0 document

The AI Risk Management Framework version 1.0 published by NIST.
  • MAP function: The MAP function is a core component defined within the AI RMF 1.0.
  • MEASURE function: The MEASURE function is a core component defined within the AI RMF 1.0.
  • MANAGE function: The MANAGE function is a core component defined within the AI RMF 1.0.
  • GOVERN function: The GOVERN function is a core component defined within the AI RMF 1.0.
  • NIST AI 100-1: NIST AI 100-1 is the publication identifier for the AI RMF 1.0 content.
  • MANAGE: The MANAGE function is a core component defined within the AI RMF.
  • NIST AI RMF Playbook: The framework document references the Playbook for practical implementation.

NIST AI RMF Playbook resource

An online companion resource to the AI RMF that provides practical guidance, tactical actions, and practices for mapping and managing AI risks.
  • AI RMF 1.0: The Playbook serves as an online companion resource to the AI RMF.
  • NIST Trustworthy and Responsible AI Resource Center: The Playbook is part of the NIST Trustworthy and Responsible AI Resource Center.
  • MAP function: The Playbook describes practices related to mapping AI risks using the MAP function.
  • MEASURE: The Playbook describes practices related to the MEASURE function.
  • NIST AI RMF 1.0: The framework document references the Playbook for practical implementation.

NIST AI RMF Playbook document

A document providing guidance and practices related to governing AI risks.
  • GOVERN function: The playbook describes the categories and subcategories of the GOVERN function.
  • NIST: The playbook is published by NIST.

NIST AI RMF website url

The official web portal for the NIST AI Risk Management Framework.
  • AI RMF Playbook: The Playbook is accessible via the NIST AI RMF website URL.

NIST Cybersecurity Framework framework_component

A framework developed by NIST to provide organizations with guidance on identifying and reducing cybersecurity risks.
  • NIST AI 100-1: The document references the NIST Cybersecurity Framework for security guidance.
  • AI RMF: The NIST Cybersecurity Framework shares common features with the AI RMF and informs its functions.

NIST Privacy Framework framework_component

A framework providing guidance to organizations to reduce privacy risks.
  • AI RMF: The NIST Privacy Framework shares common features with the AI RMF and informs its functions.

NIST Risk Management Framework framework_component

A framework providing broad guidance for enterprise risk management.
  • AI RMF: The NIST Risk Management Framework shares common features with the AI RMF and informs its functions.

NIST Special Publication 1270 document

A NIST publication titled 'Towards a Standard for Identifying and Managing Bias in Artificial Intelligence'.
  • NIST AI 100-1: The document references SP 1270 for further information on bias.
  • NIST: NIST is the publisher of the Special Publication 1270.

NIST Trustworthy and Responsible AI Resource Center organization

A central hub for resources related to trustworthy and responsible AI, housing the AI RMF and the Playbook.
  • NIST AI RMF Playbook: The Playbook is part of the NIST Trustworthy and Responsible AI Resource Center.
  • AI RMF 1.0: The AI RMF is part of the NIST Trustworthy and Responsible AI Resource Center.

OECD organization

The Organisation for Economic Co-operation and Development (OECD) is an international organization that defines AI actors and publishes frameworks for classifying AI lifecycle activities.

OECD Framework for the Classification of AI systems resource

An OECD digital economy paper that provides a classification system for AI, serving as a reference for the AI RMF lifecycle dimensions.
  • AI RMF 1.0: The AI RMF references the OECD framework for its lifecycle dimensions.
  • AI RMF 1.0: The AI RMF references the OECD framework for its lifecycle and dimension diagrams.
  • OECD: The OECD is the publisher of the classification framework.

OECD Framework for the Classification of AI systems document

A document published by the OECD detailing socio-technical dimensions for AI policy and governance.
  • OECD: The OECD is the publisher of the classification framework document.
  • AI RMF: The AI RMF incorporates and modifies dimensions from the OECD framework.

OECD Recommendation on AI:2019 document

An international policy document providing recommendations on artificial intelligence.
  • AI RMF: The AI RMF adapts definitions from the OECD Recommendation on AI.
  • OECD: The OECD is the publishing body for the recommendation.

OMB Circular A-130:2016 document

A policy document from the Office of Management and Budget regarding the management of federal information resources.
  • AI RMF: The AI RMF utilizes risk definitions adapted from OMB Circular A-130:2016.

Operation and Monitoring framework_component

A phase in the AI lifecycle focused on operating the system and assessing system output and impacts.

Organizational Management organization

Internal leadership responsible for AI governance and organizational sustainability.
  • AI RMF 1.0: Organizational management is responsible for implementing AI governance frameworks.

Organizations organization

Entities that develop or deploy AI systems and are responsible for managing associated risks.
  • AI RMF 1.0: Organizations are advised to use the AI RMF to manage risks and document processes.

Privacy-enhanced framework_component

A characteristic of trustworthy AI systems.
  • AI RMF 1.0: The framework defines privacy-enhanced as a core characteristic.

Privacy-enhancing technologies (PETs) framework_component

Technologies designed to protect privacy in AI systems.
  • AI RMF 1.0: The framework discusses PETs as a method to support privacy-enhanced AI.

Psychological Foundations of Explainability and Interpretability in Artificial Intelligence resource

A reference document exploring the psychological aspects of AI explainability.
  • NIST AI 100-1: The document references the psychological foundations resource.

Safe framework_component

A characteristic of trustworthy AI systems.
  • AI RMF 1.0: The framework defines safe as a core characteristic.

Secure and Resilient framework_component

A characteristic of trustworthy AI systems.
  • AI RMF 1.0: The framework defines secure and resilient as a core characteristic.

Secure Software Development Framework framework_component

A framework focused on integrating security into the software development lifecycle.
  • AI RMF: The Secure Software Development Framework shares common features with the AI RMF and informs its functions.

Test, Evaluation, Verification, and Validation (TEVV) framework_component

A set of tasks performed throughout the entire AI lifecycle to ensure system quality and reliability.
  • AI Development: TEVV tasks are performed throughout the AI lifecycle, including the development phase.
  • AI Deployment: TEVV tasks are performed throughout the AI lifecycle, including the deployment phase.
  • Operation and Monitoring: TEVV tasks are performed throughout the AI lifecycle, including the operation and monitoring phase.

TEVV framework_component

Test, evaluation, verification, and validation processes performed throughout the AI lifecycle to assess system performance.
  • MEASURE: The MEASURE function includes TEVV processes.
  • AI RMF 1.0: The AI RMF 1.0 framework incorporates TEVV tasks for AI lifecycle management.

The NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management document

A document providing a framework for organizations to manage privacy risks.
  • AI RMF 1.0: The text cites the Privacy Framework while discussing AI risk management.

Third-party entities organization

External providers, developers, and vendors of AI technologies and services.
  • AI RMF 1.0: Third-party entities are involved in the development tasks governed by AI frameworks.

Transparency tools resource

Mechanisms and documentation used to enhance the clarity and accountability of AI systems.
  • AI systems: Transparency tools are used to provide documentation and insights into AI systems.

U.S. Department of Commerce organization

The cabinet-level department that oversees the National Institute of Standards and Technology.

Valid and Reliable framework_component

A foundational characteristic of trustworthy AI systems that requires them to meet specific intended use requirements and perform consistently without failure.
  • AI RMF 1.0: The framework defines valid and reliable as a core characteristic.
  • AI Risk Management Document: The document outlines the importance of validity and reliability as trustworthiness characteristics.

Version Control Table framework_component

A component within the AI RMF used to track the history of changes, versions, and dates.
  • NIST AI 100-1: The framework includes a version control table to track revisions.