AI Governance Series: NIST AI RMF

Introduction to the NIST AI Risk Management Framework (RMF)

The NIST AI Risk Management Framework (RMF) is a guide that organizations can use to identify and manage the risks associated with artificial intelligence (AI) systems as part of their AI governance programs. The document is divided into two parts: the first discusses the risks associated with AI systems, and the second provides a framework for managing those risks.

Risks Associated with AI Systems

The AI RMF identifies a number of risks that can be associated with AI systems, including:

Bias

AI systems can be biased, which can lead to unfair or discriminatory outcomes. Bias can occur in AI systems in a number of ways, including:

 

  • Data bias: AI systems can be biased if they are trained on data that is biased. For example, if an AI system is trained on a dataset of resumes that is mostly from men, the system may be more likely to recommend men for jobs.

 

  • Algorithmic bias: AI systems can also be biased if the algorithms they use encode flawed assumptions, even when the training data looks neutral. For example, recidivism-prediction algorithms have been found to flag people of color as high risk at disproportionate rates, producing biased outcomes.

 

  • Human bias: Finally, AI systems can produce biased outcomes when the people who use them are biased. For example, if a hiring manager uses an AI system to decide whom to interview, the manager’s own biases may shape how the system’s recommendations are interpreted and applied.

 

The AI RMF document provides a number of recommendations for organizations to mitigate the risk of bias in AI systems. These recommendations include:

 

  • Use a diverse dataset: Organizations should use a diverse dataset to train their AI systems. This will help to ensure that the systems are not biased against any particular group of people.

 

  • Use transparent algorithms: Organizations should use transparent algorithms in their AI systems. This will make it easier to identify and address any biases that may exist in the systems.

 

  • Monitor for bias: Organizations should monitor their AI systems for bias. This can be done by tracking the systems’ performance and by conducting audits; a minimal example of such a check follows this list.

 

  • Address bias: If bias is found in an AI system, organizations should take steps to address it. This may involve retraining the system on a different dataset, changing the algorithm, or using a different system altogether.
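
To make the monitoring recommendation concrete, here is a minimal sketch in Python of a selection-rate audit using the common "four-fifths" heuristic. The hiring scenario, function names, and logged data are illustrative assumptions; the AI RMF does not prescribe a specific fairness test.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs from audit logs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Example audit log: the model selects men at twice the rate of women.
log = ([("men", True)] * 50 + [("men", False)] * 50
       + [("women", True)] * 25 + [("women", False)] * 75)
print(four_fifths_check(log))  # {'men': True, 'women': False} -> investigate

A failed check is a signal to dig deeper with audits and retraining, not a definitive finding of discrimination on its own.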

 

Privacy

AI systems can collect and use personal data, which raises privacy concerns. They can pose a risk to privacy in a number of ways, including:

 

  • Data collection: AI systems can collect personal data about people, including their names, addresses, phone numbers, and other sensitive information.

 

  • Data use: AI systems can use personal data to make decisions about people, such as whether to approve them for a loan or a job.

 

  • Data sharing: AI systems can share personal data with other organizations, such as marketing companies or government agencies.

 

The AI RMF document provides a number of recommendations for organizations to mitigate privacy risks in AI systems. These recommendations include:

 

  • Obtain consent: Organizations should obtain consent from people before collecting or using their personal data.

 

  • Minimize data collection: Organizations should only collect the personal data that is necessary for the AI system to function.

 

  • Encrypt data: Organizations should encrypt personal data to protect it from unauthorized access; a minimal encryption sketch follows this list.

 

  • Delete data: Organizations should delete personal data when it is no longer needed.
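
Here is a minimal sketch of encrypting a personal-data record at rest, using Python's third-party cryptography package. The record contents are made up, and the key handling is deliberately simplified; in practice the key would live in a secrets manager or HSM, never alongside the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store in a secrets manager, not on disk
fernet = Fernet(key)

record = b'{"name": "A. Person", "phone": "555-0100"}'
token = fernet.encrypt(record)       # ciphertext is safe to store
print(fernet.decrypt(token))         # readable only with the key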

 

These recommendations give organizations a practical starting point for identifying and mitigating privacy risks so that AI systems can be used safely and responsibly.

 

In addition to the above, the AI RMF document also discusses the following privacy-related risks associated with AI systems:

 

  • Surveillance: AI systems can be used to collect and track people’s movements and activities. This can raise privacy concerns, especially in places where people have a reasonable expectation of privacy, such as their homes.

 

  • Discrimination: AI systems can be used to make decisions about people that could have a discriminatory impact. For example, an AI system that is used to make hiring decisions could be biased against people of color or women.

 

  • Manipulation: AI systems can be used to manipulate people’s thoughts and behaviors. This could be used for harmful purposes, such as spreading misinformation or propaganda.

 

Security

AI systems can be hacked or manipulated, which can lead to data breaches or other security incidents. AI systems can pose a risk to security in a number of ways, including:

 

  • Data breaches: AI systems can be hacked or manipulated, which could lead to a data breach. This could expose sensitive data, such as personal information or financial data, to unauthorized individuals.

 

  • Malware: AI systems can be infected with malware, which could be used to damage the system or steal data.

 

  • Denial-of-service attacks: AI systems can be targeted by denial-of-service attacks, which could make the system unavailable to users.

 

The AI RMF document provides a number of recommendations for organizations to mitigate security risks in AI systems. These recommendations include:

 

  • Use strong security measures: Organizations should use strong security measures to protect their AI systems, such as firewalls, intrusion detection systems, and encryption.

 

  • Keep systems up to date: Organizations should keep their AI systems up to date with the latest security patches.

 

  • Monitor systems for threats: Organizations should monitor their AI systems for threats, such as unauthorized access or malicious activity; a minimal monitoring sketch follows this list.

 

  • Have a plan for responding to security incidents: Organizations should have a plan for responding to security incidents, such as data breaches or malware infections.
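
As one concrete example of monitoring, here is a minimal sketch that flags clients hammering an AI inference endpoint, which can indicate a denial-of-service attempt or model-extraction probing. The log format, window, and threshold are illustrative assumptions and would be tuned to the system's normal traffic.

from collections import Counter

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100  # tune to normal traffic for the system

def flag_suspicious_clients(request_log, now):
    """request_log: iterable of (timestamp, client_id); returns noisy clients."""
    recent = [client for ts, client in request_log
              if now - ts <= WINDOW_SECONDS]
    counts = Counter(recent)
    return {client: n for client, n in counts.items()
            if n > MAX_REQUESTS_PER_WINDOW}

log = [(0, "clientA")] * 150 + [(30, "clientB")] * 20
print(flag_suspicious_clients(log, now=45))  # {'clientA': 150}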

 

In addition to the above, the AI RMF document also discusses the following security-related risks associated with AI systems:

 

  • Attacks on AI algorithms: AI algorithms can be attacked by hackers, who could try to manipulate the algorithms to produce incorrect or misleading results.

 

  • Attacks on AI infrastructure: AI systems are often reliant on complex infrastructure, such as cloud computing platforms and data centers. These infrastructures can be targeted by hackers, who could try to disrupt or disable them.

 

  • Attacks on AI users: AI systems can be used to target individuals with malicious intent, such as phishing attacks or identity theft.

 

Safety

AI systems can malfunction or make mistakes, which can lead to physical harm or property damage. AI systems can pose a risk to safety in a number of ways, including:

 

  • Malfunction: AI systems can malfunction, which could lead to physical harm or property damage. For example, an AI-powered car that malfunctions could crash into another car or pedestrian.

 

  • Misuse: AI systems can be misused, which could lead to physical harm or property damage. For example, an AI-powered weapon could be used to harm people or destroy property.

 

  • Unintended consequences: AI systems can have unintended consequences, which could lead to physical harm or property damage. For example, an AI-powered system that is designed to optimize traffic flow could end up causing more traffic congestion.

 

The AI RMF document provides a number of recommendations for organizations to mitigate safety risks in AI systems. These recommendations include:

 

  • Design for safety: Organizations should design their AI systems with safety in mind. This includes using safe algorithms, testing the systems for safety, and having a plan for responding to safety incidents; a minimal guardrail sketch follows this list.

 

  • Use safe controls: Organizations should use safe controls to mitigate the risk of accidents or injuries. This includes using physical barriers, warning signs, and procedures to prevent people from being harmed by AI systems.

 

  • Educate users: Organizations should educate users about the risks of AI systems and how to use them safely. This includes providing training on how to operate the systems and how to identify and avoid potential hazards.
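
One common design-for-safety pattern is a "safe envelope" guardrail that never trusts a model's output blindly. The sketch below assumes a hypothetical AI controller proposing a vehicle speed; the limits, function names, and fallback value are illustrative, and a real system would also log and escalate the event.

SAFE_MIN_KMH, SAFE_MAX_KMH = 0.0, 50.0   # limits set by safety engineers

def safe_speed(model_output_kmh, fallback_kmh=10.0):
    """Accept the model's proposal only if it lies inside the safe envelope."""
    if SAFE_MIN_KMH <= model_output_kmh <= SAFE_MAX_KMH:
        return model_output_kmh
    # Malfunctioning or manipulated model: refuse the output and fall back.
    print(f"unsafe proposal {model_output_kmh} km/h; using fallback")
    return fallback_kmh

print(safe_speed(35.0))   # inside the envelope -> 35.0
print(safe_speed(180.0))  # outside the envelope -> 10.0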

 

In addition to the above, the AI RMF document also discusses the following safety-related risks associated with AI systems:

 

  • Errors in AI algorithms: AI algorithms can contain errors, which could lead to the system making incorrect decisions. This could lead to physical harm or property damage.

 

  • Lack of transparency: AI systems are often opaque, which makes it difficult to understand how they work. This can make it difficult to identify and mitigate potential safety risks.

 

  • Bias in AI systems: AI systems can be biased, which could lead to the system making unfair or discriminatory decisions. This could lead to physical harm or property damage.

 

Framework for Managing Risks as Part of AI Governance

The AI RMF provides a framework that organizations can use to manage these risks as part of a holistic approach to AI governance. The RMF itself organizes these activities into four core functions (Govern, Map, Measure, and Manage); the steps below present them as a practical identify, assess, control, and monitor cycle:

 

Identify the risks 

Organizations need to identify the risks that are associated with their AI systems. This can be done by considering each risk’s potential impact, its likelihood of occurring, and the controls already in place to mitigate it.

 

The main items to consider are (a simple risk-register sketch capturing them follows this list):

 

  • The purpose of the AI system: What is the AI system designed to do? What are the potential benefits of the system?

 

  • The data used to train the AI system: Where did the data come from? Is the data accurate and reliable?

 

  • The AI algorithm: How does the AI algorithm work? Is the algorithm transparent and understandable?

 

  • The AI system’s environment: How will the AI system be used? What are the potential risks in the environment where the system will be used?
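
As an illustration, here is a minimal sketch of a risk-register entry capturing those four items. The fields, class name, and example values are assumptions for illustration, not a structure defined by the AI RMF.

from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    system: str
    purpose: str            # what the AI system is designed to do
    data_sources: list      # where the training data came from
    algorithm: str          # how the model works; how transparent it is
    environment: str        # where and how the system will be used
    risks: list = field(default_factory=list)

entry = AIRiskEntry(
    system="resume-screening-v2",
    purpose="Rank job applicants for recruiter review",
    data_sources=["historical hiring decisions, 2015-2023"],
    algorithm="gradient-boosted trees, no post-hoc explanations",
    environment="HR portal used by recruiters across regions",
    risks=["data bias from historical decisions", "applicant privacy"],
)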

 

Assess the risks

  1. Once the risks have been identified, they need to be assessed. This involves evaluating the likelihood of each risk occurring and its potential impact; a simple scoring sketch follows this list.
  2. Organizations should then develop controls to mitigate the risks. Controls can include technical controls, such as firewalls and intrusion detection systems, and non-technical controls, such as policies and procedures.
  3. Finally, organizations should monitor and evaluate the controls to ensure that they are effective. This can be done by tracking the performance of the controls and by conducting audits. Steps 2 and 3 are expanded in their own sections below.
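
Here is a minimal likelihood-times-impact scoring sketch over the identified risks. The 1-to-5 scales, example risks, and scores are illustrative assumptions; the AI RMF does not mandate a particular scoring scheme.

def risk_score(likelihood, impact):
    """Both inputs on a 1 (low) to 5 (high) scale."""
    return likelihood * impact

def triage(risks):
    """risks: list of (name, likelihood, impact); highest scores first."""
    scored = [(name, risk_score(l, i)) for name, l, i in risks]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

risks = [("data bias in training set", 4, 4),
         ("model extraction via API", 2, 3),
         ("PII exposure in logs", 3, 5)]
for name, score in triage(risks):
    print(f"{score:>2}  {name}")   # highest-scoring risks get controls first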

 

Here are some additional tips for assessing risks in AI systems:

 

  • Use a risk assessment framework. A number of risk assessment frameworks are available, such as ISO 31000 and the NIST Cybersecurity Framework. These can help organizations identify and assess risks in a systematic way.

 

  • Consider the potential for unintended consequences. AI systems can sometimes have unexpected effects, so it is important to think about all the possible outcomes.

 

  • Be aware of the latest research on AI risks. The field of AI is evolving quickly, so it is important to stay up to date on the latest research on AI risks.

 

 

Develop controls

Once the risks have been assessed, organizations need to develop controls to mitigate the risks. Like “normal” systems, controls can include technical controls, such as firewalls and intrusion detection systems, and non-technical controls, such as policies and procedures.

 

  • Technical controls: Technical controls can include firewalls, intrusion detection systems, and encryption. These controls can help to protect AI systems from unauthorized access, malicious activity, and data breaches.

 

  • Non-technical controls: Non-technical controls can include policies and procedures, training, and awareness. These controls can help to ensure that AI systems are used safely and responsibly, and that people are aware of the risks associated with AI systems.

 

Implement the controls

Once the controls have been developed, they need to be implemented. This involves putting the controls in place and ensuring that they are working effectively. Implementation can include the following activities:

 

  • Assigning responsibilities: Organizations should assign responsibilities for implementing and maintaining the controls.

 

  • Providing training: Organizations should provide training to employees on how to use the controls.

 

Monitor and evaluate the controls

Once the controls have been implemented, they need to be monitored and evaluated to ensure that they are still effective. This involves reviewing the controls on a regular basis and making changes as needed. Monitoring and evaluation can include the following activities:

 

  • Tracking the performance of the controls: Organizations should track the performance of the controls to make sure they remain effective in mitigating the risks; a minimal tracking sketch follows this list.

 

  • Conducting audits: Organizations should conduct audits to make sure the controls are being implemented and used correctly.

 

  • Making changes to the controls: Organizations should make changes to the controls if they are not effective or if the risks change.
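
As a final illustration, here is a minimal sketch of tracking a control's performance over time, assuming a hypothetical monthly metric such as the selection-rate ratio from the bias check earlier. The threshold and history values are made up for illustration.

THRESHOLD = 0.8  # minimum acceptable metric value for this control

def evaluate_control(history):
    """history: list of (period, metric_value); returns periods needing review."""
    return [(period, value) for period, value in history if value < THRESHOLD]

history = [("2024-01", 0.92), ("2024-02", 0.88), ("2024-03", 0.74)]
for period, value in evaluate_control(history):
    print(f"{period}: metric {value} below {THRESHOLD} -> audit and adjust")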

 

Conclusion

The AI RMF is a valuable resource for organizations that are developing or using AI systems. The document provides guidance on how to identify and manage the risks associated with AI systems, and it can help organizations use AI systems safely and responsibly. Stay tuned for the next entry in this AI Governance series, which looks at the similarities and differences between global recommendations on implementing and using AI within your organization.

 


Graham Thompson is an Information Security professional with over 25 years of enterprise experience across engineering, architecture, assessment and training disciplines. He is the founder and CEO of Intrinsec Security, a training company solely focused on delivering authorized IT security training from partners such as the Cloud Security Alliance, ISC2, ISACA, EC-Council and CompTIA.
