
AI Governance Series: Singapore PDPC

Introduction

This blog is part of a series on AI Governance recommendations from global organizations. The goal of the series is to look at AI governance recommendations from around the world and to understand the similarities and differences across the various organizations. In this blog, we break down the Singapore Model AI Governance Framework from the Personal Data Protection Commission.

 

The Singapore Model AI Governance Framework (SG-MAF) is a set of principles and guidelines for the responsible use of artificial intelligence (AI). The framework was developed by the Personal Data Protection Commission (PDPC) of Singapore, and it is designed to help organizations ensure that their AI systems are fair, transparent, and accountable.

 

The SG-MAF is divided into four main pillars: Governance, Data, Algorithm, and Human Oversight. Each of these pillars is covered below:

 

Pillar One: AI Governance

This pillar focuses on the need for organizations to have a clear governance framework for their AI systems. This framework should include:

Policy on the use of AI

This policy should define the organization’s overall approach to the use of AI, and should set out the principles that will guide the development, deployment, and use of AI systems.

Procedure for the Development of AI Systems

This procedure should set out the steps that will be taken to develop AI systems, and should ensure that the systems are developed in a fair, transparent, and accountable way.

Procedure for the Deployment of AI Systems

This procedure should set out the steps that will be taken to deploy AI systems, and should ensure that the systems are deployed in a way that minimizes the risk of harm to individuals.

Procedure for the Use of AI Systems

This procedure should set out the rules and guidelines that will govern the use of AI systems, and should ensure that the systems are used in a fair, transparent, and accountable way.

Pillar Two: Data

 

This pillar focuses on the need for organizations to use high-quality data in their AI systems. The data should be accurate, complete, and representative of the population that the AI system is intended to serve. Data quality is the foundation of a trustworthy AI system, and it can be addressed using the following approaches:

Use a Variety of Data Sources

No single data source is perfect. By using a variety of data sources, you can help to ensure that your AI system is trained on a wide range of information and is less likely to be biased.

Clean and Preprocess your Data

Before you train your AI system, it’s important to clean and preprocess your data. This means removing any errors or inconsistencies in the data and transforming it into a format that your AI system can understand.
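As a rough illustration of what cleaning and preprocessing can look like in practice, the Python sketch below deduplicates records, treats impossible values as missing, imputes the gaps, and scales numeric features. The dataset, column names, and rules are hypothetical and not part of the SG-MAF.

```python
import pandas as pd

# Hypothetical loan-application dataset; the columns are illustrative only.
df = pd.DataFrame({
    "age": [34, 51, None, 29, 29],
    "income": [52000, -1, 61000, 48000, 48000],
    "approved": [1, 0, 1, 0, 0],
})

# Remove exact duplicate records.
df = df.drop_duplicates()

# Treat impossible values as missing (e.g. a negative income).
df.loc[df["income"] < 0, "income"] = None

# Impute missing numeric values with the column median.
df = df.fillna(df.median(numeric_only=True))

# Scale numeric features to a common range so no single feature dominates.
for col in ["age", "income"]:
    df[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())

print(df)
```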

Monitor your AI System’s Performance

Once your AI system is trained, it’s important to monitor its performance over time. This will help you to identify any problems with the data or the AI system itself and make necessary adjustments.
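A minimal sketch of ongoing monitoring, assuming you can collect true outcomes for batches of production predictions, is to track a metric such as accuracy per batch and flag any batch that falls below an agreed threshold. The threshold and data below are purely illustrative.

```python
from sklearn.metrics import accuracy_score

# Hypothetical weekly batches of (true labels, model predictions).
weekly_batches = [
    ([1, 0, 1, 1, 0], [1, 0, 1, 1, 0]),   # week 1
    ([1, 1, 0, 1, 0], [1, 0, 0, 1, 0]),   # week 2
    ([0, 1, 1, 0, 1], [1, 1, 0, 0, 0]),   # week 3: performance degrades
]

ALERT_THRESHOLD = 0.7  # illustrative cut-off for triggering a review

for week, (y_true, y_pred) in enumerate(weekly_batches, start=1):
    acc = accuracy_score(y_true, y_pred)
    status = "OK" if acc >= ALERT_THRESHOLD else "REVIEW NEEDED"
    print(f"week {week}: accuracy={acc:.2f} -> {status}")
```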

Use a Variety of Evaluation Metrics

When evaluating your AI system’s performance, it’s important to use a variety of metrics. This will help you to get a more complete picture of the system’s performance and identify any areas that need improvement.
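For a binary classifier, that might look like the sketch below, which reports accuracy, precision, recall, and F1 side by side; accuracy alone can hide poor performance on a minority class, while precision and recall expose the different error types. The labels and predictions are made up for illustration.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical labels and predictions from a binary classifier.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"accuracy : {accuracy_score(y_true, y_pred):.2f}")
print(f"precision: {precision_score(y_true, y_pred):.2f}")
print(f"recall   : {recall_score(y_true, y_pred):.2f}")
print(f"f1 score : {f1_score(y_true, y_pred):.2f}")
```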

 

By following these tips, you can help to ensure that your AI systems are trained on high-quality data and are less likely to be biased. Here are some additional considerations that organizations should take into account when ensuring high-quality data in their AI systems:

 

Data Collection

The organization should ensure that data is collected in a fair and transparent way, and that individuals have the right to access and correct their data.

Data Preparation

The organization should ensure that data is prepared in a way that is appropriate for the AI system, and that any biases in the data are mitigated.

Data Storage

The organization should ensure that data is stored securely and that access to the data is restricted to authorized personnel.

Data Governance

The organization should establish a data governance framework to ensure that data is managed in a consistent and secure way.

 

Pillar Three: Algorithm

 

This pillar focuses on the need for organizations to use algorithms that are fair and unbiased. The algorithms should be transparent and auditable, and they should be subject to regular review.

 

There are a number of factors that organizations should consider when choosing algorithms for their AI systems. These factors include:

 

The Purpose of the AI System

The organization should consider the purpose of the AI system and the type of decisions that it will be making.

The Type of Data the AI System Will Use

The organization should consider the type of data that the AI system will use and the potential for bias in the data.

The Complexity of the AI System

The organization should consider the complexity of the AI system and the resources that will be required to develop and maintain it.

The Cost of the AI System

The organization should consider the cost of the AI system and the benefits that it is expected to provide.

 

Once an organization has chosen an algorithm, it is important to test the algorithm to ensure that it is fair and unbiased. This can be done using a variety of methods, such as the following (a brief sketch appears after the list):

 

Testing the Algorithm on a Variety of Data Sets

This will help to identify any biases in the algorithm.

Using a Variety of Evaluation Metrics

This will help to get a more complete picture of the algorithm’s performance and identify any areas that need improvement.

Having Humans Review the Algorithm’s Decisions

This will help to identify any biases in the algorithm that may not be apparent from the data.
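As a hedged illustration of the first two methods combined, the sketch below computes the same metric separately for two hypothetical demographic groups; a large gap between the group-level scores is a signal of possible bias that should be escalated for human review. The data and group attribute are invented for this example.

```python
from sklearn.metrics import recall_score

# Hypothetical outcomes: (group attribute, true label, model prediction).
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0),
]

for group in ("A", "B"):
    y_true = [t for g, t, _ in records if g == group]
    y_pred = [p for g, _, p in records if g == group]
    # Recall per group approximates "equal opportunity": do qualified
    # members of each group receive positive decisions at similar rates?
    print(f"group {group}: recall={recall_score(y_true, y_pred):.2f}")
```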

By following these steps, organizations can help to ensure that their AI systems are using algorithms that are fair and unbiased. In addition to the factors listed above, organizations should also consider the following when choosing algorithms for their AI systems:

 

The Explainability of the Algorithm

The organization should consider whether the algorithm is explainable, meaning that it is possible to understand how the algorithm makes decisions. This is important for ensuring that the algorithm is fair and unbiased, and for providing transparency to users of the AI system (a short sketch of one explainability technique appears at the end of this section).

The Robustness of the Algorithm

The organization should consider whether the algorithm is robust to changes in data or to attacks from malicious actors. This is important for ensuring that the AI system is reliable and that its decisions are not easily manipulated.

By taking these factors into account, organizations can help to choose algorithms that are fair, unbiased, explainable, and robust.
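Below is a minimal sketch of one explainability technique, permutation importance, using scikit-learn on synthetic data. The SG-MAF does not prescribe a particular method; this is simply one way to see which features drive a model's decisions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for idx, score in enumerate(result.importances_mean):
    print(f"feature_{idx}: importance={score:.3f}")
```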

 

Pillar Four: Human Oversight

This pillar focuses on the need for organizations to have human oversight of their AI systems. Humans should be able to intervene in cases where AI systems make decisions that are considered to be unfair or biased. There are a number of ways to implement this oversight; some common methods include:

 

Having Humans Review AI System Decisions

This is the most common method of human oversight. Humans can review AI system decisions to ensure that they are fair and unbiased.
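One common pattern, sketched below under the assumption that the model exposes a confidence score for each decision, is to apply high-confidence decisions automatically and queue the rest for a human reviewer. The threshold and case data are illustrative.

```python
from dataclasses import dataclass

# Illustrative threshold: decisions below this confidence go to a human.
REVIEW_THRESHOLD = 0.80

@dataclass
class Decision:
    case_id: str
    outcome: str       # the AI system's proposed decision
    confidence: float  # the model's confidence in that decision

decisions = [
    Decision("case-001", "approve", 0.97),
    Decision("case-002", "deny", 0.62),
    Decision("case-003", "approve", 0.55),
]

auto_applied = [d for d in decisions if d.confidence >= REVIEW_THRESHOLD]
human_review = [d for d in decisions if d.confidence < REVIEW_THRESHOLD]

for d in human_review:
    print(f"{d.case_id}: '{d.outcome}' queued for human review "
          f"(confidence {d.confidence:.2f})")
```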

Having Humans Train AI Systems

Humans can train AI systems to ensure that they are fair and unbiased. This can be done by providing the AI system with examples of fair and unbiased decisions.

Having Humans Develop AI Systems

Humans can develop AI systems to ensure that they are fair and unbiased. This can be done by carefully considering the factors that could lead to bias in the AI system.

The level of human oversight that is required will depend on the nature of the AI system and the potential impact that it could have on individuals. For example, an AI system that is used to make decisions about who gets a loan may require more human oversight than an AI system that is used to recommend products.

 

By implementing human oversight of their AI systems, organizations can help to ensure that their AI systems are used in a fair, transparent, and accountable way. Here are some additional considerations that organizations should take into account when implementing human oversight of their AI systems:

 

The Purpose of the AI System

The organization should consider the purpose of the AI system and the potential impact that it could have on individuals.

The Type of Data That the AI System Will Use

The organization should consider the type of data that the AI system will use and the potential for bias in the data.

The Algorithms That the AI System Will Use

The organization should consider the algorithms that the AI system will use and the potential for bias in the algorithms.

The Level of Human Oversight

The organization should consider the level of human oversight that will be required for the AI system.

The Process for Monitoring and Evaluating the AI System

The organization should develop a process for monitoring and evaluating the AI system to ensure that it is performing as intended and that it is not causing any harm to individuals.

 

Conclusion

The SG-MAF is a valuable resource for organizations that are developing or using AI systems. By following the principles and guidelines in the framework, organizations can help to ensure that their AI systems are used in a fair, transparent, and accountable way. We will continue with this AI Governance series in upcoming entries.


Graham Thompson is an Information Security professional with over 25 years of enterprise experience across engineering, architecture, assessment and training disciplines. He is the founder and CEO of Intrinsec Security, a leading training company that is solely focused on delivering leading authorized IT security training from partners such as the Cloud Security Alliance, ISC2, ISACA, EC-Council and CompTIA.
