July 11, 2024

Implementing AI Governance: A Focus on Risk Management

An Introduction to AI Governance

Since the launch of ChatGPT, AI tools have become far more widely used across organisations. This technology opens up many new opportunities, such as automating customer service or improving content creation. However, it also introduces significant risks. Press headlines frequently raise concerns such as deepfakes, ethical dilemmas around AI replacing artists, copyright disputes between LLM providers and media companies, and privacy issues.

The growing use of Generative AI tools has increased the need for strong AI Governance practices in organisations. Many organisations are now drafting policies concerning the use of ChatGPT in their business activities to prevent leaks of confidential data. Internally, they share guidelines to ensure employees can adopt the technology while minimising risks.

However, AI Governance itself is not a new concept. AI Governance refers to the set of frameworks, policies, and processes that ensure AI systems are designed, deployed, used, and managed in ways that maximise benefits while preventing harm. In practice, AI Governance operates on multiple levels, from formulating AI policy to raising awareness about AI among employees through internal training programs.

The first step in implementing AI Governance is to create an overview of the organisation's current and future AI projects. This overview provides insight into AI usage, the associated risks, and compliance with legal and ethical standards. One approach is to use risk management frameworks to identify potential risks and develop mitigation strategies. This process can help identify recurring risks and translate global policies into practical guidelines. In this blog post, we will focus specifically on risk management frameworks and share our insights.
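
To make this first step concrete, here is a minimal sketch of what such an AI project register could look like in Python. The fields, status values, and risk levels are illustrative assumptions on our part, not a schema prescribed by any framework discussed in this post.

from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    # Illustrative levels, loosely inspired by risk-based classification schemes
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3


@dataclass
class AIProject:
    # One entry in a hypothetical AI project register
    name: str
    purpose: str
    status: str                                # e.g. "planned", "in development", "in production"
    risks: list = field(default_factory=list)  # free-text risk descriptions
    risk_level: RiskLevel = RiskLevel.MINIMAL


# A register with two fictional projects
register = [
    AIProject(
        name="Customer service chatbot",
        purpose="Automate first-line customer support",
        status="in development",
        risks=["hallucinations", "leakage of confidential data"],
        risk_level=RiskLevel.LIMITED,
    ),
    AIProject(
        name="CV screening assistant",
        purpose="Pre-screen incoming job applications",
        status="planned",
        risks=["bias against protected groups", "lack of transparency"],
        risk_level=RiskLevel.HIGH,
    ),
]

# Overview: projects that need the closest follow-up come first
for project in sorted(register, key=lambda p: p.risk_level.value, reverse=True):
    print(f"{project.name} ({project.status}): {', '.join(project.risks)}")

Even a simple register like this makes it easier to spot recurring risks across projects and to decide where a deeper assessment is needed.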

Overview of Risk Management Frameworks

When searching for “risk management frameworks for AI” on Google, you will quickly discover that many such frameworks already exist. Several respected research organisations have published their own frameworks for AI Governance and Risk Management. Instead of starting from scratch, we have built our Risk Management Framework on insights from these existing frameworks and on our hands-on experience with client projects. This section takes a closer look at the existing frameworks used for risk management.

When considering risk management frameworks, it's important to take their varying nature and emphasis into account. Some frameworks focus on risk identification, offering tools for risk assessment and evaluation, while others focus on mitigating risks, providing guidelines for addressing specific risks. In this blog post, we consider risk management frameworks that combine both risk identification and risk mitigation efforts.

In 2019, the High-Level Expert Group on AI (AI HLEG) released ethics guidelines for Trustworthy AI and an Assessment List for Trustworthy Artificial Intelligence (ALTAI). The Assessment List includes a series of questions to evaluate potential risks across seven dimensions: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. These questions encourage looking at AI risks from all angles, surfacing issues you might not think of at first. But answering each one can be time-consuming and may not be necessary for AI projects with minimal risks. Also, because the guidelines were published in 2019, they don't cover newer developments like Generative AI or the EU AI Act.

More recently, in April 2024, the National Institute of Standards and Technology (NIST) introduced the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile”. The framework is an important resource for organisations seeking to identify and mitigate risks associated with Generative AI. It specifically targets risks such as hallucinations, harmful content, data privacy, intellectual property, and toxicity. The framework also provides a list of guidelines to address these risks, organised around governance, mapping, measurement, and management processes. The focus is thus more on procedural aspects, such as documentation, transparency, and metrics, than on technical implementation. Since NIST is an American organisation, the framework does not map directly onto the requirements of the EU AI Act, although many of its recommendations remain applicable and overlap with those of the Act.

The Alan Turing Institute released a series of eight workbooks to help the public sector apply AI Ethics and Governance principles when designing, developing, and deploying AI solutions safely. The workbooks introduce AI and ethics concepts and include many practical templates and activities for organisations to identify risks. The researchers advocate a Process-Based Governance approach that integrates principles such as accountability, fairness, explainability, and data stewardship throughout every stage of the ML project lifecycle.

In Flanders, Digitaal Vlaanderen also published Guidelines for the use of publicly accessible Generative AI. In addition to these guidelines, they provide examples demonstrating how this technology can be used by employees of the Flemish Government, along with tips for writing prompts.

Applying Risk Management Frameworks in Practice

At ML6, we have conducted multiple risk assessments and recommended how our clients can address the identified risks. In this section, we share practical insights and tips gained from this hands-on experience.

  • Continuous analysis: Ethical risk assessment isn’t a one-time task. It’s important to conduct an initial analysis at the start of the project to understand and estimate potential risks before development begins. But the process shouldn’t end there. As you actively develop your AI system, you’ll gain new insights, such as a deeper understanding of data sources and feedback from users. It’s not unusual for project scopes to evolve during development, making it important to re-evaluate the results of the initial analysis and ensure they still align with these changes.

  • Context is key: It’s important to start by describing the context of your AI project; don’t rush into identifying potential risks. Begin by understanding the project’s motivation, target audience, and its impact on the organisation and society. A clear context and defined scope help identify relevant risks associated with the AI project.

  • Risk matrix: Not all risks carry the same weight. When answering the questions from the ALTAI, you are likely to end up with a long list of risks and potential mitigation strategies. The challenge then becomes deciding where to focus first. By employing a risk matrix that considers both the probability and the consequences of each risk, you can prioritise and allocate resources to the risks with the highest probability and the most severe consequences (see the sketch after this list).

  • Staying informed about technological and legal developments: The AI landscape evolves rapidly, so it is necessary to stay aware of the latest technological and legal developments. For example, the 2019 guidelines for Trustworthy AI predate regulations such as the AI Act and technologies such as Generative AI, which introduce additional risks. Think of copyright infringement or a lack of transparency when building chatbots on Large Language Models.

  • Collaborative approach: AI Governance should involve stakeholders from diverse backgrounds, including technologists, ethicists, legal experts, and end-users. This ensures a comprehensive understanding of both risks and benefits. The same collaborative approach is echoed in the Dutch Government’s vision on the responsible use of Generative AI, which emphasises the need for collective action to address the opportunities and challenges of Generative AI, foster public trust, and facilitate open discussions across sectors.
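
As promised in the risk matrix tip above, here is a minimal sketch of that idea in Python. The 1-5 scales, the score formula, and the example risks are illustrative assumptions, not part of ALTAI or any other framework mentioned in this post.

from dataclasses import dataclass


@dataclass
class Risk:
    description: str
    probability: int   # 1 = rare ... 5 = almost certain (assumed scale)
    consequence: int   # 1 = negligible ... 5 = severe (assumed scale)

    @property
    def score(self) -> int:
        # A common convention: score = probability x consequence
        return self.probability * self.consequence


# Fictional risks identified during an assessment
risks = [
    Risk("Chatbot hallucinates answers to customers", probability=4, consequence=3),
    Risk("Training data contains personal data", probability=2, consequence=5),
    Risk("Model output infringes copyright", probability=2, consequence=3),
]

# Prioritise: highest scores first, so mitigation effort goes where it matters most
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[score {risk.score:2d}] {risk.description}")

Sorting by the combined score is a simple way to decide where mitigation effort should go first; in practice, you may also want to flag any risk with severe consequences regardless of its probability.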

A final word

Risk management frameworks are only one part of a broader AI Governance framework. Given the opportunities and risks presented by new technological advancements, it's important to adopt a proactive approach to AI Governance. It not only gives employees clarity but also ensures that risks are mitigated before any reputational damage occurs. If you're interested in exploring how AI Governance could be implemented to meet the specific needs of your organisation, feel free to reach out.