REPORT

AI Governance in Practice Report 2024

This report by the IAPP and FTI Consulting aims to inform AI governance professionals of the most significant challenges to be aware of when building and maturing an AI governance program.


Published: 3 June 2024

Recent and rapidly advancing breakthroughs in machine learning technology have forever transformed the landscape of AI.

AI systems have become powerful engines capable of learning autonomously across vast swaths of information and generating entirely new data. As AI sophistication surges, society finds itself in the midst of significant disruption and at the start of a new era of technological innovation.

As businesses grapple with a future in which the boundaries of AI only continue to expand, their leaders face the responsibility of managing the various risks and harms of AI, so its benefits can be realized in a safe and responsible manner.

Critically, these benefits are accompanied by serious considerations and concerns about the safety of this technology and the potential for it to disrupt the world and negatively impact individuals when left unchecked. Confusion about how the technology works, the introduction and proliferation of bias in algorithms, dissemination of misinformation, and privacy rights violations represent only a sliver of the potential risks.

The practice of AI governance is designed to tackle these issues. It encompasses the growing combination of principles, laws, policies, processes, standards, frameworks, industry best practices and other tools incorporated across the design, development, deployment and use of AI.

While relatively new, the field of AI governance is maturing. Government authorities around the world have begun to develop targeted regulatory requirements, and governance experts are supporting the creation of accepted principles, such as the Organisation for Economic Co-operation and Development's AI Principles, along with emerging best practices and tools for various uses of AI across different domains.

There are many challenges and potential solutions for AI governance, each with unique proximity and significance based on an organization's role, footprint, broader risk-governance profile and maturity. This report aims to inform the growing, increasingly empowered and increasingly important community of AI governance professionals about the most common and significant challenges to be aware of when building and maturing an AI governance program. It offers actionable, real-world insights into applicable law and policy, a variety of governance approaches, and tools used to manage risk. Indeed, some of the challenges to AI governance overlap and run through a range of themes. Therefore, an emerging solution for one thematic challenge may also be leveraged for another. Conversely, in certain circumstances, specific challenges and associated solutions may conflict and require reconciliation with other approaches. Some of these potential overlaps and conflicts have been identified throughout the report.

Questions about whether and when organizations should prioritize AI governance are being answered: "yes" and "now," respectively. This report, therefore, focuses on how organizations can approach, build and leverage AI governance amid the increasingly voluminous and complex landscape of applicable law and policy.

Given the complexity and transformative nature of AI, lawmakers and policymakers have produced what is now a vast and growing body of principles, laws, policies, frameworks, declarations, voluntary commitments, standards and emerging best practices that can be challenging to navigate. Many of these sources interact with one another, either directly or by virtue of the issues they cover.

The following are examples of some of the most prominent and consequential AI governance efforts:

With private investment, global adoption rates and regulatory activity on the rise, and with the technology maturing, AI is becoming a strategic priority for organizations and governments worldwide. Organizations of all sizes and industries are engaging with AI systems at various stages of the technology product supply chain.

[Figure: Global AI private investment (USD billion), 2021]
The exceptional dependence on high volumes of data and the endless practical applicability that make AI technology a disruptive opportunity can also generate uniquely multifaceted risks for businesses and individuals. These include legal, regulatory, reputational and financial risks to organizations, as well as risks to individuals and wider society.
AI Risks
  • Individuals and society: Risk of bias or other detrimental impact on individuals.
  • Legal and regulatory: Risk of noncompliance with legal and contractual obligations.
  • Financial: Risk of financial implications, e.g., fines, legal or operational costs, or lost profit.
  • Reputational: Risk of damage to reputation and market competitiveness.
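To operationalize these categories, many programs track them in a risk register. The sketch below is a minimal illustration in Python, not a prescribed methodology: the four categories mirror the list above, while the `RiskEntry` structure and the likelihood-by-impact scoring scale are hypothetical simplifications.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The four risk categories named in the report."""
    INDIVIDUALS_AND_SOCIETY = "Individuals and society"
    LEGAL_AND_REGULATORY = "Legal and regulatory"
    FINANCIAL = "Financial"
    REPUTATIONAL = "Reputational"


@dataclass
class RiskEntry:
    """One row in a hypothetical AI risk register."""
    system: str          # the AI system or use case under review
    category: RiskCategory
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain); illustrative scale
    impact: int          # 1 (negligible) .. 5 (severe); illustrative scale

    @property
    def score(self) -> int:
        # A common likelihood-by-impact heuristic; real programs would
        # apply their own enterprise risk methodology here.
        return self.likelihood * self.impact


# Example usage: register and rank risks for a hypothetical screening model.
register = [
    RiskEntry("resume-screener", RiskCategory.INDIVIDUALS_AND_SOCIETY,
              "Potential bias against protected groups", likelihood=3, impact=5),
    RiskEntry("resume-screener", RiskCategory.LEGAL_AND_REGULATORY,
              "Noncompliance with applicable AI and employment law",
              likelihood=2, impact=4),
]

for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.category.value}: {entry.description} (score {entry.score})")
```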
Understanding that AI systems, like all products, follow a life cycle is important, as governance considerations arise at every phase. The National Institute of Standards and Technology's AI Risk Management Framework sets out a comprehensive articulation of the AI system life cycle, including considerations for testing, evaluation, validation and verification, and the key stakeholders for each phase. A simplified sample life cycle is included in this report, along with some top-level considerations.
[Figure: The AI life cycle]
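To make those considerations concrete, the sketch below attaches governance checkpoints to a simplified four-phase life cycle drawn from the report's framing of the design, development, deployment and use of AI. The phase names follow the report; the specific checkpoints are assumptions for illustration, not an excerpt from the NIST AI Risk Management Framework.

```python
# A minimal sketch of life-cycle-aligned governance checkpoints.
# The four phases come from the report's framing ("design, development,
# deployment and use"); the checkpoints themselves are illustrative.

LIFECYCLE_CHECKPOINTS: dict[str, list[str]] = {
    "design": [
        "Define intended use and prohibited uses",
        "Assess data provenance and privacy obligations",
    ],
    "development": [
        "Test for bias across relevant groups",
        "Validate and verify model behavior against requirements",
    ],
    "deployment": [
        "Complete pre-release risk review and sign-off",
        "Document system capabilities and limitations for users",
    ],
    "use": [
        "Monitor performance drift and incident reports",
        "Re-evaluate risks when the use case or data changes",
    ],
}


def checkpoints_for(phase: str) -> list[str]:
    """Return the governance checkpoints for a life-cycle phase."""
    return LIFECYCLE_CHECKPOINTS.get(phase.lower(), [])


if __name__ == "__main__":
    for phase, items in LIFECYCLE_CHECKPOINTS.items():
        print(phase.upper())
        for item in items:
            print(f"  - {item}")
```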

Organizations may seek to leverage existing organizational risk frameworks to tackle AI risk at the enterprise, product and operational levels. Tailoring the approach to AI governance to specific AI product risks, business needs and broader strategic objectives can help organizations establish the building blocks of trustworthy and responsible AI. A key goal of an AI governance program is to facilitate responsible innovation: flexibly adapting existing governance processes lets businesses explore the disruptive competitive opportunities AI technologies present while minimizing the associated financial, operational and reputational risks.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.

Contributors:

Joe Jones

Research and Insights Director, IAPP

Uzma Nazir Chaudhry

Former AI Governance Center Fellow, IAPP

CIPP/E

Ashley Casovan

Managing Director, AI Governance Center, IAPP


Tags:

AI and machine learning, Ethics, Program management, Risk management, Strategy and governance, Testing and evaluation, Technology, AI governance