Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

From viral social media trends to philosophical debates on socio-environmental impacts, artificial intelligence and its applications are widely recognized as among the world's most disruptive innovations. While there is broad consensus on the need for AI regulation, the challenge lies in designing frameworks that can keep pace with rapid technological advances.

That is why risk-based approaches — like the EU AI Act — have emerged as a promising solution.

Instead of regulating the technology itself, or some of its characteristics, risk-based regulation focuses on the impacts of AI applications on specific individuals or on society as a whole. Yet foundational questions remain: What constitutes an AI risk? And how can different frameworks be compared or harmonized?

Attempts to organize AI risks are not in short supply. From industry white papers to academic proposals and regulatory definitions, the field is abundant in models that promise to capture what could go wrong when intelligent systems meet the real world.

Yet, paradoxically, this wealth of frameworks can leave professionals more confused than confident. Overlapping terms, inconsistent categories and diverging methodologies make it difficult to compare risks, prioritize mitigation efforts, or align practices with regulatory expectations.

Enter the MIT AI Risk Repository. Rather than creating a new model from scratch, the repository synthesizes existing work, acting as a meta-framework that brings order to a fragmented field.

It is presented as a living inventory of risks, built on the results of a systematic review of 65 documents — ranging from academic papers to policy guidelines and technical assessments. Through this synthesis, researchers identified and cataloged more than 1,600 distinct risk formulations.

The repository doesn't offer a static list of dangers. Instead, it presents a flexible taxonomy that enables classification, comparison and aggregation of risks according to two main axes: causality and domain.

The Causal Taxonomy of AI Risks classifies each risk according to three dimensions: the type of entity responsible — human, model, socio-technical system; whether the risk is intentional or not; and the moment it arises along the AI life cycle — pre-deployment, deployment, post-deployment.

The Domain Taxonomy of AI Risks, in turn, organizes risks across seven broad domains and 24 subdomains, reflecting the spheres of human life and systemic structures that can be affected by AI applications.

The seven domains include: discrimination and toxicity, privacy and security, misinformation, malicious actors and misuse, human-computer interaction, socioeconomic and environmental harms, and AI system safety, failures and limitations.

These domains are further divided into the 24 subdomains — for example, "exposure to toxic content," "system vulnerabilities," "pollution of the information ecosystem," or "AI pursuing its own goals in conflict with human intent."
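To make the two axes concrete, the sketch below shows one hypothetical way a single risk entry could be represented in code, classified along both the causal and the domain taxonomies. It is a minimal illustration only: the class and field names are assumptions for this example, and the category labels follow the article's own description rather than the repository's official schema.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative labels based on the article's description of the causal taxonomy;
# the repository itself defines the authoritative categories.
class Entity(Enum):
    HUMAN = "human"
    MODEL = "model"
    SOCIO_TECHNICAL_SYSTEM = "socio-technical system"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    DEPLOYMENT = "deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    """One cataloged risk, classified along the causal and domain axes."""
    description: str
    entity: Entity      # which entity is responsible for the risk
    intent: Intent      # whether the harm is intentional or not
    timing: Timing      # where in the AI life cycle the risk arises
    domain: str         # one of the seven broad domains
    subdomain: str      # one of the 24 subdomains

# Example: an unintentional, post-deployment risk attributed to the model itself.
toxic_output = RiskEntry(
    description="System exposes users to toxic or harmful content",
    entity=Entity.MODEL,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Discrimination and toxicity",
    subdomain="Exposure to toxic content",
)
```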

For privacy and compliance professionals, the repository offers a conceptual map that bridges the gap between AI risk theory — and its numerous iterations and variations — and the specific obligations found in legal frameworks such as the EU General Data Protection Regulation, the EU AI Act and forthcoming regional regulations.

Many of the risks cataloged, from algorithmic discrimination to lack of transparency and failures in accountability, have clear parallels in existing legal requirements like fairness, explainability and security by design. As a result, the repository can serve as a valuable resource when conducting data protection impact assessments, auditing AI systems, or designing governance structures.

It also supports efforts to test AI system compliance under Article 22 of the GDPR or under Brazilian regulatory guidelines concerning automated decision-making. In Brazil, Bill 2338/2023 follows a similar approach, emphasizing the importance of AI governance structures and requiring risk management mechanisms for high-risk systems — an area where the AI Risk Repository could directly support risk classification and mitigation planning.
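As one concrete illustration of that bridging role, the sketch below pairs a few subdomain-style labels with the kinds of legal reference points a data protection impact assessment or audit checklist might cite. Both the subdomain names and the mappings are assumptions made for this example; they are not an official crosswalk from the repository, a regulator, or the authors, and they are not legal advice.

```python
# Hypothetical crosswalk for DPIA or audit work: repository-style subdomain labels
# mapped to the legal reference points a team might check. Illustrative only.
SUBDOMAIN_TO_OBLIGATIONS = {
    "Unfair discrimination and misrepresentation": [
        "GDPR Art. 5(1)(a) fairness principle",
        "EU AI Act requirements for high-risk systems",
    ],
    "Compromise of privacy": [
        "GDPR Arts. 5, 25 and 32 (data protection by design, security)",
    ],
    "Lack of transparency or interpretability": [
        "GDPR Arts. 13-15 and 22 (information duties, automated decisions)",
        "EU AI Act transparency obligations",
    ],
}

def obligations_for(subdomain: str) -> list[str]:
    """Return the legal reference points associated with a subdomain, if any."""
    return SUBDOMAIN_TO_OBLIGATIONS.get(subdomain, [])

print(obligations_for("Lack of transparency or interpretability"))
```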

The systematic analysis presented in the AI Risk Repository yields several important insights, some of which are highlighted in the document's Plain Language Summary.

First, the majority of identified risks are post-deployment phenomena, at 62%, suggesting traditional forms of ex ante control may not be sufficient to address the bulk of real-world harms. Second, most risks are unintentional, which underscores the importance of systemic thinking, inclusive design and ongoing monitoring. Among the domains, three stood out for their frequency: AI system safety, failures and limitations at 26%, socioeconomic and environmental harms at 19%, and discrimination and toxicity at 15%.

These findings are not only statistically relevant but politically charged, reflecting both technical challenges and long-standing social debates. At the same time, other areas — such as AI welfare, competitive dynamics and pollution of the information ecosystem — appeared underexplored, pointing to blind spots that merit further investigation.

Within the most discussed domains, the subdomains most frequently cited include "lack of capability or robustness," "exposure to toxic content," and "AI pursuing its own goals." In contrast, subdomains such as "AI welfare and rights," "competitive dynamics," and "pollution of the information ecosystem" were significantly underrepresented, each accounting for 1% or less of total risks.

These findings are not just interesting; they are actionable. The AI Risk Repository can be applied in several specific ways by professionals working at the intersection of AI, privacy and ethics. First, it provides a structured vocabulary to map risks across the AI life cycle, enabling teams to identify risks not only at the design stage, but also during development, deployment and post-deployment monitoring.

Second, it can support regulatory engagement — for instance, by grounding public consultation responses in a shared language of risk, or by informing internal positions in response to enforcement actions.

Third, the repository helps teams prioritize risk mitigation efforts by drawing attention to high-frequency or high-impact risks; a simple illustration of this idea follows below.

And finally, it can serve as a training and capacity-building tool: the taxonomies proposed are accessible enough to be shared across legal, technical, and operational teams, promoting alignment and shared understanding.
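Returning to the prioritization point above, the sketch below shows one naive way to operationalize it: ranking risks by a simple frequency times impact score. The entries and numbers are made up for illustration; a real assessment would use the organization's own scales alongside the repository's frequency data.

```python
# Naive prioritization pass: rank risks by frequency x impact.
# Frequencies and impact scores here are illustrative placeholders only.
risks = [
    {"name": "Exposure to toxic content", "frequency": 0.15, "impact": 3},
    {"name": "Lack of capability or robustness", "frequency": 0.26, "impact": 4},
    {"name": "Pollution of the information ecosystem", "frequency": 0.01, "impact": 2},
]

for risk in sorted(risks, key=lambda r: r["frequency"] * r["impact"], reverse=True):
    print(f'{risk["name"]}: score {risk["frequency"] * risk["impact"]:.2f}')
```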

For companies developing or deploying AI systems, the repository can also guide product teams in pre-launch risk assessments, support procurement processes with due diligence checklists, or help legal departments anticipate obligations under upcoming AI legislation. Its structure makes it adaptable to different organizational maturities, from startups to highly regulated enterprises.

Ultimately, initiatives such as the AI Risk Repository are relevant for advancing responsible AI governance. As technology continues to evolve — and as regulatory frameworks try to catch up — having a shared, dynamic and well-structured map of risks becomes indispensable.

But tools alone are not enough. Translating this map into operational practice requires institutions willing to embed risk thinking into their processes and professionals able to interpret and apply these insights with nuance.

In this sense, professional communities and legal experts in the fields of data protection, human rights and compliance play a critical role as translators and interpreters — the bridge-builders between the framework and everyday practice. Through their expertise, taxonomies can be transformed into checklists, insights into safeguards, and abstract categories into real protections.

As such, the AI Risk Repository should not be seen as a finished product, but rather as a foundation that invites adoption, adaptation and contribution.

Maria Beatriz Previtali is an associate lawyer and Tiago Neves Furtado, CIPP/E, CIPM, CDPO/BR, FIP, leads the Data Protection and Artificial Intelligence and Incident Response teams at Opice Blum Advogados.