Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

For organizations aiming to scale with artificial intelligence, a new reality is becoming harder to ignore: pilot projects are no longer enough. The focus has shifted to achieving AI maturity, a source of pressure particularly for those committed to doing things ethically, responsibly and with long-term impact in mind. 

Governance, risk management, strategy and audit efforts around AI are increasingly being brought together under a single, more structured umbrella: an AI governance framework. It's no longer a buzzword; it's becoming the foundation for sustainable, scalable AI success.

But a common blind spot emerges in these well-meaning programs: managing the risks in the vendor and third-party ecosystem. The unique risks tied to the AI supply chain often get overlooked. Program managers are busy trying to build comprehensive frameworks and manage complex pipelines. Procurement teams lean heavily on updated contract language and existing relationships with big players. Legal teams feel reassured by stronger clauses around intellectual property and indemnification. However, the deeper risks tied to external dependencies remain under-addressed, especially when there is limited visibility into, or knowledge of, what the third-party system involves or is intended to do.

AI applications are rarely standalone; they rely on deeply entwined third-party ecosystems spanning data sourcing, model training, application programming interfaces and cloud infrastructure. Many organizations fail to recognize the intricate and opaque supply chains that power AI systems and initiatives. As with other disruptive shifts AI is driving, traditional approaches to vendor due diligence and risk management are quickly proving inadequate in this evolving context.

For this reason, downplaying the need to reassess and revamp the third-party risk management approach could lead to serious consequences.

Three key areas of impact

To illustrate the point, we can highlight three key areas that significantly affect any organization's business continuity and risk management.

First, an AI pipeline often ends up relying heavily on third-party tools and services, creating what is commonly called vendor lock-in. This happens when a company becomes so tied to one AI or cloud provider that switching to another is hard. In simple terms, building AI systems from scratch is complex and expensive, especially when it comes to training data and inference, and the major tech players aren't making things easier. Their platforms are often tightly controlled, with proprietary tools, limited access to source code and data formats that aren't easily transferable.

The issue of switching between cloud providers has been a long-standing challenge, one that even prompted the U.K.'s Ofcom to refer the cloud market to the Competition and Markets Authority for investigation. What's changed is the scale and complexity of AI development pipelines, which have made the problem more acute. Add to that the massive investments these tech giants are making, often driven by ambitious revenue targets, and it is clear why vendor lock-in is becoming a bigger concern for organizations.

Second, the fast-moving AI market is putting pressure on every team, from operations to procurement. As a result, strategic planning around third-party and vendor management often gets pushed aside. Existing relationships are preferred, and critical gaps in vendor capabilities are overlooked. In many cases, organizations rely on measures that give a false sense of control, such as overemphasizing contracts while overlooking deeper risks and inefficiencies.

It's no surprise that the digital economy has reshaped procurement and vendor management roles, turning them into de facto risk specialists. But the pressure to keep up with the market is so intense that teams are now leaning on a few extra AI-related questions recently added to their due diligence checklists, often without stopping to ask the hard questions: Do procurement teams actually understand AI well enough to assess it? Are legal, tech and procurement teams working together throughout the process? Who's responsible for ongoing vendor oversight and monitoring?

These aren't easy questions. They require time, effort and cross-functional collaboration to answer properly. A 2023 study by the think tank The GovLab, which interviewed a wide range of public officials taking active roles in AI procurement, paints a clear picture of these challenges, many of which apply just as much to the private sector. One standout issue was the "ambiguous definitions of AI," which complicate everything from writing contracts to assessing risk. Without a shared way to classify AI systems, managing those risks, especially to civil liberties, becomes even harder. Some officials suggested simple improvements to the process, such as using categories like simple vs. compound AI and embedded vs. stand-alone AI, to help make those distinctions clearer.

Third, there is a disconnect between policy and practice, where contracts, procedures, certifications and governance frameworks show little evidence of real-world implementation. The collapse of Builder.ai has been a wake-up call for the market. It exposed a critical vulnerability that has, in fact, been there for a long time: it's nearly impossible to properly audit or verify third-party systems during the due diligence process. Most vendor due diligence relies on declarations and good faith, with organizations trusting the information provided. Visibility into third-party systems is minimal.

These blind spots can lead to serious issues, such as IP infringements from poorly managed training data or hidden malware in third-party AI models. At the same time, verifying practices within these ecosystems is complex and inefficient. Conducting deep audits of every vendor's AI pipeline and data management would be costly and impractical, potentially slowing market growth. Yet relying solely on declarations and certifications carries significant risks, ones that have already resulted in high-profile failures.

Reduce risk

Third-party and vendor management is a critical pillar of AI governance, yet it is often overlooked despite its complexity. There's no single solution, and practical challenges make deep oversight difficult. Organizations can reduce risk, however, by focusing on these key principles:

  • Diversify vendor dependencies. Avoid overreliance on one provider. Use multicloud or hybrid strategies and build contingency plans.
  • Strengthen cross-functional collaboration. Align procurement, legal and technical teams. Define shared standards for AI risk, harm assessment and accountability.
  • Adopt dynamic governance. Move beyond static contracts. Implement continuous monitoring, technical audits and adaptive frameworks.
  • Enhance transparency and verification. Push for vendor transparency and independent audits where feasible. Don't rely solely on declarations.
  • Invest in AI literacy. Equip teams with foundational AI knowledge to make informed decisions on data, models and ethics.
  • Plan for ethical and societal impact. Include ethical and reputational risks in vendor assessments as strategic priorities.

Merve Gozukucuk-Ugurlu, CIPP/E, CIPM, is senior manager, data privacy and AI governance lead, at Protiviti.