Nigeria moves toward comprehensive AI regulation

Nigeria is a strong proponent of AI regulation, as evidenced by its repeated legislative attempts to regulate AI and the governance proposals set out in its National AI Strategy.

Contributors:
Victoria Adaramola
CIPP/E, CIPM, FIP
Tech policy analyst
Tech Hive Advisory
The age of artificial intelligence has unlocked new opportunities, as well as significant challenges, and many countries are facing the dilemma of whether to allow innovation to progress freely or introduce safeguards to manage AI-related risks and protect users. The European Union took the lead with its 2024 AI Act, and several African countries — including Angola, Egypt, Kenya and Morocco — have since adopted similar stances toward governing AI through national strategies, policy frameworks and regulation.
In Nigeria, the Artificial Intelligence and Robotics Research Regulatory Agency Bill was introduced in 2021 but did not complete its legislative cycle. In 2023, the National AI and Robotics Sciences Bill and the Control of Usage of Artificial Intelligence Technology Bill were introduced in the House of Representatives, both passing their first readings. In 2024, the National Artificial Intelligence Regulatory Authority Bill was also presented for its first reading. In December 2024, the bills introduced to Parliament between 2021 and 2024 were consolidated for second reading.
In 2024, Nigeria also published its National AI Strategy, which highlights crucial AI governance proposals, including: developing national AI principles; establishing an AI governance regulatory body; publishing transparent terms and guidelines for responsible AI development and deployment; and developing a comprehensive risk management framework that minimizes the potential negative impacts of AI deployment and use.
In 2025, Nigeria maintained momentum on AI regulation with the introduction of the National Artificial Intelligence Commission (Establishment) Bill, which proposes establishing a National AI Commission to oversee AI development, and the Nigeria Digital Sovereignty and Fair Data Compensation Bill, 2025. Considerable progress was also made on the National Digital Economy and E-Governance Bill.
The National Digital Economy and E-Governance Bill
The National Digital Economy and E-Governance Bill is designed as a foundational statute for Nigeria's digital public infrastructure and governance ecosystem, aiming to modernize public sector administration by establishing a legal equivalence between electronic and paper-based processes. It applies to individuals and public institutions engaged in electronic transactions and records, and trust service providers operating within or in relation to Nigeria.
The bill seeks to address key aspects of the digital economy, including electronic commerce, digital government transformation, consumer protection in online transactions and cybersecurity compliance.
On AI governance, it would position the National Information Technology Development Agency as the central authority responsible for regulating the digital economy, including AI systems deployed in both public and private sectors. The bill would introduce a risk-based framework for AI systems, requiring that they be designed and deployed in a manner that is fair, transparent, secure, nondiscriminatory and subject to human oversight. Developers and deployers of AI systems would be required to implement governance and risk management measures proportional to the potential impact of their systems.
While the bill does not specifically classify AI risks, it would empower the NITDA to develop regulations considering: the purpose and context for use of the AI system; the nature and extent of its application; the likelihood and severity of damages and impacts that may result from its use; the reversibility of its effects on the rights of affected persons; its degree of autonomy and the possibility of human oversight or intervention; and the level of control exercised by an AI agent over the system function and outcomes.
The bill would also require AI agents to cooperate with the NITDA and the Nigeria Data Protection Commission to ensure compliance with relevant data protection laws.
A notable feature of the proposal is its formal establishment of a regulatory sandbox for digital and AI innovation. This mechanism would allow companies to test new AI-driven products and services within a controlled regulatory environment, subject to oversight and predefined safeguards. While intended to promote innovation, participation in the sandbox would also provide regulators with early visibility into emerging technologies and associated risks.
Enforcement powers under the bill would be extensive. It would authorize the NITDA to conduct compliance audits, accredit AI auditors, require corrective measures, suspend AI systems deemed to pose imminent risks, and impose administrative penalties. Depending on the severity of the breach, sanctions could include fines of up to NGN10 million, approximately USD7,332, or up to 2% of an entity's preceding annual gross revenue in Nigeria.
The Digital Sovereignty and Fair Data Compensation Bill
This bill seeks to establish a framework for Nigeria's digital sovereignty, AI governance and fair compensation for the use of Nigerian data. It applies primarily to foreign digital companies generating revenue from Nigerian users, particularly those involved in online advertising, cloud computing, AI training or digital transactions.
It aims to ensure data generated within Nigeria remains in Nigeria by requiring foreign digital companies to contribute fairly to the Nigerian economy, preventing unregulated extraction of Nigerian data, promoting local AI innovation and research, and strengthening national security through local data storage. Additionally, the bill aims to promote AI innovation and research within Nigeria, strongly advocating for AI-focused research and development centered on Nigeria, with a strong focus on financial compensation mechanisms and regulatory penalties.
The bill would require that all data belonging to Nigerian users collected by foreign digital companies be stored and processed within Nigeria, and that such companies establish local data centers or mirror servers in the country. It would also require that all cross-border data transfers be approved by the NITDA, supported by mandatory proof of compliance with Nigeria's data security laws.
The Nigeria AI Development Fund would be established under the bill and foreign companies that use Nigerian data to train AI models would be required to contribute 2% of their annual revenue from Nigeria to the fund. Additionally, any AI company operating in Nigeria would be required to ensure that at least 30% of its AI research and development using Nigerian data is conducted in the country. Financial sanctions under the bill would range from 10% of annual Nigerian turnover to NGN1 billion, approximately USD733,259.
The bill raises potential conflicts with existing laws, particularly the Nigeria Data Protection Act. The NDPA permits cross-border data transfers based on adequacy decisions, appropriate safeguards and recognized derogations, without requiring prior authorization from a regulatory authority. However, privacy professionals can navigate this tension by paying close attention to the purpose of the transfer and the context of data use, ensuring that any required approvals for cross-border transfers are obtained where applicable.
What AI governance professionals should watch closely
Both bills make it clear that the NITDA will be at the forefront of AI regulation in Nigeria. While the Nigeria Data Protection Commission will be actively involved, the NITDA will take the lead. This will require close monitoring of the agency's activities, guidance notes and AI-related regulations. Privacy professionals should also seek guidance on AI risk classification, audit expectations and transparency requirements, especially once the bills take effect.
Data localization and AI training obligations may require fundamental changes to AI system architecture, data pipelines and vendor arrangements. This requires privacy and AI professionals to assess whether AI models rely on Nigerian data and whether existing cross-border processing arrangements can be reconciled with the proposed localization regime.
The financial implications of both bills are significant. Digital services taxes, AI data compensation contributions and revenue-based penalties pose material regulatory risk that must be factored into product design, market-entry strategies and contractual arrangements. Given the scale of the potential financial sanctions, compliance should be treated as non-negotiable.
Finally, both bills signal a clear expectation that organizations deploying and developing AI systems in Nigeria adopt mature, documented AI governance frameworks. This includes formal risk assessments, human oversight mechanisms, auditability and clear internal accountability for AI-related decisions. Organizations that proactively embed these controls will be better positioned as Nigeria transitions from legislative experimentation to active enforcement.
