Vietnam is accelerating its push to emerge as a regional technology powerhouse, with strategic investments in artificial intelligence and semiconductors underscoring this ambition. Government officials highlighted the technologies as twin engines for innovation and global competitiveness, aiming to nurture talent and build intelligent manufacturing centers.
To realize this goal, the country passed a principle-based framework under the Law on Digital Technology Industry in mid-2025, then, less than three months later, expedited a draft Law on Artificial Intelligence to replace that framework.
Enacted 10 Dec. 2025 and effective 1 March 2026, Vietnam's first standalone AI law positions the country among early adopters in the region, emphasizing a pro-innovation stance that balances growth with safeguards. In a clear manifestation of the Brussels effect, the law adopts risk-based management, with risk classification and concepts similar to those under the EU AI Act.
Notably, it fully supersedes the nascent AI provisions in the Law on Digital Technology Industry, which took effect 1 Jan. 2026, consolidating oversight under a unified framework to streamline compliance.
Fundamental principles
The AI Law's foundational principles, outlined in Article 4, prioritize human-centered AI that safeguards human rights, privacy, national interests and security while ensuring compliance with Vietnam's Constitution and laws.
Key tenets include maintaining human control over AI decisions, promoting fairness, transparency, non-bias and accountability, and aligning with ethical standards and the country's cultural values. The law also encourages green, inclusive and sustainable AI development, focusing on energy efficiency and environmental protection. This approach mirrors the EU AI Act's emphasis on trustworthy AI, particularly in human-centric design and transparency.
Prohibited acts
Article 7 of the AI Law establishes a list of unlawful AI-related activities. It prohibits exploiting AI for unlawful purposes, including infringing on rights, simulating real people or events to deceive or manipulate perceptions, exploiting vulnerable groups, and disseminating harmful forged material that threatens national security or public order. It also bans unlawful data processing in violation of data protection, intellectual property or cybersecurity laws, as well as obstructing human oversight or concealing mandatory disclosures.
These prohibitions serve as a baseline standard for all activities, regardless of the stage of the AI life cycle or the risk classification of the system.
Similar to the EU AI Act's bans on manipulative AI and social scoring, Vietnam's prohibitions target unacceptable-harm practices but lack the EU's detailed categories, such as untargeted facial scraping. This intentional breadth grants local authorities extensive enforcement powers, enabling flexible interpretation and application down the line.
Risk-based classification and governance
AI systems are classified into three tiers: high-risk, with potential for significant harm to life, health, rights or national security; medium-risk, with a risk of user confusion from undisclosed AI interactions or AI-generated content; and low-risk, covering all others.
Classification criteria in Article 9 include impact on rights and safety, user scope, influence scale and application fields — such as essential sectors like health care. These criteria will be subject to further elaboration by the government. Unlike the EU's exhaustive high-risk annexes, Vietnam defers detailed lists to the prime minister, potentially enabling quicker adaptations to emerging risks.
Providers must self-classify their AI system before putting it into use and notify the Ministry of Science and Technology for medium- or high-risk systems through the national one-stop AI portal, and coordinate reclassification if modifications result in new or higher risks. Vietnam's system promotes proportionate governance by allowing voluntary low-risk disclosures.
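The self-classification step can be pictured as a simple decision procedure. The sketch below is illustrative only: the flag names and ordering are assumptions paraphrasing the statutory tiers, since the binding criteria await the prime minister's list and the government's guiding decree.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class AISystemProfile:
    # Hypothetical self-assessment flags loosely paraphrasing the law's
    # three tiers; the official criteria are not yet published.
    may_significantly_harm_life_health_rights_or_security: bool
    undisclosed_user_interaction: bool
    generates_undisclosed_synthetic_content: bool

def self_classify(p: AISystemProfile) -> RiskTier:
    # Order matters: high-risk criteria trump medium-risk ones.
    if p.may_significantly_harm_life_health_rights_or_security:
        return RiskTier.HIGH
    if p.undisclosed_user_interaction or p.generates_undisclosed_synthetic_content:
        return RiskTier.MEDIUM
    return RiskTier.LOW
```

Under the law, a medium- or high-risk result would then trigger notification to the Ministry of Science and Technology via the one-stop AI portal, and any modification raising the tier would require reclassification.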
Role-driven accountability
The law's Article 3 defines roles across the AI supply chain: developers, who design and train systems; providers, who place them on the market; deployers, who use them professionally; users, who interact with them directly; and affected persons, who are impacted by their operation.
This chain of responsibility parallels the EU AI Act's provider-deployer distinction but deviates slightly from international standards by carving out research and development roles, exempting nonmarket efforts to incentivize innovation.
Throughout the law, the provider and the deployer shoulder most obligations while the developer is only subject to some general incident response requirements.
AI incident response
Under Article 12, all stakeholders share the responsibility for maintaining system safety, security and reliability, including proactive detection and remediation of potential harms to individuals, property, data or social order.
In the event of a serious incident — defined as events causing or risking significant damage to life, health, rights, property, cybersecurity or critical systems — developers and providers must urgently implement technical fixes, suspend operations or withdraw the system while notifying state authorities. The law, however, remains silent on exact timelines, which could complicate rapid response protocols.
Deployers and users, meanwhile, are obligated to log incidents, report them promptly and collaborate on remediation efforts.
All incident reporting and resolution processes must funnel through the centralized one-stop AI portal, a mechanism intended for efficiency but whose operational details, including data submission formats and access protocols, await further clarification via a guiding decree of the government.
Transparency responsibilities
Both providers and deployers must uphold transparency obligations under Article 11 throughout the life cycle of AI systems, products or content delivery to users.
For providers, AI systems designed for direct human interaction must enable clear recognition of the artificial nature, and generated audio, image or video content requires machine-readable markings.
For deployers, clear notifications are required when providing public-facing text, audio, images or videos created or edited by AI if they risk misleading audiences about real events or people, and specific, easily discernible labeling must be applied to simulated content mimicking real persons, voices or events. Those operating in the entertainment or creative industries must ensure labeling of cinematographic, artistic or creative works is appropriate and does not hinder display or enjoyment.
The government will specify detailed formats for notifications and labels to standardize these transparency measures across applications.
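To make the machine-readable marking obligation concrete, here is a minimal stdlib sketch that embeds a label into a generated PNG as a standard tEXt metadata chunk. The keyword "AIGenerated" and the approach itself are assumptions for illustration; Vietnam's prescribed formats are not yet published, and production systems would more likely follow an established provenance standard such as C2PA.

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def mark_png_ai_generated(png: bytes, keyword: bytes = b"AIGenerated",
                          value: bytes = b"true") -> bytes:
    """Insert a machine-readable tEXt chunk immediately after the IHDR chunk.

    The "AIGenerated" keyword is a placeholder, not a prescribed format.
    """
    if not png.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    data = keyword + b"\x00" + value  # tEXt payload: keyword, NUL, value
    chunk = struct.pack(">I", len(data)) + b"tEXt" + data
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + data))  # CRC over type+data
    # 8-byte signature + 25-byte IHDR chunk (4 length + 4 type + 13 data + 4 CRC)
    split = 8 + 25
    return png[:split] + chunk + png[split:]
```

A tEXt chunk may legally appear anywhere between IHDR and IEND, so inserting it right after the header keeps the image valid for standard decoders while making the marking trivially discoverable by automated tools.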
Management of high-risk AI systems
Periodic audits and post-market surveillance apply to high-risk systems, with heightened scrutiny in critical sectors like health care and education.
High-risk systems require rigorous compliance measures, including risk assessments, human oversight, registration in a national database and incident reporting. Drawing inspiration from the EU AI Act's pre-market requirements, Vietnam's law introduces a tiered approach with mandatory conformity certification for select systems on a prime minister-issued list. For other high-risk systems, providers can opt for self-assessment or hire registered organizations, a deviation from the EU's more uniform third-party involvement that potentially eases administrative burdens.
Notably, foreign providers of high-risk AI systems in Vietnam must establish a local contact point; those offering systems that require pre-use conformity certification must go further and establish a commercial presence or appoint an authorized representative.
The law again defers further requirements and elaborations on the local contact point and commercial presence to the government's guiding decree.
Management of medium- and low-risk systems
Medium-risk systems are supervised through reports, sample audits or assessments by independent organizations, while low-risk systems are monitored and audited upon incidents, complaints or when necessary to ensure safety, without imposing undue obligations.
Providers of medium-risk AI bear accountability duties, detailing system purposes, operational principles, key inputs and safety measures upon agency requests during audits or incident-risk signals, but the provision explicitly shields source code, algorithms and trade secrets from disclosure.
Deployers face similar accountability for operations, risk controls and incident responses in medium-risk scenarios, triggered by inspections or harms, yet the law's vague phrasing on "protecting legitimate rights" risks inconsistent enforcement without defined thresholds.
Low-risk systems face minimal obligations, with encouragement for voluntary standards, self-regulation and basic disclosures to build trust.
Handling violations and liability to compensate for damage
Article 29 of the AI Law establishes a general foundation for liability and enforcement against noncompliance, including administrative sanctions, potential penal liability and civil damages.
Unlike Vietnam's Personal Data Protection Law, which sets specific fine caps, the AI Law provides no framework for administrative sanctions, deferring details entirely to a government decree.
Differing from the EU AI Act's fines of up to 7% of global annual turnover, Vietnam's framework focuses on proportionality without caps, potentially leading to lower financial risks but broader civil remedies. It emphasizes fault-based allocation, contrasting with the EU's stricter no-fault elements in some liability proposals.
In particular, for high-risk systems, deployers must compensate the victim first and may seek reimbursement from the developer and/or the provider per their agreements. Exemptions apply in cases of victim fault or force majeure. Where a third party intrudes into or hijacks the system, providers and deployers bear joint liability if they are at fault.
Incentive policies
To spur innovation, the law offers support like the National AI Development Fund for research and development grants, regulatory sandboxes with simplified procedures and exemptions, and AI clusters in high-tech parks with tax breaks and infrastructure perks. Enterprises sharing data or models also receive preferences.
Unlike the EU AI Act's rule-focused harmonization without direct funding, the law's incentives clearly demonstrate the country's core objective of attracting investments amid global AI competition.
Grace periods for pre-existing AI systems
Transitional provisions grant grace periods for AI systems placed on the market before the law's 1 March 2026 effective date: 18 months, until 1 Sept. 2027, for those in health care, education and finance; and 12 months, until 1 March 2027, for others. Systems may operate during this time unless deemed to pose a risk of causing serious damage, in which case local authorities may order suspension or termination of operations.
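The deadline arithmetic above can be verified with a few lines of stdlib Python. The sector names and helper below are illustrative assumptions, not statutory terms:

```python
from datetime import date

EFFECTIVE_DATE = date(2026, 3, 1)  # the AI Law takes effect
# Sectors the law singles out for the longer, 18-month grace period.
PRIORITY_SECTORS = {"health care", "education", "finance"}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, preserving the day of month."""
    year_delta, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + year_delta, month=month_index + 1)

def compliance_deadline(sector: str) -> date:
    """Grace-period deadline for a pre-existing system in the given sector."""
    months = 18 if sector in PRIORITY_SECTORS else 12
    return add_months(EFFECTIVE_DATE, months)
```

Running `compliance_deadline("health care")` yields 1 Sept. 2027 and any other sector yields 1 March 2027, matching the transitional provisions.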
This phased rollout is more generous than the EU AI Act's transition timeline, under which prohibitions applied from 2 Feb. 2025, general-purpose AI obligations and penalties from 2 Aug. 2025, and high-risk requirements generally from 2 Aug. 2026, with general-purpose AI models already on the market required to be compliant by 2 Aug. 2027.
The timelines partially coincide, with Vietnam's effective date following the EU's initial applications but aligning on full compliance for existing systems around mid-2027, facilitating potential cross-jurisdictional harmonization.
Expanded legal landscape on the horizon
In the coming months, local lawmakers will release draft versions of several key implementing documents for public comments: the government's guiding and sanctioning decrees, the prime minister's list of high-risk AI systems, and the Ministry of Science and Technology's National AI Ethics Framework. These will clarify risk criteria, procedures and penalties, shaping practical enforcement.
Stakeholders should closely track the development of these guiding regulations, offer comments and recommendations, and conduct preliminary compliance assessments to prepare for upcoming requirements.
Thu Minh Le, CIPP/E is a senior associate at BMVN International, and Alex Do, CIPP/E, is an IPTech executive cum patent coordinator at BMVN International, in alliance with Baker McKenzie Vietnam.


