Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
The Texas Responsible Artificial Intelligence Governance Act will take effect 1 Jan. 2026. It applies broadly to all businesses operating in Texas that use an artificial intelligence system in the state, as well as to companies whose products or services are used by Texas residents.
The TRAIGA targets AI systems, defined as "any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations that can influence physical or virtual environments." Its scope includes both the developer and deployer of the AI system. Businesses can use the following sample policy framework as a starting point for TRAIGA compliance.
Policy
Purpose. Provide a high-level description of the intended use, deployment, context and associated benefits of the AI system with which the business is affiliated.
Type of data. Offer a detailed description of the data used to program or train the AI system in use.
Categories of data. Classify the categories of data processed as inputs and produced as outputs.
AI system performance evaluation. Establish metrics to evaluate the performance of the AI system. How will the business know if the AI system is doing what it is supposed to do? A recommended approach is to conduct performance evaluations semi-annually for the first two years and annually thereafter.
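The cadence suggested above can be expressed as a simple scheduling check. This is an illustrative sketch only; the statute does not prescribe an evaluation schedule, and the function name and logic are this example's own:

```python
from datetime import date

def evaluation_due(deployed: date, today: date) -> bool:
    """Return True when a performance evaluation falls due under the
    suggested cadence: semi-annual for the first two years after
    deployment, annual thereafter. Illustrative only, not statutory."""
    months = (today.year - deployed.year) * 12 + (today.month - deployed.month)
    if months <= 0:
        return False
    interval = 6 if months < 24 else 12
    return months % interval == 0
```

For example, a system deployed 1 Jan. 2026 would come up for review in July 2026, January 2027, July 2027 and January 2028, then annually each January thereafter.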
Post-deployment monitoring. Conduct post-deployment monitoring and a risk assessment following the U.S. National Institute of Standards and Technology's most current "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" or another nationally or internationally recognized risk management framework. The framework should be used to document any known limitations of the AI system and implemented efforts to improve them.
For deployers, this stage should also document how the AI system is overseen and used, and describe its learning process, so that issues arising from the system's deployment can be addressed.
User safeguards. Identify safeguards in place for the business's AI system; designate oversight responsibilities; and describe how the system is monitored, corrected and refined. Businesses should consider testing protocols, including adversarial testing or red teaming, and seek feedback from the developer, deployer or any other entity that believes a violation has occurred.
Anti-discrimination provisions. An AI system should not be developed or deployed with the sole intent to infringe, restrict or otherwise impair an individual's rights guaranteed under the U.S. Constitution. Nor should one be deployed with the intent to unlawfully discriminate against a group or class of persons with a characteristic, quality, belief or status protected from discrimination by state or federal civil rights laws — including race, color, national origin, sex, age, religion or disability — in violation of state or federal law.
Reporting mechanism. Consider implementing an anonymous reporting mechanism that enables individuals to provide information if they suspect a violation of the policy has occurred.
Disclosures. Disclose to anyone using the AI system — either before or at the time of the interaction — that they are, in fact, interacting with an AI system. This disclosure should be clear and conspicuous, written in plain language, and may not use dark patterns as defined under the law.
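As a concrete illustration of the disclosure requirement, a chatbot deployer might emit a plain-language notice before any AI-generated output. The wording and helper function below are hypothetical, not language taken from the statute:

```python
# Hypothetical plain-language notice; wording is illustrative, not statutory.
AI_DISCLOSURE = (
    "Please note: you are interacting with an artificial intelligence "
    "system, not a human."
)

def start_session(send_message) -> None:
    """Deliver the disclosure first, so the user is informed at or
    before the start of the interaction (hypothetical helper)."""
    send_message(AI_DISCLOSURE)

# Example: collect outbound messages and confirm the disclosure leads.
messages: list[str] = []
start_session(messages.append)
```

Sending the notice before the first model response, rather than burying it in terms of service, is one way to make the disclosure "clear and conspicuous" in practice.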
Compliance. Comply with all federal, state and local laws and regulations, and do not engage in any illegal activity, particularly when using AI systems.
Curing violations. If the Office of the Attorney General issues a written notice of a violation of the TRAIGA, a business has 60 days to cure the violation. Businesses should provide the attorney general with written notice of the cure, provide supporting documentation to show the manner in which it was conducted, and revise policies or procedures as necessary to reasonably prevent further TRAIGA violations.
If a violation cannot be cured within 60 days, the business must not use the affected portion of the AI system until the violation has been cured, unless the attorney general grants an agreed-upon extension or the business uses another AI system.
AI regulatory sandbox program
Under the TRAIGA, the state's Department of Information Resources administers the AI regulatory sandbox program. This initiative enables businesses or individuals to test innovative AI systems in the Texas market, with limited market access and legal protections, without obtaining a license, registration or other regulatory authorization. During the testing period, neither the attorney general nor a state agency may file or pursue charges or take punitive action against a program participant.
To participate in the sandbox program, businesses or programs should incorporate the following safeguards and procedural requirements:
- Do not deploy an AI system until it has been approved by the DIR and its intended use is described in detail.
- Conduct a benefit assessment to determine any potential consumer, privacy and public safety impacts.
- Implement a plan for mitigating any adverse consequences that may occur during the testing process and business operations.
- Incorporate a mechanism for seeking feedback from consumers and affected stakeholders who are using the AI system being tested.
- Provide proof of compliance with any applicable federal AI laws and regulations.
- Abide by the testing period, which should last no more than 36 months unless there is good cause, and the DIR approves the extension.
- Provide a quarterly report to the DIR that includes metrics on the AI system's performance, updates on how it mitigates any operational risks, and feedback from consumers and affected stakeholders.
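The quarterly reporting elements listed above can be gathered into a simple record. The structure and field names below are this sketch's own assumptions for illustration, not terms drawn from the statute or DIR guidance:

```python
from dataclasses import dataclass, asdict

@dataclass
class QuarterlySandboxReport:
    """Illustrative container for the quarterly DIR report elements;
    field names are assumptions of this sketch, not statutory terms."""
    quarter: str                    # reporting period, e.g. "2026-Q1"
    performance_metrics: dict       # metrics on the AI system's performance
    risk_mitigation_updates: list   # updates on mitigating operational risks
    stakeholder_feedback: list      # feedback from consumers and stakeholders

# Example report with placeholder values.
report = QuarterlySandboxReport(
    quarter="2026-Q1",
    performance_metrics={"accuracy": 0.94},
    risk_mitigation_updates=["Added human review for flagged outputs"],
    stakeholder_feedback=["Consumers asked for clearer disclosures"],
)
```

Keeping the three statutory elements as distinct fields makes it straightforward to confirm each quarter's submission is complete before it goes to the DIR.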
As the enforcement date of the TRAIGA approaches, it is important for businesses that use or develop AI systems in Texas, or whose products are used by Texas residents, to begin drafting policies and procedures to ensure adherence to the law.
This policy framework provides a strong foundation both for businesses that are new to AI systems and for those familiar with them but seeking to comply with the TRAIGA. Companies must ensure compliance and continue to monitor for future guidance.
Fatima Naeem is an attorney at Naeem Law Firm, PLLC. The views expressed in this article belong solely to the author.