Taking effect on January 1, 2026, the Texas Responsible Artificial Intelligence Governance Act, often called TRAIGA, is a new law that sets rules for how artificial intelligence (AI) can be used in Texas. TRAIGA is designed to make sure AI is used responsibly, to protect people from harm, and to encourage innovation.

TRAIGA applies to businesses and organizations that use, develop, or sell AI systems in Texas, and to government agencies in Texas that use AI.
TRAIGA represents a significant but targeted approach to AI regulation, distinguishing itself from broader, risk-based frameworks seen in other jurisdictions. Its primary focus is on preventing intentional misuse of AI, rather than imposing sweeping compliance obligations on all AI systems or high-risk categories. This intent-based liability framework means that only deliberate, purposeful conduct—such as intentional discrimination, manipulation, or unauthorized biometric data collection—will result in enforcement action. Accidental or unintentional harms are not actionable under the Act, which reduces compliance burdens for businesses and government agencies while still addressing the most egregious risks associated with AI.
The Act’s scope is broad in terms of covered entities, applying to any business or government agency that develops, deploys, or provides AI systems affecting Texas residents, regardless of where the entity is based. However, the definition of “consumer” is limited to individuals acting in a personal or household capacity, excluding employment and commercial contexts. This means that many business-to-business and employment-related uses of AI are outside the Act’s direct reach.
TRAIGA’s prohibitions are clear and specific. For example, it bans the use of AI to intentionally manipulate human behavior, incite harm, or discriminate against protected classes. It also prohibits the intentional generation or distribution of illegal sexual content, including deepfakes and child pornography, and restricts government use of AI for social scoring or biometric identification without consent. These prohibitions are designed to address the most serious and widely recognized risks of AI, while avoiding overregulation of beneficial or low-risk uses.
The Act’s disclosure and consent requirements are particularly important in consumer-facing and healthcare contexts. Any entity deploying AI systems that interact with consumers must provide clear, conspicuous, and plain-language disclosures, and must avoid manipulative interface designs. In healthcare, providers must disclose AI use to patients at the time of treatment, or as soon as possible in emergencies. These requirements promote transparency and informed consent, helping to build public trust in AI systems. Tex. Bus. and Comm. Code § 552.051.
TRAIGA also addresses privacy concerns by amending Texas’s biometric privacy law to clarify and strengthen notice and consent requirements for the collection and use of biometric identifiers by AI systems. Government agencies are generally prohibited from using AI for biometric identification without the individual’s consent, except for routine photos and audio, and only if such use does not violate rights. This reflects a cautious approach to sensitive personal data, balancing innovation with individual privacy rights.
The regulatory sandbox program is a notable innovation, allowing developers to test AI systems in a controlled environment with certain legal protections and waivers from licensing or regulatory requirements during the testing period. This encourages innovation and experimentation while maintaining oversight and consumer protection. However, public information requirements under the Texas Public Information Act cannot be waived, ensuring continued transparency and accountability. Tex. Bus. and Comm. Code § 553.051.
The Texas Artificial Intelligence Council serves as an advisory body, providing guidance to the legislature and state agencies, identifying laws that impede innovation, evaluating regulatory capture, and overseeing the regulatory sandbox program. The Council does not have rulemaking authority, but its reports and recommendations may influence future legislative and regulatory developments. Tex. Bus. and Comm. Code § 554.001.
Enforcement is centralized with the Texas Attorney General, who has exclusive authority to investigate and prosecute violations. There is no private right of action, and the Attorney General must provide notice and a 60-day cure period before bringing an enforcement action. Civil penalties start at $10,000 per violation and may scale up, with a maximum penalty of $100,000 in some cases. No action may be brought for AI systems that have not been deployed. Affirmative defenses and safe harbors are available, particularly for entities that have adopted recognized risk management frameworks.
For public sector entities, TRAIGA imposes additional requirements, including the adoption of an AI system code of ethics aligned with the NIST AI Risk Management Framework, and the development and adoption of minimum risk management and governance standards for “heightened scrutiny” AI systems. These requirements promote responsible AI use, human oversight, fairness, transparency, privacy, accountability, and regular evaluation in government operations. Tex. Gov’t. Code § 2054.702; Tex. Gov’t. Code § 2054.703.
