The White House executive order aimed at limiting the impacts of U.S. state-level AI laws has finally landed. U.S. President Donald Trump signed the order 11 Dec., setting in motion agency mandates that seek to halt the enforcement of existing state AI laws characterized as "burdensome" while discouraging state legislatures from passing new ones.

The Department of Justice will launch a task force charged with challenging the constitutionality of state AI laws. Within 90 days, the Department of Commerce will provide the task force with analysis and referrals to help the DOJ identify which laws to target.

To support preemption goals, the Federal Communications Commission and Federal Trade Commission will each put forth initiatives toward setting federal standards for AI development and use. The FCC is ordered to consider AI model reporting and disclosure standards, while the FTC will draft a policy statement outlining how the FTC Act preempts state deception statutes that apply to AI.

With the aim of dissuading new legislation, the administration will draft a policy statement outlining states' federal funding eligibility and how AI laws characterized as unconstitutional may affect funding status. The order indicates eligibility for Broadband Equity, Access and Deployment funds and related discretionary grant programs will be impacted.

The final order mirrors a leaked draft that had sat idle since late November. One notable addition to the final text's section on future federal AI legislation is a fresh commitment to avoid preempting state laws in a few specific areas, including those covering children's online safety and state government procurement and use.

During a signing ceremony in the Oval Office, President Trump insisted the U.S. needs "a central source of approval" for AI development and use, noting "50 different approvals from 50 different states" will make it impossible for the country to continue its AI leadership. "We have to be unified," he added.

Trump also took aim at "hostile actors" with onerous AI laws already on the books, citing California, Illinois and New York. Meanwhile, the final order uses Colorado's AI Act as an example of the type of law that counters innovation goals.

The order carves out a big role for White House Special Advisor for AI and Crypto David Sacks. He will be the chief consultant on all department mandates laid out in the order, while joining White House Office of Science and Technology Policy Director Michael Kratsios in delivering policy recommendations to Congress for appropriate federal regulation.

"It basically states the policy of your administration is to create that federal framework," Sacks said during the signing ceremony. "We're going to work with Congress ... to define that framework, but in the meantime, this (order) gives (Trump) tools to push back on the most onerous and excessive state regulations."

The order follows congressional failures to advance the state AI moratorium proposals Trump supported. Senate Committee on Commerce, Science and Transportation Chair Ted Cruz, R-Texas, raised the initial moratorium proposal, which was left out of July's reconciliation bill and reconsidered in National Defense Authorization Act negotiations before ultimately being abandoned.

"There was a similar inflection point with the dawn of the internet. Bill Clinton was president at the time and signed an executive order like (Trump is) doing that put into law a light-touch regulatory approach to the internet," Cruz said at the signing. "The result was incredible economic growth and jobs in the U.S."

House Republicans have warmed to Trump and Cruz's sentiments on how state laws restrict AI innovation and global leadership. Cruz's Senate colleagues have proven to be the hurdle, with bipartisan opposition to the moratorium and any other vehicle to limit state laws without a federal backstop in place.

Outlook for operationalization

Though the final order tightened language around agency mandates, how determinations and processes under those mandates will be carried out is not yet clear.

Particularly with the Department of Commerce's evaluation of state AI laws, there are questions about the full criteria for determining whether the DOJ task force will litigate against an existing statute. The order indicates Commerce will "at a minimum, identify laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution."

Center for Democracy and Technology Director of State Engagement Travis Hall spent more than six years working on digital policy matters at the National Telecommunications and Information Administration, a bureau of the Department of Commerce. He indicated the outlined determinations process will "likely drive even greater regulatory uncertainty than the current development of state laws."

"From what we have seen I do not believe that this would be a process that would be run in a manner that was based on technical or legal analysis rather than political considerations," Hall said.

The cues for potential DOJ task force litigation also raise ambiguity. The unit will work off Commerce recommendations and sue states "on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General's judgment."

Hall said setting a threshold for impacts to interstate commerce "would likely be arbitrary."

Suing a state to enjoin an allegedly unconstitutional law is not common in the scope of consumer protection, according to Womble Bond Dickinson Partner Tyler Bridegan, CIPP/E, CIPP/US, CIPM, who formerly served as director of privacy and technology enforcement for the Texas attorney general's office. He said the strategy is more common with voting and immigration matters.

Through the task force's work, the administration may seek to pause or nullify parts of a law rather than entire statutes, an approach Bridegan said courts are more prone to take. That could leave companies in a more complex compliance position depending on the situation and jurisdiction.

"In other contexts, some state attorneys general have proactively said they won’t enforce a law during the pendency of a legal challenge. Where that hasn’t happened, companies often find themselves in a bind where they're potentially operating in violation of a state law that might ultimately be enjoined," Bridegan said.

Another impactful aspect of the order that raises questions is the FCC's role. Under the order, the agency will "initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws."

Former FCC Senior Agency Official for Privacy Elliot Tarloff does not see an express or implied authority the administration can rely on to give the FCC the powers outlined in the order.

"I can't think of any statutory hook for the FCC to engage in such preemption, let alone a statutory hook that would satisfy the more stringent standard of review that would apply to the FCC's interpretation of the Communications Act after Loper Bright's rejection of Chevron deference."

As to whether the FCC is a fit for promulgating AI standards, Tarloff "wouldn't say that AI itself fits organically into any of the FCC's domains of expertise" while adding Congress, not any federal agency, is the more appropriate place to create rules.

However, the FCC will play its role regardless of fit. That will likely start, according to Tarloff, with a notice of inquiry rather than a formal rulemaking.

"In fairness, the order directs the FCC chair only to 'initiate a proceeding,'" he said. "In an NOI, the agency will not set the proverbial point through an initial proposal. And all interested parties, including the states that might be subject to preemption, have an opportunity to establish the record that would inform whatever standards the FCC might ultimately propose or adopt."

States, stakeholders weigh in

New York's proposed Responsible AI Safety and Education Act and California's Transparency in Frontier Artificial Intelligence Act are among the laws the Trump administration is taking aim at.

Gov. Gavin Newsom, D-Calif., signed California's AI transparency law in September after vetoing a similarly formatted framework the year prior. It contains product safety provisions, including transparency standards incorporated into development and a method for concerned individuals to report safety incidents to the state.

In response to the executive order, Newsom's press office issued a statement opposing the attempts to limit state AI laws' impacts while highlighting how California balances innovation and meaningful guardrails.

In the statement, Newsom alleged the administration's order was designed to "push the limits to see how far they can take it."

"California is working on behalf of Americans by building the strongest innovation economy in the nation while implementing commonsense safeguards and leading the way forward," he added.

Gov. Kathy Hochul, D-N.Y., is weighing her options with the RAISE Act. Axios reports she is pitching amendments to the legislature-approved bill, including requirements for developers to publicly disclose their safety practices and report safety incidents. Hochul has until the end of the year to sign the bill into law.

Despite signs of indecision, Hochul characterized her bill as having "some of the nation’s strongest AI safeguards to protect kids, workers and consumers" while rebuking the White House's action.

"Now, the White House is threatening to withhold hundreds of millions of dollars in broadband funding meant for rural upstate communities, all to shield big corporations from taking basic steps to prevent potential harm from AI," she said in a statement.

R Street Institute Resident Senior Fellow for Technology and Innovation Adam Thierer testified before Congress twice this year, advocating for state AI law preemption as a commonsense solution to a growing and potentially onerous legal patchwork. He was also a proponent of Congress' proposed AI moratorium.

In his analysis, Thierer characterized the executive order as "a welcome development that can help recalibrate AI policy."

"The new (order) represents an attempt by the Trump administration to do what it can in the short term to discourage state and local regulatory overreach and safeguard American AI innovation and leadership going forward," Thierer wrote, noting Congress must step up with a legislative framework that codifies preemption provisions.

U.S. Chamber of Commerce Technology Engagement Center Senior Vice President Jordan Crenshaw also welcomed the order, highlighting how it will support the AI endeavors of small businesses and startups. Like Thierer, Crenshaw implored federal lawmakers to work toward a preemptive standard.

"It is also essential that Congress act to establish a federal AI framework to make permanent a clear, consistent national policy to deliver the certainty and stability the business community needs to harness the full potential of artificial intelligence and give American businesses an edge," he said in a statement.

Joe Duball is the news editor for the IAPP.