The White House released its vision for artificial intelligence policy 23 July with a heavy focus on breaking down barriers to the technology’s innovation and adoption, including another attempt to stop states from enacting their own AI regulation.
The 28-page "America’s AI Action Plan" stems from President Donald Trump's January executive order on AI and is part of a marked tone shift toward policy aimed at fostering U.S. AI dominance in the face of fierce competition from China. Trump directed agencies to come up with a plan after extensive public comment from academia, civil society and industry. Additional executive orders putting some of the plan's points into action are expected, Reuters reports.
Key features of the plan include leveraging federal agencies to develop new standards and reimagine some existing ones, such as the National Institute of Standards and Technology's AI Risk Management Framework. It includes direction to revisit current regulations to see if any pose a hindrance to AI development and a focus on protecting free speech and fairness in large language models.
"This plan galvanizes Federal efforts to turbocharge our innovation capacity, build cutting-edge infrastructure, and lead globally, ensuring that American workers and families thrive in the AI era. We are moving with urgency to make this vision a reality,” said White House Office of Science and Technology Policy Director Michael Kratsios in a press release on the plan.
Removing regulatory barriers
The first pillar of the plan is aimed at fostering AI innovation by speeding up adoption, investing in worker training and removing red tape while protecting free speech, according to the plan.
It directs the Office of Management and Budget to work with federal agencies that have AI-related discretionary funding to consider a state's regulatory landscape when deciding whether to award money. It also recommends the Federal Communications Commission evaluate "whether state AI regulations interfere with the agency's ability to carry out its obligations and authorities under the Communications Act of 1934."
The plan also calls on the Federal Trade Commission to review investigations from previous administrations "to ensure that they do not advance theories of liability that unduly burden AI innovation," as well as to review "all FTC final orders, consent decrees, and injunctions" and, "where appropriate, seek to modify or set-aside" any that unduly burden AI innovation.
Both provisions are aimed at limiting states' willingness to regulate AI, reflecting an argument promoted by technology companies that do not want to face a patchwork of differing state laws.
The Trump administration tried to restrict states' ability to do so this summer through a 10-year moratorium on state AI legislation in the reconciliation bill; the provision was ultimately defeated in the U.S. Senate. It would have tied states' broadband funding to compliance with the moratorium, but it was removed after advocates for states' rights, AI safety and children's online safety argued consumers would be disproportionately harmed by the provision.
"AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level," the plan reads. "The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation."
The plan also calls for a revision to federal procurement standards, arguing any AI that does business with the government must reflect "truth rather than social engineering agendas." It directs NIST to revise its management framework to remove references to misinformation, diversity, equity and inclusion, as well as climate change.
New sandboxes, evaluations
But the plan also recommends several ways to evaluate AI and develop standards.
It calls for establishing regulatory sandboxes and "AI Centers of Excellence" around the country to help businesses and researchers test AI tools and share data with the government before they go to market. Development of national standards for AI systems and their effect on certain sectors, such as health care, energy and agriculture, would be led by NIST.
To make better-quality data available, the plan recommends incentivizing researchers to release datasets by tying their cooperation to future funding proposal reviews. It would require federally funded researchers to disclose the "non-proprietary, non-sensitive datasets" used by AI models during experimentation.
A section of the plan is dedicated to building an "evaluations ecosystem" in the U.S., calling rigorous evaluations a "critical tool in defining and measuring AI reliability and performance in regulated industries." It recommends creating guidelines for federal agencies to evaluate their own AI use and convening the NIST AI Consortium to establish new measurement metrics to promote AI's development.
Preventing risks
The plan touches on the need for better cyber incident response protocols, highlighting potential national security risks and vulnerabilities such as data poisoning and privacy attacks, which can affect AI systems' outputs. This would include evaluating frontier AI models for any potential national security risks they might pose.
"Because America currently leads on AI capabilities, the risks present in American frontier models are likely to be a preview for what foreign adversaries will possess in the near future," the plan reads. "Understanding the nature of these risks as they emerge is vital for national defense and homeland security."
It recommends the U.S. Department of Defense work with NIST to continue developing the agency's responsible AI and generative AI frameworks. It also charges the U.S. Office of the Director of National Intelligence with publishing an IC Standard on AI assurance under the auspices of Intelligence Community Directive 505 on Artificial Intelligence.
The U.S. government should also promote the creation of AI incident response plans, the roadmap recommends, and incorporate them into best practice standards for both the private and public sectors. It calls for modifying the Cybersecurity and Infrastructure Security Agency's Cybersecurity Incident & Vulnerability Response Playbooks to account for AI systems and for requiring chief information security officers to work with AI-related agency officials in developing those updates.
Initial reactions
As news of the plan and orders began trickling out in the media, civil society groups began to rally. Dozens of privacy and AI safety groups banded together ahead of the plan's release to sign the People's AI Action Plan, a joint statement urging the White House to focus on the environmental and social needs of Americans over the technology industry's desires.
"We can't let Big Tech and Big Oil lobbyists write the rules for AI and our economy at the expense of our freedom and equality, workers and families' well-being, even the air we breathe and the water we drink — all of which are affected by the unrestrained and unaccountable roll-out of AI," the statement reads.
After the plan was released, Center for Democracy and Technology Vice President of Policy Samir Jain characterized the plan as "unbalanced," saying its positives, including the promotion of open-source and open-weight systems, support for evaluations and a focus on security, did not outweigh its push to preempt state-level regulation and to regulate AI's truthfulness.
"The government should not be acting as a Ministry of AI Truth or insisting that AI models hew to its preferred interpretation of reality," he said in a statement. "There is no reason to weaken the AI Risk Management Framework by eliminating references to some of the real risks that AI poses."
Consumer Reports highlighted the plan's directive to the FTC, saying it could give AI developers "free rein" to create products that cause harm, such as sexually explicit deepfakes, voice cloning and therapy chatbots, without consequences.
"When a company makes design choices that increase the risk their product will be used for harm, or when the risks are particularly serious, companies should bear legal responsibility,” Consumer Reports Director of Technology Policy Justin Brookman said.
Industry groups were more receptive. The U.S. Chamber of Commerce called it a "forward-thinking plan," saying it would fix a "regulatory landscape hobbled by conflicting state-level laws and activist-driven overreach" while "streamlining permitting for critical AI infrastructure, ensuring reliable and affordable energy for consumers and businesses, and advancing U.S. leadership in AI diplomacy."
The Business Software Alliance praised the administration's approach to talent and workforce development, data infrastructure and AI governance as the way to boost AI adoption. It also lauded the administration for upholding NIST and the Center for AI Standards and Innovation's continued involvement in standard setting.
"Policymakers who are thinking about AI competitiveness must focus on AI adoption; countries that effectively adopt AI are best positioned to lead economically and benefit across sectors," BSA CEO Victoria Espinel said.
But Dentons US AI Advisory Team Lead Peter Stockburger told the IAPP a lighter federal touch does not necessarily mean quicker AI adoption, noting the plan's approach toward state regulation creates additional uncertainty.
"For organizations with mature governance frameworks, this means navigating a more complex compliance landscape, but also gaining new opportunities: the plan’s emphasis on open-source models, expanded access to high-quality datasets, infrastructure, and improved federal guidance on AI evaluation and transparency could strengthen responsible deployment practices," he said.
Stockburger added, "Ultimately, much of the plan's impact will depend on how federal agencies implement these directives and how the balance between federal leadership and state autonomy evolves in the coming months."
Caitlin Andrews is a staff writer for the IAPP.