US Sen. Blackburn proposes AI framework to protect children, copyrights

New discussion draft offers Congress a negotiating position as the White House mulls preemption options.


Contributors:

Joe Duball

News Editor

IAPP

U.S. Congress' slow move on federal artificial intelligence policy may be coming to an end. U.S. Sen. Marsha Blackburn, R-Tenn., introduced a fresh discussion draft 18 March aimed at kickstarting lawmaker dialogue toward delivering on the White House's goal to preempt state-level AI legislation, as outlined in its December 2025 executive order.

Blackburn's draft framework primarily focuses on protections and requirements around children's online safety and copyright issues, combining some of her previously introduced bills to create preemptive legislation. The children's provisions are based on the proposed Kids Online Safety Act, while the copyright portions are taken from the NO FAKES Act.

"Instead of pushing AI amnesty, President (Donald) Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation," Blackburn said in a statement. "Congress must answer his call to establish one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance."

For children under age 17, the framework would place a duty of care on developers while requiring AI chatbot safeguards, data protection standards and a consumer mechanism to report AI harms. 

A private right of action is also included for child harms "caused by the AI system for defective design, failure to warn, express warranty, and unreasonably dangerous or defective product claims." Such litigation would be made viable through a proposed sunset of platform liability protections under Section 230 of the Communications Act.

Copyright provisions are highlighted by "new federal transparency guidelines for marking, authenticating and detecting AI-generated content." The framework would also task the U.S. National Institute of Standards and Technology with creating cybersecurity standards that "prevent tampering with provenance and watermarking on AI content."

The draft also requires third-party audits for bias and discrimination based on political affiliation, and measures to boost AI innovation.

Blackburn's discussion draft runs counter to the Trump administration's executive order, which indicated a forthcoming policy recommendation to Congress would avoid proposals that preempt state laws covering children's online safety and "other topics as shall be determined."

According to Axios, Blackburn has been in close contact with the White House, which is soon expected to introduce a separate legislative recommendation that will create a fluid policy discussion alongside Blackburn's draft. The goal is to blend the proposals, as deemed fit and appropriate, and arrive at the "uniform" policy mandated under the executive order.

"It basically states the policy of (the) administration is to create that federal framework," White House Special Advisor for AI and Crypto David Sacks said when the order was signed. "We're going to work with Congress ... to define that framework, but in the meantime, this (order) gives (Trump) tools to push back on the most onerous and excessive state regulations."

How KOSA fits

Blackburn's KOSA efforts have spanned multiple sessions of Congress, with its application in the AI context the latest attempt to get it over the finish line.

There is wide bipartisan support for KOSA in the Senate, where it advanced alongside the Children and Teens' Online Privacy Protection Act on a 91-3 vote in July 2024. That version has since stalled in the House over First Amendment concerns, and the House continues to pursue its own version of the bill.

KOSA's inclusion will test Senate Democrats, particularly Sen. Richard Blumenthal, D-Conn., KOSA's co-sponsor. With expected opposition to the broader Republican approach to AI legislation, Democrats could be left to explain why they would forgo an opportunity to pass legislation they've long sought to finalize.

Digital Smarts Law & Policy Principal Ariel Fox Johnson, CIPP/US, was not surprised to see KOSA pop up in the discussion draft given "online safety concerns for kids on AI are no less great than for kids on all the platforms and apps for which KOSA was initially drafted." One unexpected nuance she highlighted, however, was KOSA's preemption provision, which would allow states to go beyond the federal statute where they see fit to protect their children.

"Possibly lawmakers understand that with respect to kids, it may be very difficult to have a federal ceiling, especially when the states have been so active in passing a variety of kids privacy and safety laws, whereas Congress has been less so," Johnson told the IAPP.

AI chatbot safety

The framework's chatbot and AI companion safety provisions under the Guard Act rely heavily on age verification that applies to accounts belonging to minors under 18.

For both existing and new chatbot users, covered entities will need to collect age-related data and information from a government-issued ID or other "reasonable" verification methods defined by the bill. Verification reviews of previously verified accounts will continue on a rolling basis.

The bill also includes verification data security measures, including specified retention periods and necessity and proportionality standards.

The application of age verification is particularly relevant after the Federal Trade Commission recently issued a policy statement encouraging the use of age verification technologies while forgoing enforcement of verification data practices.

When the statement was released, FTC Bureau of Consumer Protection Director Christopher Mufarrige said the agency's new stance "incentivizes operators to use these innovative tools, empowering parents to protect their children online."

Blackburn's chatbot safeguards also call for required disclosures when users are interacting with the technology, including separate reminders that they are conversing with non-humans and non-professionals.

Stakeholders weigh in

The Trump administration's preemption goals have raised questions about the fate of state-level digital governance laws that cover areas of AI. Blackburn's proposal addresses preemption in different ways, but ZwillGen Director of AI Division Brenda Leong, AIGP, CIPP/US, told the IAPP she does not see the bill impacting states' prior or future work on AI bias and automated decision-making.

"The full bill's general preemption provision in Section 1701 broadly preserves all 'generally applicable' state and local AI laws," she said. "State or local bias audit requirements, automated decision-making obligations, transparency requirements, and algorithmic accountability frameworks would likely survive, so companies operating in states like Colorado, Illinois and New York should expect those regimes to remain in force even if this legislation passes, and the door seems to remain open for state action."

Leong also called attention to the potential for "an extraordinary federal 'ask'" with covered entities deploying "advanced artificial intelligence systems" being left open to potential enforcer requests for code, training data, model weights and more.

"No U.S. regulatory regime has ever conditioned the right to operate on surrendering your entire intellectual property to a government agency on demand — not in pharmaceuticals, not in defense, not in finance," she said, noting those potential requests raise "profound constitutional questions about regulatory takings, due process and controls on government use and profit from this information."

On the general safety premise of the bill, Electronic Privacy Information Center Senior Counsel Calli Schroder told the IAPP the framework "suffers from trying to appeal to both the president and those concerned with AI's demonstrable harms."

"By attempting to cover so many parts of a broad-reaching technology at once, it fails to meaningfully address AI's problems and instead enshrines industry interests," she added.

Computer & Communications Industry Association Vice President of Federal Affairs Brian McMillan did not indicate a particular stance on Blackburn's bill, but noted CCIA supports legislation that "sets the global stage for AI leadership."

"While youth safety and transparency are important shared goals, unworkable provisions that unnecessarily hinder innovation or raise serious constitutional questions are fundamentally at odds with an approach that is primarily designed to promote the development and deployment of cutting-edge AI technologies," McMillan told the IAPP.



Tags:

AI and machine learning, Children's privacy and safety, Intellectual property, Law and regulation, U.S. federal regulation, AI governance
