IAPP Global Summit 2026: FTC Commissioner Meador stresses agency preference for 'case-by-case' enforcement

The agency does not plan to roll out prescriptive rules to address digital issues, instead planning to tackle its agenda as a traditional law enforcer.


Contributors:

Joe Duball

News Editor

IAPP

U.S. President Donald Trump's digital policy agenda during his second presidential term has placed a great deal of focus on removing innovation and competition barriers. In line with that objective, the Federal Trade Commission under Chair Andrew Ferguson has adopted an enforcement strategy that positions the agency as a "cop on the beat" rather than a rulemaking authority.

FTC Commissioner Mark Meador expanded on the agency's approach and where its current priorities lie during a fireside chat with IAPP Vice President and Chief Knowledge Officer Caitlin Fennessy, CIPP/US, at the IAPP Global Summit 2026 in Washington, D.C. Meador, speaking on his own behalf, explicitly stated the current FTC administration will not promulgate "ex ante regulations to set rules of the road for everybody," stressing the agency is "not looking to step in and tell companies how to run their business."

Meador's enforcement comments came in various contexts. He raised the "case-by-case" plan in response to specific questions from Fennessy regarding the potential for "more prescriptive security standards and expectations" and the FTC's "deception prong of its authority in a world of AI-generated content."

"We're approaching this as enforcers who, again, are trying to spot harm, address it, prevent it from occurring, and remedy it for the injured consumers as much as we can," Meador said.

The agency's approach was on display in its OkCupid settlement, announced hours before Meador's chat.

In a federal complaint, the FTC alleged the dating app and its parent company, Match Group Americas, shared user data with an unrelated third party without consent while taking "extensive steps" to conceal the sharing since 2014. OkCupid and Match Group are "prohibited from misrepresenting or assisting others in misrepresenting" collection practices and privacy controls moving forward, but there were no further fines or corrective measures included in the settlement.

"The FTC enforces the privacy promises that companies make," FTC Bureau of Consumer Protection Director Christopher Mufarrige said in a statement. "We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through — even if that means we have to enforce our Civil Investigative Demands in court."

Meador did not immediately comment on the new settlement when asked by the IAPP.

The FTC's current enforcement posture will continue to include "evolving" guidance, according to Meador, who noted enforcement actions will continue to act as notices to the marketplace. He also indicated the rolling nature of new and updated agency guidance is nonpartisan and more a product of how "these markets are always evolving."

"You're going to have new fact patterns, new problems that we hadn't thought of even a year ago," he added. "And when those arise, that requires us to figure out how the law adapts to that new situation."

On the radar

Age verification is front and center across global jurisdictions, including the U.S. The issue is top of mind at the FTC, which held an age assurance workshop in January that helped inform a recent policy statement outlining how the agency will not enforce certain elements of the Children's Online Privacy Protection Act to allow organizations to verify users' ages without obtaining parental consent.

Meador views adult websites as "low-hanging fruit" in the context of age verification, noting those platforms need to ensure minors are not accessing illicit content. Beyond those platforms, he said content "falls on a spectrum," which is why there are so many conversations around where age verification should be required.

"Admittedly, I think these are sort of shills for the tech companies," Meador said. "They will throw up their hands and say, 'How are we supposed to make heads or tails of (a user's age)? This is so difficult.' I find that kind of silly."

Meador pointed to societal norms about verification as the barometer, using identification requirements for alcohol purchases and R-rated films as examples. "Nobody objects to that as an invasion of privacy," he added.

In a separate Summit session, Annie Chiang, an attorney advisor to FTC Chair Ferguson, alluded to upcoming announcements on agency actions around age assurance, but she did not specify what types of actions are in the works.

On the theme of global children's online safety, Meador said the growing exploration of social media bans around the world is warranted; however, the FTC will not act unless Congress passes a law that gives the agency authority.

"So as a parent whose job it is to protect your child ... you want to have all the tools in your toolkit to protect them," he said. "To make sure that if you miss something, not through negligence but because stuff happens, you want to know there's a backstop. I think that's why we've seen this sort of movement across the state, federal and international landscape."

The FTC's ongoing artificial intelligence enforcement has mostly pertained to companies' misrepresentations about their AI use. The agency is working under the Trump administration's AI Action Plan and its innovation goals, which led to a prior enforcement action against Rytr being reopened and set aside.

The handling of deception claims around AI-generated content is an area Meador and the agency expect to continue to tackle objectively.

"The key concept for me is what is a reasonable user's expectation. What do we actually expect to see and what representations were made at the outset?" He said. "That's going to be very fact-specific, which is a very unsatisfying lawyerly answer, but it's true. In order to get it right, we can't be painting with broad brushstrokes."

Meador said AI should not be misconstrued as "a monolith" and current model competition, particularly around training data, is "pretty vibrant." He indicated the agency could intervene more as new training data sources diminish.

The area where the FTC is most likely to act on AI moving forward is its application to scams. Meador said the use of AI tools to "turbocharge" impersonations is becoming commonplace.

"It's lowering the barriers to entry into scamming," he said. "That's probably the first place we're seeing it. ... You can use that in a scam to deceive someone, and there it's less about deceiving them as to the nature of whether that content is real."



Tags:

AI and machine learning, Enforcement, Identity and verification, Children's privacy and safety, AI governance, Privacy
