A view from DC: Data pollution and the problem with individual privacy

Observations from the U.S. Federal Trade Commission's latest workshop on the potential injuries and benefits stemming from the data-driven economy.

Published:
Contributors:
Cobun Zweifel-Keegan
CIPP/US, CIPM
Managing Director, D.C.
IAPP
Editor's note
The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains.
In some ways, this week felt like a return to 2017 at the U.S. Federal Trade Commission, as the agency hosted a traditional staff-led workshop titled "Measuring Injuries and Benefits in the Data-Driven Economy."
For the first time since the pandemic, an FTC workshop was refreshingly in-person and open to the public. Like the old days, it was also a decidedly scholarly endeavor, composed of social science paper presentations and discussions led by staff of the Bureau of Economics and the Division of Privacy and Identity Protection.
More explicitly, the focus on injuries and benefits was a direct throwback to the FTC's landmark 2017 workshop on informational injury, convened under acting Chair Maureen Ohlhausen. That workshop was a foundational attempt to categorize qualitatively different types of harm in a post-Spokeo world, helping to refine theories of how financial, medical and reputational harms can all play a role in regulatory interventions for data privacy. Nevertheless, the tone of the workshop was characterized by Ohlhausen's distinctive philosophy of "regulatory humility."
The 2026 update had a more urgent and operational tone, reflecting the rapid growth of the information economy over the past decade. There was plenty to unpack in the economic literature presented, but one thing that stuck with me was a discussion spanning multiple panels on the externalities of data privacy.
Through the economic lens, an externality is a side effect of a transaction that spills over onto people who were not involved in the deal. In a classic negative externality, the buyer and seller agree on a price, but they don't account for the costs imposed on the rest of society, like a factory that produces cheap widgets but poisons the local river. Because the factory owner and the widget buyer don't pay for the water cleanup, the widget is artificially cheap, and the social cost is ignored.
Government interventions are often a means of shifting the costs of externalities toward the parties of the transactions or otherwise restructuring the economic landscape to prevent the negative side effects in the first place. This can be accomplished ex ante through a Pigouvian tax or through strict limitations on certain activities, like a carbon tax or emissions caps. Or it can be accomplished ex post via private liability and regulatory enforcement, which is the area the FTC usually occupies.
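To make the Pigouvian mechanism concrete, here is a stylized back-of-the-envelope calculation using the widget factory from above (the dollar figures are hypothetical illustrations, not drawn from the workshop):

```latex
% Hypothetical widget factory: private vs. social marginal cost
\text{MPC} = \$5 \quad \text{(factory's private cost per widget)}
\text{MEC} = \$2 \quad \text{(river-cleanup cost imposed on third parties per widget)}
\text{MSC} = \text{MPC} + \text{MEC} = \$5 + \$2 = \$7
% A Pigouvian tax set equal to the marginal external cost
t = \text{MEC} = \$2 \quad \Rightarrow \quad \text{producer's effective cost} = \text{MPC} + t = \text{MSC}
```

When the tax equals the marginal external cost, the widget's price rises to reflect its full social cost, and the "artificially cheap" distortion disappears; the externality is said to be internalized.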
The first two panels of the day were structured explicitly to create some dialogue among scholars focused on privacy risks and those who measure the benefits of the algorithmic economy. In this regard, they did not disappoint, and there was even some direct engagement between the separate discussions.
Alessandro Acquisti, a powerhouse privacy scholar at Carnegie Mellon University who often bridges the world of economics and other social sciences, used the workshop as an opportunity to showcase a recent series of experiments he has run on behavioral advertising. In collaboration with other researchers, he has designed studies to better understand whether the advertising ecosystem is designed in a way that maximizes economic welfare, and whether increasing or decreasing information sharing among firms contributes to more economic value.
On the panel, Acquisti focused on how these studies contribute to questions of whether a privacy-invasive advertising ecosystem gets the balance of harms and benefits right, even setting aside questions of individual privacy harm. Some of his results suggest that a high-information-sharing advertising model leaves consumers matched with higher-priced, lower-quality vendors when compared with traditional contextual search advertising. In another forthcoming paper, he builds on this body of research to compare "economic rationales for regulating behavioral ads."
This is where externalities come in. For Acquisti, a privacy externality can take a variety of forms, from information spillovers, where my choice to share my data affects the privacy of others, to downstream market harms like price discrimination and suboptimal purchasing decisions.
In the second panel, University of Chicago Law School's Omri Ben-Shahar argued for a narrower perspective on externalities, one focused on those effects that fall entirely outside of the transaction itself. And though his discussion focused on the overall positive impact of the data economy, he was careful to distance himself from the "do-nothing libertarian Chicago School types." Still, he argued against a general precautionary principle when it comes to privacy regulation. Instead, he said, the paradigm should be focused on public injuries, not private harms.
When it comes to privacy externalities, Ben-Shahar is a strict constructionist. If you get creeped out or price-discriminated against, that's just a bad deal you made. To him, an externality only exists when the data usage harms innocent bystanders or society at large. His paradigmatic example is the Strava case, in which publicly shared running routes of military personnel revealed the locations of secret military bases.
Building on this idea, in his forthcoming book "Why Fear Data," Ben-Shahar argues that our current obsession with individual privacy is not only ineffective but actively harmful. He posits that the most significant risks in the data economy are the externalities that impose real social costs, not individual privacy harms.
The book is an attempt to bring home the lessons from environmental law — which is all about externalities — to the world of data privacy. Just as a factory owner and a consumer can agree to a transaction that poisons the local air for third parties, a user and a platform can agree to a data exchange that creates national security risks or systemic discrimination. Ben-Shahar suggests that we must begin to rethink our model to focus on data pollution, shifting the regulatory toolkit away from individual rights and toward structural interventions.
What could this look like? Perhaps structural restrictions on certain types of data processing or even emissions-style liability for the systemic social debris left behind by algorithmic decision-making. We will have to wait for the book to come out, though you can read an excerpt here.
Of course, a variety of scholars have long critiqued our individualized approach to data privacy, which treats personal data as a private concern between two parties.
Salomé Viljoen and Julie Cohen have each argued convincingly that this model fails because data is fundamentally relational. When I disclose my information, I am contributing to a population-level dataset that predicts and modulates the behavior of everyone who shares some aspect of my data identity. Cohen frames this as a matter of "biopolitical public goods," where privacy is a structural necessity for democratic functioning rather than a personal preference. Viljoen's "A Relational Theory of Data Governance" is a must-read on this topic.
There are plenty of big ideas for enforcers like those at the FTC to learn from, but it is legislators who are ultimately best suited to rethink our legal framework for privacy in ways that respond to collective harms. Though we don't see much in the way of data taxes yet, strict bans and limitations are increasingly showing up in legislative vehicles, at least at the U.S. state level.
Maybe we are collectively working toward internalizing privacy's externalities, but it is a long and iterative process.
Please send feedback, updates and relational datasets to cobun@iapp.org.
This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.


