ANALYSIS

AI, identity and the limits of consent: Why child protection must begin upstream

When applied to children interacting with AI-driven systems, consent-based governance reveals a critical limitation, writes Sandor Szabo.


Contributors:

Sandor Szabo

CIPP/US

CEO and founder

EthyicaAI

Modern privacy and artificial intelligence governance frameworks are built on principles such as transparency, accountability, proportionality and respect for individual rights. In many regulatory contexts, consent remains a central mechanism for operationalizing those principles, allowing individuals to understand how their data is used and to exercise control over that use.

When applied to children interacting with AI-driven systems, however, consent-based governance reveals a critical limitation. Identity formation begins long before meaningful consent is even possible.

As AI systems increasingly shape the environments where children learn, socialize and explore the world, influence does not follow choice. It precedes it. This distinction exposes a governance gap that existing privacy and AI frameworks are not yet fully equipped to address.

From information systems to formative environments

In the past, AI systems were understood primarily as tools that processed information or delivered discrete outputs. Today, recommendation engines, personalization algorithms and engagement-optimization models function more like environments than applications. They shape what is visible, what is rewarded and what feels normal.

For adults, these systems tend to influence preferences and behavior. For children, the impact is deeper. Repetition, reinforcement and algorithmic feedback loops contribute to how values, aspirations and identity itself begin to take shape.

Children do not interact with AI as fully autonomous data subjects capable of contextualizing influence or recognizing persuasion. Instead, they absorb patterns. Over time, what is repeatedly surfaced, rewarded or normalized becomes part of the world they understand. In this way, AI moves from being a content delivery mechanism to an active participant in identity formation.

Why notice and choice arrive too late

Privacy professionals have long acknowledged that consent is not always sufficient as a safeguard, particularly for vulnerable populations. Laws such as the EU General Data Protection Regulation emphasize fairness, necessity and heightened protections for children. Even so, many governance mechanisms continue to rely on notice-and-choice models that assume a level of cognitive and emotional maturity children do not yet possess.

For children, AI-mediated interactions are rarely experienced as discrete decision points. They occur continuously, often passively and within social or entertainment contexts. Influence accumulates gradually, without a clear moment at which consent can meaningfully intervene.

By the time parental permissions are granted, transparency notices are read or enforcement actions occur, formative effects may already be embedded. Governance that activates only after exposure struggles to address influence that operates through duration rather than transaction.

Identity as an upstream governance concern

Much of today's AI governance discourse focuses on downstream risks such as bias, discrimination, misinformation and unsafe outputs. These concerns are real and demand regulatory attention. For children, however, an earlier governance surface exists.

Identity develops through reinforcement rather than deliberation. AI systems designed to optimize engagement inevitably participate in that process, even when they are not explicitly designed to influence identity. When governance frameworks concentrate solely on outputs and outcomes, they risk overlooking the conditions under which influence is produced.

This creates a blind spot. Harm may occur before a child has the capacity to recognize, resist or contextualize the forces shaping their perceptions of themselves and the world around them.

Pre-consent harm and developmental vulnerability

To address this gap, it is useful to name a category of risk that sits upstream of traditional consent-based protections. Pre-consent harm refers to harm that occurs before an individual can meaningfully understand or resist influence.

Children exist almost entirely within this zone. Pre-consent harm does not depend on unlawful data processing or overt misuse. It can arise when systems normalize distorted representations of success or authority, reinforce identity through engagement metrics or reward behavior without proportional context or consequence.

Importantly, such harm may occur even when systems are technically compliant with existing privacy requirements. Legal compliance does not necessarily equate to developmental protection.

Why diagnosis and enforcement are insufficient

Rising rates of anxiety, attention disorders and behavioral challenges among children are often discussed in clinical or educational terms. While medical and therapeutic interventions may be necessary, diagnosis alone cannot explain the broader pattern.

Children today navigate environments shaped by economic stress, reduced adult mediation, persistent digital engagement and algorithmic reinforcement. What appears as individual pathology may instead reflect adaptive responses to dysregulated systems.

Governance approaches that focus solely on enforcement or remediation risk treating symptoms while leaving underlying structural influences unexamined. When harm is framed exclusively as individual failure, systemic accountability remains elusive.

Implications for privacy and AI governance

Recognizing identity formation as an upstream governance concern does not undermine existing privacy law. Rather, it clarifies where protection must begin.

For regulators and practitioners, this perspective suggests the need to evaluate cumulative and developmental effects, not just isolated processing events. It encourages impact assessments that account for duration, reinforcement and vulnerability, and it supports designing AI systems with developmental safeguards rather than relying solely on consent mechanisms.

Governance that intervenes only after identity has already been shaped will remain limited to remediation.

Governance must start upstream

Before regulating AI systems, governance must first ask who those systems are shaping. Before auditing outputs, it should consider what is being normalized through repeated exposure. And before focusing solely on data protection, it must recognize its role in protecting identity.

Children cannot opt out of the environments adults create for them. When governance begins only after AI systems are deployed, it becomes reactive by design — focused on reconstruction rather than prevention. Meaningful AI governance must start upstream, where identity is still forming and influence first takes hold.



Tags:

AI and machine learning, Children's privacy and safety, Data security, Strategy and governance
