OPINION

Power in the margins: Why diversity is foundational to privacy excellence


Contributors:

Drew Bjerken

CIPP/E, CIPM, CIPT, FIP

Chief Privacy Officer, Global Privacy Office

Marriott Vacations Worldwide

Editor's note

This article is the first in a series, Power in the Margins, authored by former and existing members of the IAPP's Diversity in Privacy Advisory Board. The series explores how diversity, equity and inclusion intersect with the privacy profession and why those considerations matter as the profession continues to mature.

Every privacy decision is a choice about what data to collect, how long to keep it, which trade-offs to privilege and whose risks to take seriously. These choices are never neutral. When the people making them share similar backgrounds, experiences and assumptions, the result is often a narrow view of risk and fairness. 

A homogeneous team, despite its best intentions, will produce homogeneous risk models.

Diversity is not just a moral imperative or a staffing metric; it is a structural safeguard for privacy excellence. A diverse team is a living control: it widens what is seen, debated, tested and ultimately protected. Privacy programs with diversity embedded into the architecture, from governance and design to vendor oversight and incident response, are better equipped to detect risk early, mitigate harm effectively and earn durable trust.

Perhaps every privacy leader knows the moment when a design seems sound until someone at the edge of the conversation points out what everyone else missed. Maybe it is the engineer who notices that a "gender" field hard codes a binary. It might be the frontline associate who explains why an address verification step will fail for people without stable housing. Or it could be the translator who shows how a "plain language" notice becomes opaque when carried across borders. 

None of these observations are exotic; they are obvious, but only to the person who lives near the edge case. Surfacing them depends on voices that are too often adjacent to, rather than centered in, privacy work.

When diversity is woven into the architecture of a privacy program — its councils, impact assessments, product reviews, vendor oversight and incident playbooks — quality improves, reputational exposure shrinks and the public's trust becomes sturdier.

Beyond compliance — The promise of privacy

Fairness isn't a feature; it's a foundation. Privacy's promise is not only legal compliance but also fair processes and equitable outcomes. Diversity is therefore not an accessory to privacy programs; it is their structural backbone. When diverse voices are embedded across governance, design, operations and oversight, organizations can detect risk earlier, mitigate harm more effectively and earn durable trust. 

Fairness in privacy is partly legal and largely operational. It is the discipline of asking, early and often: "For whom might this fail?" Diversity answers that question in three reinforcing ways. First, diverse teams expand the field of view, because people who have experienced misclassification, exclusionary design or translation loss are faster to name those risks before they ossify into features. Second, diversity rebalances evidence; in governance forums, a plurality of lived experiences tempers optimism bias and groupthink, yielding more accurate impact assessments and release decisions. Third, diversity improves remedies; when harm occurs, people who understand cultural nuance and practical constraints design responses that feel like help rather than hoops.

The arsenal of lenses — How diversity strengthens privacy

A team's diversity is a program's superpower. Diversity is not a headcount; it is a repertoire of lenses. Privacy problems straddle law, engineering, product, design, security, operations and lived experience. What diverse teams contribute is range: a wider field of view, different heuristics for spotting risk and an instinct to ask how a design behaves for people nowhere near the median. 

That range creates generative friction; the constructive tension that slows premature consensus, forces evidence over assumption and improves the fidelity of decisions about data and people. When people who have navigated exclusion, translation loss, inaccessible systems or stigma sit at the design table, the conversation changes from "does this work" to "for whom does this fail, and what would fairness require here?" 

Over time, that shift becomes program muscle memory. The facets of diversity that matter in practice are multiple and mutually reinforcing.

The edge-case detectives: Identity and lived experience. Gender-diverse and racially and ethnically diverse practitioners frequently surface harms others overlook, such as misgendering, overbroad location tracking, facial recognition error rates or benefits portals that inadvertently expose sensitive identities. Independent testing, such as the U.S. National Institute of Standards and Technology's Face Recognition Vendor Test, has shown significant demographic differentials in facial recognition performance across race and sex, underscoring why representative testing and review matter.

The creative think-tank: Cognitive and educational diversity. Mixed teams of lawyers, engineers, product managers, designers, security analysts and ethicists pressure-test solutions from different angles. Attorneys sharpen legality and duty, engineers test feasibility and designers keep comprehension and behavior change in view. Ethicists frame harms the law has not yet named. Neurodiversity adds distinctive strengths in pattern recognition and visual search that can benefit log analysis, incident triage and audit.

The global translators: Language and culture. Multilingual and cross-cultural experience prevents "plain English" from becoming "unclear elsewhere." Research such as "Defining Privacy: How Users Interpret Technical Terms in Privacy Policies" shows privacy policies are often difficult to read and technical terms are widely misunderstood, which affects user comfort and the quality of consent.

The reality checkers: Socioeconomic and generational perspective. Team members who have navigated low-bandwidth devices, shared accounts or address instability catch failure modes in identity verification and self-service rights processes. A multigenerational team balances default privacy expectations with usability, avoiding designs that exclude by assumption.

The intersectional insight: Seeing the whole person. People do not live one category at a time. Intersectional analysis prevents single-axis solutions by revealing compound risks, such as those facing a neurodivergent LGBTQ+ person seeking confidentiality in a benefits process. Intersectionality is the connective tissue that keeps programs from flattening complex realities into simplistic personas.

From conviction to action — Building a program of fairness

A program grounded in diversity is not a set of inspirational posters; it is a set of structures that make fairness routine.

Governance that reflects reality. Privacy councils and assessment forums should be staffed to reflect the communities affected by decisions. Their charters can explicitly tie diversity to process fairness and trust, and include a "fairness escalation" path when mitigations appear to burden vulnerable groups.

Risk assessment that sees the whole. Impact assessments should ask about disparate impact, accessibility, translation loss and stigma. Representative testing becomes a release criterion. When perfect representativeness is not feasible, limitations are documented and mitigations are more likely to be funded rather than ignored. Emerging governance frameworks, like NIST's AI Risk Management Framework, likewise position fairness and harmful-bias management as core attributes of trustworthy systems.

Design that respects dignity. Notices, consent flows and data subject rights mechanisms are usable only if they are understandable and accessible. This requires multilingual support, assistive-technology compatibility, viable alternatives to exclusionary identity checks, and routes for people who cannot or will not use a portal.

Life cycle controls that travel. Procurement should evaluate whether a vendor's practices including accessibility, localization and representative testing match organizational values and risk appetite. Contracts can encode those expectations, and monitoring can confirm them in practice.

Incident response that recognizes real harm. Harm is not limited to credential loss. For some communities it includes social and physical safety, employment risk or reputational damage. Response teams with diverse composition anticipate those impacts, communicate credibly and offer remedies that protect rather than expose.

Measurement that drives learning. Equity-aware metrics such as comprehension rates, data subject request completion without abandonment, error distribution across cohorts and accessibility conformance turn intentions into accountable practice. Publishing lessons learned, even internally, normalizes improvement and attracts participation.

A stronger profession, a better world

Privacy only keeps its promises when it keeps them for everyone. Programs grounded in diversity are better at spotting harm, fairer in weighing trade-offs and more effective at delivering redress that feels like justice. 

Professionally, they raise the standard of practice and attract talent that might otherwise remain at the margins. Socially, they help ensure that the tools we build — consent systems, identity checks, data-sharing mechanisms, etc. — do not reproduce the very exclusions privacy is meant to resist. 

The work is pragmatic: seat more voices at the table, update the questions we ask, instrument the journeys we design, and hold ourselves accountable for outcomes. The result is not only stronger privacy programs, it is a stronger, more trusted industry and, simply, a better world.

This article was written in collaboration with the IAPP Diversity in Privacy Advisory Board. 


