Smart glasses at work: A policy problem hiding in plain sight

AI and privacy governance teams should act now to set clear guardrails around smart glasses before adoption becomes more widespread.

Contributors:
Noga Rosenthal
AIGP, CIPP/E, CIPP/US
General Counsel and Chief Privacy Officer
Ampersand
The first time I heard about someone wearing smart glasses to work was when an elementary school teacher mentioned he wore them in class. I had a vision of a teacher standing before a room of kindergarteners, recording them.
The thought of someone in a position of trust using a recording device that looks like ordinary eyewear in a room full of minors who cannot meaningfully consent to being filmed was mildly horrifying.
I quickly realized that if this was happening in schools, it was probably already happening in other workplaces too. Maybe sporadically, a few employees here, a department there, with some industries seeing far more adoption than others. And most companies still have no policy squarely addressing the use of smart glasses in the workplace, let alone whether sensitive environments like schools warrant a ban altogether.
Smart glasses are not inherently "good" or "bad." Like many workplace technologies, they can be used responsibly or misused. The question before artificial intelligence and privacy governance teams is not whether employees will use them, because they are and they will. The question is whether organizations should act now to set clear guardrails before adoption becomes more widespread.
Why now?
The reason for addressing the risk presented by this technology and building company guardrails now, rather than waiting, is simple: smart glasses are starting to cross the line from novelty to normal. They are becoming more discreet, more capable and more integrated with AI features.
Some models can capture audio and video, take photos and live stream. Others support real-time transcription, translation or AI-assisted extraction of information from what the wearer sees and hears. Smart glasses can create a low-friction pathway for sensitive workplace data to be collected, processed, shared and retained, sometimes outside approved company systems.
Even if a workforce is not broadly using smart glasses today, one person using them in the wrong context can create disproportionate privacy, security and trust fallout among employees or clients.
That is why this policy update belongs on the AI and privacy governance roadmap now.
At a minimum, a smart glasses policy should address five core areas.
Designate recording-free zones
First, companies should set a baseline rule that certain spaces are off-limits for recording. Period.
A policy should explicitly prohibit recording, photographing or livestreaming in restrooms, locker rooms, lactation rooms and any space where individuals have a heightened expectation of privacy. The same restriction should apply to conference rooms or other spaces during confidential meetings, such as leadership sessions, strategic planning or sensitive human resources discussions. Recording should also be prohibited in areas where sensitive data is displayed or discussed, including an executive conference room with whiteboards or screens where proprietary information may be displayed.
Companies may also want to post signage in certain areas stating that recording is prohibited. Signage provides clear notice to employees and visitors and strengthens enforcement of the prohibition if issues arise later.
Establish clear notice and consent requirements
In many countries, and in individual U.S. states, laws require notifying individuals of audio recordings and obtaining their consent. Some jurisdictions require only one-party consent, meaning the person recording may do so without informing others, while others require all-party consent before any recording can begin.
Employees should not be expected to determine, in real time, whether a particular interaction is governed by one-party or all-party consent standards. Instead, companies should adopt a single, workable standard that reduces legal and trust risk across jurisdictions: employees provide notice and ask for consent before recording with smart glasses, whether audio or video, regardless of jurisdiction.
Technology alone will not solve the consent or transparency problem. For example, some smart glasses include privacy-enhancing features like indicator lights or an audio cue like a "shutter snap" when recording is active or if a person is taking a picture. While these can help with transparency, companies should not treat them as consent mechanisms or assume they cannot be bypassed by bad actors. Policies should be designed for the real world, including the possibility of bad actors creating workarounds.
The consequences of getting consent wrong are not theoretical. There are at least two pending class-action lawsuits against AI transcription tool providers around consent practices. One complaint alleges that once the AI tool was connected to a host's calendar, it automatically joined meetings and began recording, transcribing and taking screenshots without providing notice to or obtaining consent from the other meeting participants.
A complainant in another case alleges that, as a participant in a virtual meeting, an AI tool captured her voice and generated a voiceprint used to identify speakers, all without notice, without written disclosure of the purpose or duration of collection, and without obtaining a written release. This adds the complication of obtaining consent before capturing an individual's biometric data.
Though these cases involve virtual meeting AI transcription tools, not smart glasses, the legal theories apply to smart glasses as well. Smart glasses often come with AI capabilities that can generate both voiceprints from audio recordings and facial geometry data from video recordings. Either may qualify as a biometric identifier in states with biometric privacy laws that require written consent or a release.
One person activating a recording tool or smart glasses in a conference room makes everyone else in the room a data subject, requiring notice or consent. The difference with smart glasses is that they add the ability to capture video and physical-world recordings, which expands the risk. Companies developing smart glasses policies should treat these cases as an early warning sign of where litigation is heading.
But how a company would actually operationalize biometric release for smart glasses in a physical workplace is an open question. In a meeting, an individual could technically obtain written release from others, but this seems unrealistic. It's even more unrealistic to obtain written consent if you capture a recording in an informal setting like the hallway or a cafeteria. Companies could include a biometric release in employee onboarding paperwork to partially address this for their own workforce, but that does not cover visitors, clients, vendors or anyone else the wearer encounters.
This is an area where the technology is ahead of any workable compliance model, and governance teams should be honest about that gap rather than papering over it with a policy that cannot be followed in practice.
Protect sensitive and confidential information
Smart glasses can capture video, photos and audio with minimal visibility. As AI capabilities expand, they can also enable information extraction, such as recognizing text from screens or documents. Policies should treat this as a data life cycle problem, providing guardrails not just at the moment data is captured but for what happens afterward: where the data is stored, who can access it, how long it is retained and how it is shared.
For instance, consider a human resources representative who records a review of passports during an employee onboarding meeting. The glasses could extract the passport numbers. Where is that information stored, and is it stored securely? Companies should set tight access controls and retention periods to ensure these files are deleted.
Policies should explicitly tie smart glasses restrictions or limitations to existing company confidentiality, proprietary information and information security policies. They should prohibit recording in meetings or contexts likely to involve highly confidential information, trade secrets or sensitive business strategy.
They should also require employees to "stop recording immediately" if confidential material is inadvertently captured outside company networks and to report the incident through the appropriate channel. The policy should state clearly that unauthorized recording or disclosure of confidential information may result in discipline, up to and including termination, and should specify how long any such recordings are retained.
Where possible, governance teams should also align these rules with broader controls, such as "bring your own device" policies, personal device management and incident response procedures.
Address objections and friction points head-on
This is where policies often break down. Someone objects to being recorded, such as a manager or employee during a difficult performance review, or something as simple happens as an employee recording a one-on-one meeting when there is no need to. A good policy anticipates these situations and gives employees and managers a consistent playbook for handling objections.
The first scenario arises when the company wants to record a meeting and someone objects. In jurisdictions or contexts where all-party consent is required, recording generally cannot proceed without consent. Companies should offer alternatives such as written summaries, a note-taker or an unrecorded meeting. The employee, client or vendor should also be empowered to leave the meeting.
Companies should also be clear that objections will not automatically stop recording in every context. The policy should identify business necessity carve-outs, including situations where the company's need to record takes precedence, such as formal workplace investigations or regulatory compliance documentation. Employees should understand these exceptions exist and that invoking an objection in those contexts will not necessarily result in recording being paused.
When recording does not proceed because of an objection, the company should document what happened, including who objected, what alternative was offered and how the situation was resolved. That record supports consistent enforcement and provides meaningful protection against later claims of selective or disparate treatment.
On the other hand, a company may want to record a meeting even when it is not a business necessity. It may even operate in a jurisdiction where it can record individuals without their consent. Forcing an employee to accept being recorded can create employee relations and trust issues that may outweigh the benefit of recording the meeting. Companies should consider whether the business value of recording outweighs the impact on employee relations and should therefore offer individuals the ability to object to the recording.
The company's goal should be to treat its employees consistently. If managers allow recording in some situations but not others, the policy becomes harder to enforce, and the company's credibility can suffer. To operationalize consistency, companies should create a clear escalation path before conflicts arise. When a recording objection cannot be resolved in the moment, managers should have a named point of contact, such as someone in human resources or legal, who is empowered to make a binding call. Employees and managers should not be left to negotiate this on their own, and the policy should make clear who has final authority.
A different challenge arises when an employee wants to record a meeting and the company objects. Overly broad bans on all workplace recording by a company can create legal and employee relations risks, including in contexts where employees may be documenting unsafe workplace conditions using smart glasses. However, there may be situations where the company does not want meetings recorded, such as legal meetings.
Companies can still restrict recording in specific contexts, such as recording-free zones, confidential meetings, or discussions involving sensitive data or trade secrets. If an employee insists on recording a meeting designated as confidential, the company can pause the meeting and offer an alternative, like written notes, before proceeding.
Finally, when a third party objects, such as a client or vendor, the company policy should default to no recording. The relationship risk is rarely worth the pushback. If recording is truly necessary, the company should explain why and offer alternatives, such as a written summary.
None of this works without trained managers. Companies should train managers on when recording is and is not permitted, how to respond to objections in real time, and when to escalate rather than make a decision unilaterally.
Build in accommodation exceptions
Policies should account for employees who request smart glasses as part of a disability accommodation. Wearable technology is increasingly used as assistive technology, including for hearing assistance, real-time captioning, memory support and other accessibility needs.
A policy should direct employees to human resources, or the appropriate function, to request an accommodation through the interactive process. Like other interactive processes, human resources should evaluate the request individually and document the decision. What is novel is that the human resources team may need to work with IT, security and AI governance to implement safeguards when accommodations are granted, such as restricting storage of the recording to the company network, limiting sharing of the recording and enabling automatic deletion where feasible. They may also want to narrow the use of the recording to what is necessary and clarify that the accommodation does not mean unlimited recording.
Why is a total ban harder than it sounds?
Many companies will instinctively want to ban smart glasses outright. Think back to the teacher wearing smart glasses in a kindergarten classroom. In sensitive environments, such as schools, hospitals or other sites with children, patients or other vulnerable populations, companies may decide a total ban is the right call. In many other workplaces, the answer is more nuanced and should depend on the business model, operating footprint and risk appetite. In practice, banning the use of smart glasses can also be difficult to enforce and, in some cases, may be legally risky.
As these devices become more discreet, managers may not be able to reliably distinguish between ordinary prescription glasses and recording-capable wearables, raising the risk that a blanket ban leads to employees being asked to remove glasses that have no recording capability at all. A full ban may be unenforceable in practice.
A categorical ban can also conflict with disability accommodation obligations if smart glasses are used as assistive technology. A policy that prohibits smart glasses without a clear accommodation pathway is unlikely to hold up in practice.
For many organizations, the stronger approach is to define their risk profile and set guardrails accordingly: strict rules where risk is highest, flexibility where accommodation is required and clear standards so employees and managers are not forced to improvise.
Governance professionals also need to acknowledge that attitudes may change quickly. Today, smart glasses may feel intrusive to some. That reaction will not necessarily last. What once felt uncomfortable can become normalized. As the technology becomes less visible and more integrated into everyday tools, expectations will shift.
That makes early policy-setting critical. Waiting until smart glasses feel normal almost guarantees the devices will already be embedded in daily workflows, making it far harder to impose necessary guardrail restrictions later. Governance teams should plan to revisit and adjust policies as both technology and workplace norms evolve.
Smart glasses may already be in your building. Your policy should be, too.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.