Aligning with the global movement of data protection authorities, Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados (ANPD), in conjunction with the Federal Prosecution Service, the Ministério Público Federal, and the National Consumer Secretariat, Senacon, issued a joint recommendation to social media company X.
The objective is to halt the generation and circulation of nonconsensual sexually explicit synthetic content — known as deepfakes — produced by X's chatbot, Grok.
The 20 Jan. measure follows reports and complaints that the tool was being used to create, without consent, erotic and sexualized images of real women and minors by manipulating legitimate photographs. X has since indicated that Grok will block the creation of nude images of real people.
At this initial stage, the ANPD requested that the platform implement several measures to achieve compliance:
- Immediate cessation of the generation of any sexualized or eroticized content of minors, as well as of adults without authorization.
- Creation, within 30 days, of effective technical procedures to identify, review and remove content of this type already available on the platform.
- Implementation of accessible mechanisms for data subjects to exercise their rights and report abuse.
- Drafting of a data protection impact assessment specific to Grok's synthetic content generation activities.
- Immediate and permanent suspension of accounts involved in the production or exclusive sharing of such media.
Additionally, the ANPD opened an administrative proceeding to examine the case further and evaluate in more detail any potential violations of Brazil's General Data Protection Law, the LGPD.
Potential broader impacts
Beyond the immediate measures, what truly draws attention — and may have impacts beyond the Grok case — is Technical Note No. 1/2026 from the ANPD's General Coordination of Inspection, which grounded these measures and puts forward several interpretations regarding the practical application of the LGPD.
The first highlight is the ANPD's assertion that synthetic content generated by generative artificial intelligence systems, when it refers to identified or identifiable natural persons, must be considered personal data. Consequently, the generation of such content and its subsequent use constitute a personal data processing activity, subject to all obligations foreseen in the LGPD.
Another interesting point was the indication that any personal data processing occurring within the platform that is not aligned with its own rules, as made explicit through its terms and conditions of use, exceeds the data subject's legitimate expectation and therefore violates the principle of good faith provided for in Article 6 of the LGPD.
However, two statements made only briefly and superficially in the technical note could have significant impacts on a series of other practical cases beyond Grok.
The first states that "when such activity (generating synthetic content of identified or identifiable persons) implies the use of biometric data, the resulting synthetic content will assume the qualification of sensitive personal data."
The ANPD itself, in its Technological Radar on biometrics and facial recognition, states, "biometrics is the technical analysis, performed by mathematical and statistical means, of the physical/physiological or behavioral characteristics of an individual," and later complements that the use of this technology is to "recognize a person through their physiological characteristics."
What occurs in Grok when generating a new image from a pre-existing one is the "tokenization" of the original image, which is provided as context to the model; the model then generates a new image based on predictive tokens. While these processes are complex, and the way the generated result is produced is not yet fully understood, there does not appear to be any analysis of the individual's characteristics, much less an attempt to recognize the individual through their physiological characteristics.
Returning to the text of the technical note, the second point deserving attention is the ANPD's statement that the "generation of sexualized or eroticized synthetic content affronts various provisions of the LGPD," including the absence of a legal basis for the processing of sensitive personal data.
It is unclear why this data would be sensitive — whether it is because, in the ANPD's view, it involves the use of biometric data, or because the content of the images is sensitive and sexual in nature.
If the first hypothesis prevails, we face the challenges indicated above, especially because it does not appear that Grok or other synthetic image generation AI tools involve the processing of biometric personal data. The second hypothesis is even more complex, as it extends the concept of sensitive personal data or data regarding sex life in a way not foreseen in the LGPD or other regulations that share the same fundamental bases, such as the EU General Data Protection Regulation.
This expansion of the sensitive personal data concept is not new. In Technical Note No. 06/2023/CGTP/ANPD, addressing the analysis of data use in the pharmaceutical retail sector, the ANPD indicated that the mere possibility of inferring information about people's health or sex life from their medication purchase history would be sufficient to consider that history sensitive data.
Similarly, in its 2024 case against Meta, the ANPD indicated that images, videos and audio of social network users, especially when processed by AI systems, could reveal the political, religious, trade union and sexual affiliations of the data subjects and, therefore, should be considered sensitive personal data.
It is important to follow the developments of this case to understand whether the ANPD will confirm this interpretive path, which could expand the concept of sensitive personal data and, in turn, bring significant practical consequences for all data processing agents.
Henrique Fabretti Moraes, CIPP/E, CIPM, CIPT, CDPO/BR, FIP, is a partner at Opice Blum.