The U.K. proposed a slate of new rules and requirements across multiple enforcement tools aimed at strengthening digital protections for children. The crackdown is the latest step the U.K. government is taking to hold technology companies accountable for potential online harms to children's wellbeing.

In a 15 Feb. announcement, U.K. Prime Minister Keir Starmer said, "no platform gets a free pass" when protecting children from potentially addictive design features and risks associated with AI chatbots. 

The U.K. intends to table amendments to the Children's Wellbeing and Schools Bill to bolster protections around the collection of children's personal data, as well as updates to the Crime and Policing Bill to prevent all AI tools from creating harmful or illegal content. The measures build upon the children's safety provisions in the Online Safety Act, which require platforms to prevent children from accessing inappropriate and harmful content by implementing age assurance and safety measures.

The U.K. government noted it expects platforms to take proactive steps to protect children from harmful online interactions and could look to introduce age restrictions for social media platforms. Organizations that do not comply with the measures could face penalties or other enforcement actions.

The flood of child safety activity stems in part from global concerns around alleged nonconsensual explicit deepfake image generation by social platform X's AI chatbot, Grok. The U.K. Information Commissioner's Office and the Office of Communications recently launched separate investigations into the Grok claims.

ICO Executive Director of Regulatory Risk and Innovation William Malcolm said the allegations "raise deeply troubling questions about how people's personal data has been used to generate intimate or sexualized images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this."

To address online risks such as AI deepfakes, Starmer noted the new measures will close loopholes in children's privacy regulations "to protect children's wellbeing and help parents to navigate the minefield of social media."

Social media age restrictions are on the table as part of the U.K.'s safety push. 

The Department for Science, Innovation and Technology and the Department for Education are in the midst of a public consultation to understand children's social media use and related issues. The consultation covers the specifics of a potential ban on minors' accounts, the means for accurate age assurance and whether the digital age of consent is too low.

In the government's 15 Feb. announcement, DSIT Secretary Liz Kendall said the U.K. will "not wait to take the action families need, so we will tighten the rules on AI chatbots, and we are laying the ground so we can act at pace on the results of the consultation on young people and social media."

Ireland's DPC launches Grok investigation

Ireland's Data Protection Commission announced it launched its own investigation into X's compliance with the EU General Data Protection Regulation. 

In a statement, DPC Deputy Commissioner Graham Doyle said the authority has been in contact with X over the AI tool's alleged ability to easily generate nonconsensual explicit images. He noted the DPC has "commenced a large-scale inquiry which will examine (X's) compliance with some of their fundamental obligations under the GDPR in relation to the matters at hand."

The DPC previously probed X's collection of personal data used to train Grok. The investigation aimed to determine if the platform's method of using public social media posts to train the AI bot violated the GDPR. The inquiry ended after X said it would stop collecting EU users' data for training purposes.

The DPC's probe is separate from France's Grok investigation, which focuses on alleged Digital Services Act violations stemming from deepfake generation. French cybercrime prosecutors and Europol recently carried out a raid on X's Paris office while summoning Executive Chair and Chief Technology Officer Elon Musk for questioning over the Grok deepfake allegations.

EDPS offers view on extending interim CSAM rules

European Data Protection Supervisor Wojciech Wiewiórowski issued an opinion on the European Commission's proposed extension of the interim EU data processing regulations to prevent child sexual abuse material. 

The regulations aim to allow companies to use technologies such as AI to identify and report CSAM, providing organizations with a temporary exemption from the ePrivacy Directive's obligation not to process users' personal communications without consent. The EDPS urged the EU to implement stronger safeguards so that online safety measures remain effective while limiting the potential privacy implications of tracking technologies.

"We must ensure no legal vacuum emerges in this fight," Wiewiórowski said in a statement. "This extension is the right moment to address some of the concerns that have been raised also during the discussions around a long-term Regulation, to ensure that scanning is not indiscriminate and that there is always a clear legal basis for the processing of personal data."

The interim measures would be replaced over time by formal CSAM legislation being debated in current trilogue negotiations between EU institutions. It is unclear where discussions are after the Council of the European Union and European Parliament finalized negotiating positions in the second half of 2025.

The member states' position includes required risk assessments and subsequent mitigation measures as well as the introduction of risk categories. Parliament's position aims to bolster punishments for CSAM and "criminalise explicitly the use of artificial intelligence systems 'designed or adapted primarily' for CSA crimes."

Lexie White is a staff writer for the IAPP.