Published Date: 11/3/2025
The algorithms that social media platforms use to determine what is presented to young UK users in their feeds will soon be subject to auditing under the Online Safety Act (OSA). Ofcom, the UK's communications regulator, has announced that it will take enforcement action against platforms that fail to prove their algorithms prevent children under 18 from being exposed to restricted content.
Ofcom Chief Executive Melanie Dawes told the Financial Times that her agency will rigorously enforce the OSA to ensure that social media platforms like YouTube, Roblox, and Facebook do not algorithmically deliver adult content to young users. This move is part of a broader effort to enhance online safety and protect children from harmful material.
The regulator has also discussed how the OSA applies to chatbots and generative AI tools. OpenAI has acknowledged that the OSA applies to ChatGPT, while X, which offers the Grok chatbot, has not. This highlights the ongoing challenge of ensuring that all digital tools, including AI-driven chatbots, comply with the OSA's stringent standards.
Lawmakers in the EU and Australia are currently exploring similar measures to restrict young people's access to social media. These efforts reflect a growing global consensus that more must be done to protect children online. In total, Ofcom is currently investigating 69 possible violations of the OSA and its age verification rules, according to recent reports.
Dawes also noted that VPN usage has fallen significantly since an initial surge following the OSA's launch, when some users sought to circumvent age verification measures. Regulators see this decline as a sign that the OSA's measures are taking hold.
Despite these efforts, some lawmakers and civil society groups have expressed concerns about the OSA. Liberal Democrat Lord Timothy Clement-Jones introduced a "motion to regret" in the upper chamber last week, arguing that the OSA imposes "a ceiling, not a floor" on online child protection. He was joined by legislators from other parties who argued that the codes are not specific enough to meet the differing needs of children at different ages.
Critics also pointed out that live-streaming and algorithms that promote harmful but legal content need more scrutiny. These concerns highlight the ongoing debate about how best to balance the need for online safety with the rights and freedoms of internet users.
The implementation of the OSA and similar measures in other countries is a significant step towards creating a safer online environment for young users. As technology continues to evolve, regulatory bodies like Ofcom will play a crucial role in ensuring that these protections keep pace with the changing digital landscape.
Q: What is the Online Safety Act (OSA)?
A: The Online Safety Act is a UK law designed to protect children and young people from harmful content online. It requires social media platforms and other digital services to implement robust measures to prevent the exposure of young users to inappropriate or harmful content.
Q: How will Ofcom enforce the OSA?
A: Ofcom will enforce the OSA by auditing the algorithms of social media platforms to ensure they do not expose young users to restricted content. The regulator will also investigate and take action against platforms that fail to comply with the OSA's requirements.
Q: What are the concerns about the OSA?
A: Some lawmakers and civil society groups have raised concerns that the OSA may not be specific enough to meet the varying needs of children at different ages. There are also concerns about the regulation of live-streaming and algorithms that promote harmful but legal content.
Q: How does the OSA apply to chatbots and AI tools?
A: The OSA applies to chatbots and AI tools, requiring them to comply with the same standards as other digital services. OpenAI has acknowledged the applicability of the OSA to ChatGPT, while X has not yet done so for Grok.
Q: What impact has the OSA had on VPN usage?
A: The OSA has led to a significant decrease in the use of VPNs, which were initially used to circumvent age verification measures. This reduction is a positive sign for regulators aiming to protect young users online.