Published Date: 7/8/2025
Ofcom is pushing forward with its 'Year of Action' by introducing a new set of proposals to strengthen its Codes of Practice. The UK regulator has launched a consultation on the latest document, inviting stakeholders to provide feedback by October 20, 2025. This move comes as part of a broader effort to address growing concerns about online safety, particularly in the context of livestreaming and the spread of illegal content.

The proposed measures focus on reducing the distribution of harmful material by upgrading recommender systems and crisis response protocols. Ofcom also aims to expand the use of proactive technologies, such as automated detection of child sexual abuse material (CSAM), deepfakes, and suicide-related content, to block illegal images before they reach users. A key emphasis is placed on strengthening child protections, especially through the implementation of 'highly effective age assurance' to prevent grooming in user-to-user services.

One of the main concerns highlighted by Ofcom is the rapid spread of illegal content, which can cause widespread harm during crises. For example, the violent riots following the Southport murders last year demonstrated how quickly harmful content can go viral, exacerbating real-world consequences. 'Recommender systems can worsen this issue,' the regulator noted in a blog post. To combat this, Ofcom is urging platforms to establish systems that let users report livestreams depicting imminent physical harm, and to ensure 24/7 human moderation to address threats in real time.

Grooming remains a critical issue, and Ofcom is advocating for the use of age verification or age estimation technologies to safeguard minors. While existing codes already require platforms to protect children from grooming, the new guidance emphasizes the importance of robust age checks to underpin these measures. 'Platforms should use these tools to prevent harmful interactions in livestreams,' the regulator stated.
The proposal has significant implications for platforms like Twitch, which are now part of the broader age verification debate. Other sectors, including social media, video gaming hubs, and adult content providers, are also caught in this regulatory shift. Ofcom's Online Safety Group Director, Oliver Griffiths, emphasized the regulator's commitment to holding platforms accountable. 'We're always looking for ways to make life safer online,' he said, adding that the proposals aim to address evolving technological challenges and emerging harms.

The full proposal is available for download on Ofcom's website, and the regulator is preparing to enforce age assurance requirements under its children's code starting July 25. This comes as part of a larger push to ensure that tech firms prioritize user safety and comply with updated regulations. The move reflects a growing global trend toward stricter online safety measures, particularly in the wake of increasing concerns about misinformation, cyberbullying, and exposure to harmful content.

As the consultation period progresses, stakeholders will have the opportunity to shape the final version of these proposals. Ofcom's approach highlights the need for a balance between innovation and safety, ensuring that platforms can thrive while protecting vulnerable users. The focus on age verification and proactive technologies underscores the regulator's determination to create a safer digital environment for all.

The proposed measures also address the unique challenges posed by livestreaming, where content is live and often unfiltered. Unlike pre-recorded material, livestreams can spread rapidly, making it crucial for platforms to have real-time monitoring and response systems. Ofcom's emphasis on human moderators and automated tools aims to create a layered defense against harmful content, ensuring that users are protected without stifling free expression.

In addition to technical solutions, the proposal calls for greater transparency from platforms. Ofcom is encouraging companies to disclose how they detect and remove harmful content, as well as the effectiveness of their age verification systems. This transparency is seen as essential for building public trust and ensuring that regulatory requirements are met.

The regulatory landscape is evolving rapidly, and Ofcom's proposals reflect the need for continuous adaptation. As new threats emerge, such as deepfakes and AI-generated content, the regulator is positioning itself to stay ahead of the curve. By setting clear standards and enforcing them rigorously, Ofcom aims to create a safer online ecosystem for children and adults alike.

Overall, the new measures mark a significant step forward in the fight against online harms. With a focus on age verification, proactive technologies, and real-time monitoring, Ofcom is taking a comprehensive approach to safeguarding users. The success of these initiatives will depend on the cooperation of platforms, regulators, and the public, all working together to create a safer digital world.
Q: What is Ofcom's main goal with the new proposals?
A: Ofcom aims to enhance online safety by implementing advanced age verification systems for livestreams to protect children from harmful content and grooming.
Q: How does the proposal address the spread of illegal content?
A: The proposal includes upgrades to recommender systems, proactive technologies like automated CSAM detection, and real-time moderation to curb the rapid spread of harmful material.
Q: What is the deadline for stakeholders to provide feedback?
A: Stakeholders have until October 20, 2025, to respond to Ofcom's consultation on the new measures.
Q: How will age verification impact platforms like Twitch?
A: Platforms such as Twitch will need to adopt robust age checks to comply with Ofcom's guidelines, joining other sectors in the broader age verification debate.
Q: What role do human moderators play in the new regulations?
A: Human moderators must be available 24/7 to review content and take action in real time, especially when users report livestreams involving imminent physical harm.