Published Date: 9/9/2025
The UK government continues to build out the Online Safety Act (OSA), this week announcing tighter legal requirements for platforms to locate and remove material that encourages or assists serious self-harm. Under the change, content that was previously covered only by age assurance rules is also being made illegal, and platforms will be responsible for intercepting and removing it before it reaches children or adult users.
The move is expected to be the first in a series of amendments toughening the OSA under new Technology Secretary Liz Kendall. Ofcom is expected to publish a register of regulated services soon, MLex reports, now that the government’s approach to categorization has been upheld in court.
Self-harm content to be reclassified as a priority offense
A release from the Department for Science, Innovation and Technology (DSIT) says the OSA will be amended to classify self-harm content as a “priority offense.” It specifies that, while the measure is partly to protect children from content that promotes suicide, eating disorders, and “online challenges or hoaxes that may encourage someone to take part in an activity that could cause them harm,” it also aims to help adults with mental health challenges avoid bogus medical advice and potential triggers.
“Vile content that promotes self-harm continues to be pushed on social media and can mean potentially heart-wrenching consequences for families across the country,” says Kendall – apparently unafraid to pick up where outgoing Tech Secretary Peter Kyle left off. “Our enhanced protections will make clear to social media companies that taking immediate steps to keep users safe from toxic material that could be the difference between life and death is not an option, but the law.”
DSIT says the change imposes the “strongest possible legal protections, compelling platforms to use cutting-edge technology to actively seek out and eliminate this content before it can reach users and cause irreparable harm.” This suggests a market need for content moderation software that can automate the detection and takedown process.
Verifymy says tech is ready, if Ofcom is willing to act
Andy Lulham, COO at content-moderation provider Verifymy, says “the good news is that the technology is here” – even if it needs a little help from its human friends.
“Today’s content moderation technology is sophisticated and improving every day. AI can detect potentially harmful content and flag it in near real time, making pre-upload checks a distinct possibility. But content moderation remains a complex and nuanced process. This is why the most effective approach combines advanced technology with human expertise, ensuring speed, scalability, and sensitivity.”
“The UK government is pressing ahead with a preventative, proactive model of content moderation that will be broadly welcomed, and will reduce the amount of harmful content online. Now, Ofcom must ensure platforms live up to their duty of care and prioritize the safety of their users accordingly.”
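To make the hybrid approach Lulham describes more concrete, the sketch below shows what a pre-upload check that routes borderline posts to human reviewers could look like. It is purely illustrative: the classify_self_harm_risk function, the thresholds, and the keyword matching are placeholder assumptions standing in for a real detection model, not any vendor's actual system or an OSA requirement.

```python
# Minimal sketch of a hybrid "AI plus human review" pre-upload pipeline.
# Everything here is hypothetical: the classifier is a stand-in for a real
# self-harm detection model, and the thresholds are illustrative only.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"            # publish immediately
    HUMAN_REVIEW = "review"    # hold for a trained moderator
    BLOCK = "block"            # reject before the post goes live


@dataclass
class ModerationResult:
    decision: Decision
    risk_score: float
    reason: str


def classify_self_harm_risk(text: str) -> float:
    """Placeholder for an AI classifier. A real system would call a trained
    model; simple keyword matching keeps this sketch self-contained."""
    risky_terms = ("self-harm", "suicide", "harmful challenge")
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))


def pre_upload_check(text: str,
                     block_threshold: float = 0.8,
                     review_threshold: float = 0.3) -> ModerationResult:
    """Run the AI check before publication; route borderline cases to humans."""
    score = classify_self_harm_risk(text)
    if score >= block_threshold:
        return ModerationResult(Decision.BLOCK, score, "high-confidence match")
    if score >= review_threshold:
        return ModerationResult(Decision.HUMAN_REVIEW, score, "borderline, needs a moderator")
    return ModerationResult(Decision.ALLOW, score, "no signal detected")


if __name__ == "__main__":
    for post in ("Here is my holiday photo",
                 "Join this harmful challenge",
                 "Content promoting self-harm and suicide"):
        result = pre_upload_check(post)
        print(f"{result.decision.value:>6}  score={result.risk_score:.2f}  {post!r}")
```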
Q: What is the Online Safety Act (OSA)?
A: The Online Safety Act (OSA) is a UK law designed to protect internet users, especially children, from harmful content online. It imposes legal requirements on platforms to detect and remove harmful content.
Q: What are the new requirements for platforms under the OSA?
A: Platforms are now required to actively seek out and remove content that encourages or assists serious self-harm, classifying it as a priority offense. This includes content that promotes suicide, eating disorders, and harmful online challenges.
Q: Who is Liz Kendall and what is her role?
A: Liz Kendall is the new UK Technology Secretary. She is responsible for implementing and enforcing the Online Safety Act, ensuring that online platforms meet the legal requirements to protect users from harmful content.
Q: What is the role of Ofcom in this context?
A: Ofcom is the UK’s communications regulator. It will be responsible for ensuring that platforms comply with the new requirements under the OSA and maintaining a register of regulated services.
Q: How does technology assist in content moderation?
A: Advanced AI and content moderation technology can detect harmful content in near real time, flagging it for review. Combining this technology with human expertise ensures effective and sensitive content moderation.