UK's Online Safety Bill drops rules forcing social media to remove 'legal but harmful' content

Governments directly policing online content will put free speech at risk

The UK government has dropped the requirement for social media companies to remove "legal but harmful" content from its Online Safety Bill, a week before the proposal is set to return to Parliament.

The change comes after lawmakers and activists raised concerns that the rule could have a chilling effect on free speech online. Under the previous version of the Bill, companies like Twitter or Meta would have faced sanctions if they failed to remove specific types of content considered harmful, but not illegal, by the British government.

Lawmakers, however, have decided to scrap this rule over fears that it would give the government power to crack down on content and risked curtailing people's freedom of speech.

"The Bill will no longer define specific types of legal content that companies must address," the Department for Digital, Culture, Media & Sport confirmed in a statement. "This removes any influence future governments could have on what private companies do about legal speech on their sites, or any risk that companies are motivated to take down legitimate posts to avoid sanctions."

Social media companies will still be required to remove illegal content related to criminal activities such as fraud, threats to kill, harassment and stalking, the sale of illicit drugs and weapons, and revenge pornography. They will, however, be free to set out in their terms of service their own policies for dealing with content that may be harmful but isn't illegal. If they decide to take down content or ban a user, they will have to allow users to appeal the decision.

Ofcom will be entitled to fine companies that fail to enforce their own rules appropriately up to 10 per cent of their annual turnover.

The Online Safety Bill was designed with protecting children in mind. Platforms will have to publish their terms of service for younger users, including specifying a minimum age and explaining how they verify it, for example through facial recognition technology. They will also have to provide clear warnings about the potential risks and dangers children might face when using their services.

Earlier this year, a coroner ruled social media content glorifying self-harm had contributed "more than minimally" to the suicide of 14-year-old Molly Russell. Her death prompted the government to introduce a new criminal offence for assisting or encouraging self-harm and suicide.

Officials want social media companies to build new tools giving all users the ability to block anonymous accounts and control the types of content shown in their feeds. Users should be able to minimize posts on topics they wish to avoid and flag content that is unlawful or harmful to children.

The latest changes to the Online Safety Bill will be debated by Members of Parliament in the Commons at Report Stage on December 5. ®