After years of tireless effort by online safety campaigners, the Online Safety Bill has been passed by Parliament and will soon become law.

The Online Safety Bill is a new set of laws to protect children and adults online, but what does it actually mean in practice? In this blog post, we explain the main aspects of the new law and what it means for social media companies and internet users.

At a time when online grooming and child abuse image crimes are at an all-time high, this legislation will finally require tech companies to make their sites safe for children by design.

The bill places the following new duties on social media companies:

  • remove illegal content quickly or prevent it from appearing in the first place, including content promoting self-harm
  • prevent children from accessing harmful and age-inappropriate content
  • enforce age limits and age-checking measures
  • ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments
  • provide parents and children with clear and accessible ways to report problems online when they do arise

Under the new laws, social media companies will have to keep underage children off their platforms. Companies set their own age limits, and many say children under 13 are not allowed, yet many younger children have accounts. The new laws will require companies to state what age-checking technology they are using and show that they are enforcing their age limits.

The bill also empowers adults to take control of what they see online. It provides three layers of protection for internet users, which will:

  1. Require illegal content to be removed
  2. Place a legal responsibility on social media platforms to enforce the promises they make to users when they sign up, through their terms and conditions
  3. Offer users the option to filter out harmful content, such as bullying, that they do not want to see online

The largest platforms will be required to offer adult users tools so they can have greater control over the kinds of content they see and who they engage with online. This includes giving them the option of filtering out unverified users, which will help stop anonymous trolls from contacting them.

Platforms must also proactively offer these tools to all their registered adult users, making it easy for them to decide whether or not to use them. The tools must be effective and easy to access, and could include human moderation, blocking content flagged by other users, or sensitivity and warning screens.

The bill also adds new laws to tackle online fraud and violence against women and girls. Under this legislation, it will be easier to convict someone who shares intimate images without consent, and new laws will further criminalise the non-consensual sharing of intimate deepfakes.

These changes will make it easier to charge abusers who share intimate images, put more offenders behind bars and better protect the public. Those found guilty of this base offence face a maximum penalty of six months in custody.

Under the bill, the biggest social media platforms will also have to stop users being exposed to dangerous fraudulent adverts by blocking and removing scams, or face substantial new fines from Ofcom.

How will the Online Safety Bill be enforced? 

Platforms will have to demonstrate they have processes in place to meet the requirements set out by the bill. Ofcom will check how effective those processes are at keeping internet users safe from harm.

There will be substantial fines for those who do not comply

If social media platforms do not comply with these rules, Ofcom could fine them up to £18 million or 10% of their global annual revenue, whichever is greater – meaning fines handed down to the biggest platforms could reach billions of pounds.
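To make the cap concrete, here is a minimal sketch of how the "greater of £18 million or 10% of global annual revenue" rule plays out. This is purely illustrative: the revenue figures are hypothetical examples, and this is not part of the legislation or any real Ofcom tooling.

```python
# Illustrative sketch of the bill's maximum-fine cap:
# the greater of £18 million or 10% of global annual revenue.

def max_fine(global_annual_revenue_gbp: float) -> float:
    """Return the maximum possible fine under the cap."""
    return max(18_000_000, 0.10 * global_annual_revenue_gbp)

# A smaller platform with £50m revenue still faces the £18m floor:
print(f"£{max_fine(50_000_000):,.0f}")       # £18,000,000

# A large platform with £30bn revenue could face up to £3bn:
print(f"£{max_fine(30_000_000_000):,.0f}")   # £3,000,000,000
```

The £18 million floor means even smaller platforms face a meaningful penalty, while the 10% figure is what pushes potential fines for the largest platforms into the billions.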

Some of the biggest social media companies have already made changes ahead of the bill coming into force. 

  • Snapchat has started removing the accounts of underage users
  • TikTok has implemented stronger age verification

As an online moderation agency, StrawberrySocial deals with the worst of the internet every day, working to keep our clients’ online community members safe and protect them from the types of content no one should have to see. We are very proud to work with the NSPCC, Which? and Barnardo’s, who were instrumental in making the Online Safety Bill a reality. This new law will radically change the landscape for children and other vulnerable groups online, and make the internet a safer, more inclusive place for all.

“At StrawberrySocial we’re hugely excited to see this bill finally make its way into law. Our core mission as a company is to protect vulnerable users online, to combat hate, abuse and bullying wherever we can. This is a daily battle. We work with some incredible clients such as the NSPCC, Samaritans and the NHS – people who work tirelessly, often unseen, to try to make the internet a safe, inclusive and enjoyable space for everyone.

The amount of harm done online, and the ever more advanced ways in which criminals and abusers use tech and loopholes in the law, should not be underestimated. I, for one, hope very much that the Online Safety Bill will play a big part in helping us all tackle what has become, sadly, a growth area. If the only way to bring these massive organisations into line is by hitting their bottom line, then so be it.”

Rebecca Fitzgerald, CEO & Founder, StrawberrySocial.

Find out more about the Online Safety Bill

Are you a parent or carer concerned about keeping your children safe online? Visit Thinkuknow for helpful advice and resources.