Online safety matters to all of us – kids and teens need spaces where they can interact online safely. One of the roles Community Managers play at StrawberrySocial is listening to audiences.

Sometimes our users are under 18 years old, and we need to know how to protect them. That is one of the reasons we’ve all completed the Keeping Children Safe Online training, developed by the NSPCC and CEOP, the child protection unit of the National Crime Agency (NCA).

We can’t prevent children from seeing harmful material, either online or in the real world. Radical groups have been known to use a variety of social media channels – including Facebook, Twitter, YouTube, AskFM, Instagram and Tumblr – to share propaganda. And tackling online bullying and grooming is not always easy, because people of all ages can hide their real identity.

So what can we do, as Community Managers? The NSPCC training helps us to understand the risks and issues associated with children and young people being online.

We have processes in place for managing inappropriate or illegal content. We also have policies and procedures to deal with online safety concerns. Depending on the client, we may review content before it is posted (pre-moderation) or after it has been posted (post-moderation), applying the client’s own guidelines and policies to remove inappropriate content and to warn or ban users who break the rules.
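The difference between the two models can be pictured in code. This is a minimal illustrative sketch, not StrawberrySocial’s actual tooling – the `Post` class, the `violates_guidelines` check and the banned-word list are all hypothetical stand-ins for a client’s real guidelines.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a client's real content guidelines.
BANNED_WORDS = {"spamlink"}

@dataclass
class Post:
    author: str
    text: str
    published: bool = False

def violates_guidelines(post: Post) -> bool:
    return any(word in post.text.lower() for word in BANNED_WORDS)

def pre_moderate(post: Post) -> Post:
    """Pre-moderation: review BEFORE publishing; only compliant posts go live."""
    if not violates_guidelines(post):
        post.published = True
    return post

def post_moderate(post: Post) -> Post:
    """Post-moderation: publish immediately, then take down if it breaks the rules."""
    post.published = True
    if violates_guidelines(post):
        post.published = False  # removed after review
    return post
```

In practice the trade-off is latency versus exposure: pre-moderation keeps rule-breaking content off the site entirely, while post-moderation keeps conversation flowing but means some users may see content before it is removed.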


We can advise on reporting mechanisms as a reactive moderation tool, so users can report misconduct in real time. We can be alert to the warning signs, and help make communities welcoming and inclusive by discouraging exclusive clubs and cliques. We can spot and flag suspicious users, as well as users in distress. Then, as part of the job, we can signpost to a range of support sites and helplines, post supportive messages and, sometimes, help by contacting the authorities (where privacy rules allow).

Inappropriate behaviour needs to be challenged with warnings in clear, understandable language, and we have to be clear about the consequences if users persist in disregarding the rules. Typically, users are first issued warnings or temporary bans; ultimately, they can lose their accounts.
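That escalation can be thought of as a simple ladder of sanctions. The sketch below is purely illustrative – the rung names follow the sequence described above, but the strike-counting logic and thresholds are made up; real limits are set per client.

```python
from dataclasses import dataclass

# Illustrative rungs only - each client defines its own policy.
ESCALATION = ["warning", "temporary ban", "account removed"]

@dataclass
class CommunityUser:
    name: str
    strikes: int = 0

def sanction(user: CommunityUser) -> str:
    """Each rule breach moves the user one rung up the ladder,
    capping at the final sanction (losing the account)."""
    user.strikes += 1
    rung = min(user.strikes, len(ESCALATION)) - 1
    return ESCALATION[rung]
```

The point of making the ladder explicit is consistency: every user who breaks the rules is treated the same way, and the consequences are predictable rather than arbitrary.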

By giving confidence to both users and parents that we can manage safety risks, we can also protect the brands we work for, providing reassurance in the face of reputational risk.

How children use technology

More than half of 3-4 year olds have access to tablets, a quarter of children aged 8-11 have smartphones, and more than half of all kids use the internet for homework.

Facebook is the channel that most 12-15 year olds consider to be their main social profile. Other popular apps are WhatsApp, Facebook Messenger, Instagram, Call of Duty, Minecraft, Snapchat, YouTube, Tumblr and Twitter.

Most networks have a minimum age of 13, but many younger children use the services because they can sign up without their parents knowing.

The risks they take online

The 3 Cs is a recognised framework for identifying risks:

  • Content – kids might come across age-inappropriate content online that is sexual, violent, or extreme
  • Contact – the child engages with the online world in ways that might expose them to inappropriate contact. This includes cyberbullying, being stalked or having personal data harvested, being groomed, or being coerced into sharing sexual content. Some content might advocate self-harm, or promote eating disorders through ‘pro-ana’ or ‘pro-mia’ sites
  • Conduct – young people might act inappropriately by sharing things about themselves – creating or sharing sexual content such as explicit pictures – or by bullying or harassing others
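The framework can also double as a tagging scheme when triaging reports. The category names below mirror the 3 Cs list above; everything else – the keyword map and the `triage` helper – is a hypothetical sketch of how a moderation tool might start sorting incoming reports.

```python
from enum import Enum
from typing import Optional

class Risk(Enum):
    CONTENT = "content"   # what the child sees
    CONTACT = "contact"   # who interacts with the child
    CONDUCT = "conduct"   # what the child does

# Hypothetical report-to-category hints a triage tool might start from.
TRIAGE_HINTS = {
    "violent video": Risk.CONTENT,
    "stranger messaging child": Risk.CONTACT,
    "shared explicit photo": Risk.CONDUCT,
}

def triage(report: str) -> Optional[Risk]:
    """Map a report label to one of the 3 Cs, or None if it needs human review."""
    return TRIAGE_HINTS.get(report)
```

Anything the map doesn’t recognise falls through to a human – automation here is only a first sort, never the final word on a child-safety report.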

Need help with your forum or online safety guidelines?

Our team has been working with charities, agencies and brands for many years, keeping people and organisations safe. Drop us a line: