Greetings Digital Family, and happy holidays! As we round the final turn of this marathon of a year, 2020 (woohoo!), we took some time to review how this unique period has affected our front-line teams.


Traditionally, moderation and community managers are responsible for ensuring your online community is a safe and pleasant place for users to engage and share information with respect and dignity. Early in the study of online communities, it became clear that a significant part of the role involved the need to both study and curtail certain language – everything from vulgarity, to bullying, to the grooming of young people for political or sexual motives.

Technology, industry and lawmakers have taken huge measures to ensure the online world is as safe as can be, but it’s by no means a battle that’s been won. Ongoing, frequent checks on nomenclature, culture shifts and changes to traditional language are part and parcel of the job.

Further, once your team adjusts its technology to recognise and remove one set of words, the community will quickly adapt and coin new nomenclature to take its place.
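
To make that cat-and-mouse dynamic concrete, here’s a minimal sketch in Python of why a static word filter falls behind the moment the community respells a blocked term. The term list is entirely made up for illustration – this is not our actual tooling:

```python
import re

# Hypothetical blocklist – in practice the moderation team maintains this
# and updates it as new coinages appear.
BLOCKED_TERMS = {"badword"}

def is_blocked(message: str) -> bool:
    """Return True if the message contains a currently blocked term."""
    tokens = re.findall(r"[a-z0-9@$!]+", message.lower())
    return any(token in BLOCKED_TERMS for token in tokens)

print(is_blocked("that is a badword"))  # True  – exact match is caught
print(is_blocked("that is a b@dword"))  # False – the respelling slips through
BLOCKED_TERMS.add("b@dword")            # the team learns the new spelling...
print(is_blocked("that is a b@dword"))  # True  – ...and the filter catches up
```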

In recent years it has become clear that this phenomenon isn’t limited to the young; it extends to almost every corner of online communities and social media.


What does this have to do with 2020? As you know (and have likely experienced to varying degrees yourselves), this year has been unique and unprecedented in many ways; in our case, we’ve found that our clients (and, by extension, our internal teams) have spent a significant amount of time reviewing the language and information shared by community members.

To provide context, remember that the largest social networks, Facebook and Twitter, went so far – especially after the 2016 US election – as to implement new policies and new technology that review content and either attach a warning message or remove the content altogether.

It had become apparent that, at least to some extent, there was interference in the campaigns, which led to misinformation being shared on a massive scale through hashtags and keyword searches. This sparked a real public outcry about how social media platforms and private companies collect not just user information but keywords, hashtags and language use, then sell it to the highest bidder while demonstrating how that information can be used to manipulate real-world actions.

Across Europe, the European Court of Justice ruled that European countries can order Facebook to remove content worldwide, not just for users within their borders. The EU’s Audiovisual Media Services Directive also requires online video services to take appropriate measures to protect viewers from harmful or illegal content, including setting up age checks.

Both Facebook and Twitter stepped up to the challenge, devising new tools to protect the integrity of the critical vote. During the electoral campaign, new ad-transparency measures were put in place, and in the immediate aftermath Facebook kept users updated on the latest polling information by adding prominent banners to feeds, reminding them that votes were still being counted.

Facebook is constantly measuring how far it can push things and when it needs to slow trends down in order to keep them from spilling over. It is also reportedly looking to add more friction to the sharing of political posts so as to slow the momentum of conspiracy-fueling content. However, technology can only do so much, and live human assistance will always remain part of the equation.


Fast forward to 2020 – between Covid-19, Black Lives Matter, another US election and more, brands are jumping on the same bandwagon as the platforms themselves, working diligently to ensure both their technology and their teams are well equipped to recognise and act on potentially damaging misinformation. How they then react has varied:

  • Removal – moderators have kept a close eye on live threads, ready to hide or delete content, or to request that certain words be added to the filters.
  • Reporting – depending on the severity of the language and its overall message, moderators have been escalating more content as incendiary or ‘fake news’.

This year has brought very particular language shifts to almost every kind of community, traceable through almost every kind of online space.

Because our moderators and community teams already know how to spot emerging phrases, they’ve been instrumental in identifying new terminology around Covid-19, politics and social movements. In particular, we’ve found that companies are focused on the reporting and, sometimes, removal of words and phrases such as “Plandemic”, “Covidiots”, “Libdem” and “Anti-vaxx”.

Some clients prefer to highlight (e.g. create reports on) the terms above so that they can better shape their own messaging. Others add these, and others like them, to their filters straight away to avoid the spread of potentially dangerous fake news. In addition, our team members have been asked to stay vigilant and report back any new, potentially controversial phrases so that they can be studied and, where necessary, filtered out.
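
As a rough illustration of the difference between those two workflows, here is a short Python sketch – purely hypothetical, not our production tooling – showing how a team might route the example terms above to either a ‘report’ or a ‘filter’ path. Which terms land in which bucket is each client’s call:

```python
# Hypothetical watchlist mapping terms to a moderation action.
# Each client decides which terms are merely reported on and
# which are filtered outright.
WATCHLIST = {
    "plandemic": "filter",   # removed immediately
    "covidiots": "filter",
    "libdem":    "report",   # surfaced in reporting, left visible
    "anti-vaxx": "report",
}

def triage(message: str) -> str:
    """Return 'filter', 'report', or 'allow' for an incoming message."""
    lowered = message.lower()
    actions = {action for term, action in WATCHLIST.items() if term in lowered}
    if "filter" in actions:
        return "filter"      # filtering takes priority over reporting
    if "report" in actions:
        return "report"
    return "allow"

print(triage("This plandemic is a hoax"))  # filter
print(triage("Typical anti-vaxx talk"))    # report
print(triage("Stay safe everyone"))        # allow
```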

Our moderators and community teams have spent more time than usual this year honing their ‘spidey-senses’ through the ongoing review of potentially controversial or dangerous language. The importance of their work cannot be overstated: it has been integral to keeping the right messages visible and stopping the wrong ones from spreading like a virus.


Let us know what words and phrases you’ve been dealing with this year, and get in touch so we can work together to keep your communities open to all!

Check out our services: