Trust and safety in the metaverse Part 2: Violence

In part 2 of our series on the Metaverse, we take a deeper dive into currently active Web 3.0 properties, the violent and hateful behaviour already happening across the metaverse, and what the owners are trying to do to tackle these issues.

We also have recommendations for those of you looking into your own space in the Metaverse – what you should consider before setting up a property, and how you can keep violent virtual crimes out of it.

When I began researching this particular blog, I was horrified – though not surprised – by the amount of violence that is already a large part of Web 3.0. What did surprise me was the lack of immediate moderation and the shortage of actions available to combat this activity. We have over 30 years of experience and research confirming the need for strong policies, protections and functionality before any new digital community goes live. So what is going wrong, and where are we falling short?

What is the Metaverse?   

Taking a quick step back, I’d like to remind the room of the definition of the Metaverse: a fully digital environment accessed through virtual reality technologies such as VR headsets. Essentially, the Metaverse is meant to be a fully immersive experience within a digital space, with most of our faculties engaged in the virtual world through headsets (sight), earphones (sound and voice), and vests and gloves (touch).

Recent examples of violence in the Metaverse

This year alone has seen a multitude of research papers and articles about the rampant violence experienced across just about all of these spaces. What stands out is not just the acts of violence and hateful rhetoric themselves, but how quickly they happen and how common they are.

The main forms of violence currently happening in the Metaverse – according to researchers and journalists – include sexual harassment, sexual assault (rape), and violent threats or rhetoric towards already marginalised communities (people of colour, and female-presenting and LGBTQIA+-presenting avatars – i.e. people).

In fact, one such researcher said that in less than 60 seconds she was invited into a ‘private’ room, and her avatar was sexually assaulted while other avatars looked on. What’s worse is that she attempted to use the existing safety features (reporting the incident), which did nothing to remove the attacker or the onlookers – at least, not while this was happening to her.

For those who have been lucky enough to live within a digital bubble, I can describe such an event as it happens across an internet space. This may not feel entirely ‘real’, but hear me out. You have spent time and energy building your physically-presenting avatar in the metaverse. You’ve named her, clothed her, and even tested out some of her abilities to move around, dance, kick, run, and so on. You are then invited to join what looks to be a party or an event. Feeling confident, you enter that room, only to have other avatars press their bodies right up against your avatar’s body and make lewd, threatening, bullying comments at you. You try to run, but because another avatar is in your way, you’re stuck and cannot move. They spam chat bubbles filled with more and more intense language.

What’s worse is that even simple solutions – such as giving the victim the ability to remove a violating avatar from their vision and spaces – aren’t available, or it’s not been made clear how to use them. As of right now, users can report someone for their behaviour; however, a live content moderator must view the report, decide if the content is against guidelines, and then take action. And we all know there just aren’t enough moderators in the world to tackle this kind of reporting on a space as large as Meta’s.

From my research, the emphasis on personal safety continues to be placed on the victims of such events. Users are meant to extricate themselves from the vicinity and/or do all the reporting, while the perpetrators are allowed to carry on. “We want everyone to enjoy their time within our products” currently translates into “the victims must take action, and the harassers will eventually learn right from wrong.”

Don’t get me wrong – empowering users is something I have always supported. However, at some point there needs to be more accountability on the part of the offender, and it needs to be swift.

While lawmakers debate the ‘reality’ of such an attack, those of us who have lived in the digital sphere for any length of time can in fact verify that there is an emotional, visceral, traumatic experience attached to these events – whether they happened to our physical bodies or not. These events cause PTSD, fear, anxiety, depression and aggression in the real world and to our real psyches.

What’s worse is that even tech giants are at a loss as to how to combat these issues. Andrew Bosworth, Meta’s Chief Tech Officer, admitted in an internal memo that moderation in the metaverse “at any meaningful scale is practically impossible.” [Read below for some advice brands can use to help remove this kind of behaviour from their particular ‘worlds’].

It’s important to note that there doesn’t seem to be a consensus as to who is responsible for setting and enforcing virtual crime policy, or even who defines what a virtual crime is. What should the rules be for violent, unwanted actions, and what are the punishments? Who punishes the criminals, and how?

Banishment isn’t a catch-all, nor does it deter offenders from logging back in and creating a new avatar to repeat the activity. In addition, it’s close to impossible to hire enough people to handle the amount of crime that exists virtually. That being said, we believe that a combination of human and technical solutions will always be necessary and the best way forward.

“We believe that a combination of human and technical solutions will always be necessary and the best way forward.”

Companies responsible for creating and operating their Web spaces should have a huge say in what is and is not allowed in those spaces. But we know what happens when a platform pursues ‘true open and free speech’: it ends up getting sued until it learns to employ a robust and intelligent Trust and Safety team, along with clear guidelines and even permanent removal of repeat offenders.

Therefore, if you are thinking about buying some real estate in Web 3.0, we offer up a few recommendations. Whether your brand is looking to open a pop-up restaurant or clothing store, host a live event with a brand influencer, or recreate a high-level armour shop for your RPG, make sure you and your leadership have considered and discussed the below with your legal team, your production team and, most definitely, your trust and safety agency (such as StrawberrySocial).

If you want to create space in the Metaverse, consider:


    • Make your space private (à la Facebook groups)

    • Present your guidelines (rules of engagement) clearly and visibly everywhere, and include consequences for anyone breaking the rules

    • Embed a robust filter and a strong technology or AI system to track your users’ behaviour and their history of behaviour*


        • *Work with your legal team to understand the limitations on data collection

    • Ask the right questions when considering a moderation technology organisation, such as: What functionality is available to remove violators from your area quickly, or make it undesirable to stay, such as an ‘invisible’ or ‘silence’ function?

    • Have strong, knowledgeable moderation, support, community engagement and T&S teams involved from the beginning.


        • Strong training should be in place for anyone moderating or supporting your Metaverse space. This should include VOIP training, live moderation and any possible real-time engagement you wish to offer your community. StrawberrySocial employ well-versed and highly experienced moderators in all of the above arenas.

        • We recommend reaching out to your trust and safety agency, who will be able to discuss possible issues at length and can offer professional advice on training as well as technical setup

    • Connect with local/federal authorities to share information as and when needed
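To make a couple of the recommendations above concrete – the embedded filter and the ‘silence’ function – here is a minimal sketch in Python. Everything here is hypothetical and illustrative only: the names (`violates_guidelines`, `SafetyControls`, `BLOCKED_TERMS`) are ours, the blocked-term list is a placeholder your T&S team would maintain, and a real space would rely on a dedicated moderation service rather than a hand-rolled word list.

```python
# Hypothetical sketch: a simple guideline filter plus per-user 'silence'
# controls and a strike counter for escalating consequences.

BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder terms, maintained by your T&S team

def violates_guidelines(message: str) -> bool:
    """Return True if the message contains any blocked term."""
    words = message.lower().split()
    return any(word.strip(".,!?") in BLOCKED_TERMS for word in words)

class SafetyControls:
    """Tracks who each user has silenced, and a simple strike count."""

    def __init__(self) -> None:
        self.muted: dict[str, set[str]] = {}   # user -> users they have silenced
        self.strikes: dict[str, int] = {}      # user -> number of recorded violations

    def silence(self, user: str, offender: str) -> None:
        """The user will no longer see or hear the offender."""
        self.muted.setdefault(user, set()).add(offender)

    def can_see(self, viewer: str, sender: str) -> bool:
        """A silenced avatar's messages and presence never reach the viewer."""
        return sender not in self.muted.get(viewer, set())

    def record_violation(self, offender: str) -> int:
        """Increment and return the offender's strike count, so repeat
        offenders can be escalated (e.g. to permanent removal)."""
        self.strikes[offender] = self.strikes.get(offender, 0) + 1
        return self.strikes[offender]
```

The design point here is that the ‘silence’ action takes effect instantly and locally for the victim, without waiting on a human moderator, while the strike counter feeds the slower human review process described above.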

Remember: these are not going to be the most fun conversations you’ve ever had – we get it. We’ve been having them for decades. But they are, and will remain, necessary. Once you’ve had them with the right folks, you’ll be better able to discuss potential issues down the line should something happen.

In addition, make sure your content moderation / support agency is one that can have these difficult conversations with grace, intelligence and maturity. They are the ones who will help craft your processes and manage the technology you need. Once this is done, you’ll be much less likely to end up at the wrong end of a lawsuit or, worse yet, responsible for real acts of violence against your customers.

Find out more about StrawberrySocial’s work in Trust and Safety and Content Moderation and Online Support, or get in touch with us here.