On Thursday, March 3, 2022, the Cyberbullying Team Meetup hosted an open social discussion in AltspaceVR on the “dark side of the metaverse” and how to empower and protect ourselves and others as we explore toxic behaviors in the metaverse. For information on how to attend future team meetups, see our How to Attend a VR Event Guide.
>The EDVR Cyberbullying team explores the subject from all perspectives, including those directly impacted by random or targeted attacks, event and virtual world hosts and presenters, social VR staff moderators, companies, and even the bullies themselves. As educators, it is important to understand the metaverse’s potential for wonder and awe as well as its dark side. The team works with and features therapists, researchers, and resources to help educators and parents. The events are hosted regularly in AltspaceVR and ENGAGE XR, and this year marks our fourth annual month-long event series in October during National/International Bullying Month.
Since the announcement of Facebook Meta’s investment in the metaverse, the media has published many articles and investigations about toxicity in social VR and the metaverse. We’ve featured a few articles below.
To explore this further, here is an informal “State of the Metaverse” overview of cyberbullying. Note that the examples in this review may appear to exaggerate the toxic nature of the metaverse. The incidents described, except for those involving staff moderators and multiplayer esports games, are not the norm for most well-moderated VR events, virtual worlds, and social VR platforms.
Myth: Attacks in the Metaverse are Increasing
Despite media sensationalism, it is a myth that online harassment, sexual assault, and bullying are on the increase in social media and social VR. The truth is much more complicated.
The most recent Pew Research Center study on online harassment, from 2021, did not specifically cover social VR, but the findings apply in general. While the number of reported online harassment encounters has not increased since 2017, the severity has worsened. The majority reported that online harassment is a major problem, with 41% reporting from direct personal experience. Roughly two-thirds of adults under 30 have been harassed online, with women targeted twice as often as men, and Black, Hispanic, and Asian adults targeted even more than White adults.
A study released in January 2022 for International Safer Internet Day, “Civility, Safety, and Interaction Online,” conducted in cooperation with Microsoft’s Digital Civility Index (DCI), surveyed teens aged 13–17 and adults 18–74 across 22 countries about their exposure to 21 online risks in four categories. It found a slight decrease in perceived online attacks since 2016, with a slight uptick during the pandemic as online activity and education increased.
Perception is very different from the facts. The decrease in online harassment and attacks does not mean that online harassment and bullying are going away or that we should pay less attention to them. Nearly half of children in US grade schools reported being bullied, and a third of students admitted to bullying others. Globally, suicide is the fourth leading cause of death among 15–19 year olds, with bullying and cyberbullying often part of the trigger. Trolls find clever ways to get around platform bans and blocks, then move on to another platform to continue their behavior, and we need to make the consequences of their actions stronger. Although 49 US states have anti-bullying legislation, there is no US federal law. Countries around the world have laws, but many are toothless. We want stronger laws and support.
Facts aside, parents need to be aware and cautious when purchasing VR headsets as toys for children under 13, and ensure parental controls are enabled to limit exposure to adult content and risk.
Myth: Sexual Assault in VR Isn’t Real
Many dismiss harassment and sexual abuse in VR because it “isn’t real.” As reported in Reveal News by Jessica Buchleitner, it feels very real in virtual reality.
As a consultant and speaker, Gittins is granted access to events featuring the latest virtual reality technologies. But it was one experience testing a multiplayer game in March 2016 at a limited-access event that left an impression she can’t shake. The players were on two teams – two men on one and Gittins and another man on the other. Motion tracking allowed players to move their hands in the game. Gittins waved at the man embodied in a female avatar next to her.
“He replied, ‘Look! I’m rubbing my tits at you!’ Then he proceeded to rub his avatar’s chest,” Gittins said. “It was unsettling. I felt like I walked in on locker room banter. That’s not behavior considered appropriate in real life, so I felt like something menacing was to come.”
A Bloomberg reporter shared a first-hand account of a metaverse experience that triggered deep discomfort.
I can still remember the first time I heard the N-word uttered aloud. I was just a kid having fun on the playground of my elementary school, when a Black boy I didn’t know well used the word in passing. Though I knew he meant it innocently, I felt startled by the exchange. That moment would jumpstart a yearslong personal exploration of how Black people suffer from, respond to and repurpose words designed to demean our humanity.
The last time I heard the N-word was a few weeks ago, after I emerged from a brief appearance at a 2 Chainz concert in Horizon Venues, a live events app on Meta Platforms Inc.’s Quest virtual reality headset. Another user — who was Black, at least by the appearance of his avatar — called out to me, using the N-word as he invited me to travel to another part of the app. We were both legless cartoons in the virtual world, but hearing the word from his real voice unsettled me just as much as it did on the playground decades ago.
This isn’t a new experience. Julian Dibbell wrote in 2005 about a rape experience in cyberspace in 1993, claimed by many to be the first report of an online sexual assault. The author interviewed the woman attacked in a multi-user, object-oriented online virtual environment known as a MOO (a type of MUD):
They say he raped them that night. They say he did it with a cunning little doll, fashioned in their image and imbued with the power to make them do whatever he desired. They say that by manipulating the doll he forced them to have sex with him, and with each other, and to do horrible, brutal things to their own bodies. And though I wasn’t there that night, I think I can assure you that what they say is true, because it all happened right in the living room—right there amid the well-stocked bookcases and the sofas and the fireplace—of a house I came for a time to think of as my second home.
…Months later, the woman in Seattle would confide to me that as she wrote those words post-traumatic tears were streaming down her face—a real-life fact that should suffice to prove that the words’ emotional content was no mere playacting. The precise tenor of that content, however, its mingling of murderous rage and eyeball-rolling annoyance, was a curious amalgam that neither the RL [Real Life] nor the VR facts alone can quite account for. Where virtual reality and its conventions would have us believe that legba and Starsinger were brutally raped in their own living room, here was the victim legba scolding Mr. Bungle for a breach of “civility.” Where real life, on the other hand, insists the incident was only an episode in a free-form version of Dungeons and Dragons, confined to the realm of the symbolic and at no point threatening any player’s life, limb, or material well-being, here now was the player legba issuing aggrieved and heartfelt calls for Mr. Bungle’s dismemberment. Ludicrously excessive by RL’s lights, woefully understated by VR’s, the tone of legba’s response made sense only in the buzzing, dissonant gap between them.
There is no doubt in the minds of active VR users that the immersive experience feels real, even when we tell ourselves it isn’t. Our bodies respond as if it were real, and we feel the emotional and physical consequences.
It isn’t just users who are impacted by the very real assaults on our virtual persons. Many dream of jobs on the moderation staff of social VR platforms, exclaiming how much fun it would be to be paid to be in VR all day. The reality is far different. Understanding cyberbullying must include the workers as well as the users, and for many, it is too real. Customer support moderators in social media and social VR face a barrage of attacks and harassment. Reports on being a Community Guide in Facebook Meta Horizon described it this way:
Imagine you had to moderate a Facebook comment thread, only each commenter was able to come up to you, wave their hands in your face, and scream whatever they want.
That’s what some moderators of Facebook virtual reality platform Horizon Worlds are dealing with, and it looks just as nightmarish as it sounds.
“Trying to not smash my headset, like in frustration,” one Horizon Worlds Community Guide with the screen name Peanutbutter can be heard saying in a TikTok uploaded by @vrpranksters after interacting with a bunch of kids fighting and screaming over a virtual boomerang.
The so-called “pranksters” and trolls often provoke aggressive encounters by harassing moderators while live streaming and recording the experience, later claiming that they were the ones harassed and victimized by the staff, twisting the truth in the name of entertainment.
Social VR moderators report the same mental health impacts as social media moderators, fighting against personal attacks, harassment, impersonation including identity theft and privacy violations, and visual and verbal exposure to violence and toxicity. In 2019, an article in The Verge revealed the secret lives of Facebook moderators and the very real impact of their work, leading to a lawsuit due to PTSD-like symptoms. The report helped lead to a USD $52M compensation package agreement with Facebook (Meta) for content moderators for mental healthcare. A year later, the workers claimed the compensation and support were inadequate and many still report PTSD and other mental health and physical issues due to the toxic environments they are moderating.
Why Do Bullies Bully?
Understanding the motivation and mental health of trolls and online toxic personalities is a frequent topic for the Cyberbullying Team discussions. We want to understand why they do what they do. We want to understand how to prevent and protect ourselves. We want to encourage toxic people to embrace a more positive role in society.
Toxic individuals are so prevalent that there are over 100 names for their behaviors, such as trolls, VR creepers, ghosters, pranksters, griefers, catfishers, baiters, busters, cannibals, haters…the list is extensive. While it is important to protect ourselves and others, and to educate ourselves about the laws and the methods these toxic people use, it’s just as important to understand their reasons and psychology, and how we can reduce and prevent more harm.
A research study at Brigham Young University found that those who take pleasure in the misfortune of others (an emotion known as schadenfreude) were more likely to exhibit trolling behaviors. Dr. Pamela Brubaker, BYU public relations professor and co-author of the study, said:
They are more concerned with enhancing their own online experience rather than creating a positive online experience for people who do not receive the same type of enjoyment or pleasure from such provocative discussions.
Peer pressure, especially among males, often creates gangs who think it is “fun” to harass and attack female players. Some reporters state they were attacked by adults when, in reality, the attackers may be teenagers working in packs as they would in the real world.
Research by Jeff Lin, Brendan Maher, and others on toxic behavior in video games found that toxic and intolerant language is common in multiplayer video and VR games. Lin, now lead designer of social systems at Riot, used the company’s popular League of Legends video game for his research and found:
The resulting map of toxic behaviour was surprising. Common wisdom holds that the bulk of the cruelty on the Internet comes from a sliver of its inhabitants — the trolls. Indeed, Lin’s team found that only about 1% of players were consistently toxic. But it turned out that these trolls produced only about 5% of the toxicity in League of Legends. “The vast majority was from the average person just having a bad day,” says Lin. They behaved well for the most part, but lashed out on rare occasions.
That meant that even if Riot banned all the most toxic players, it might not have a big impact. To reduce the bad behaviour that most players experienced, the company would have to change how players act.
Singling out virtual platforms as something unique implies that VR platforms and environments are more dangerous than the rest, yet time and again, research and surveys reveal that virtual worlds and environments are no worse than anywhere else. And the process of banning toxic players is definitely more complicated than we think.
Accountability: Put Privacy and Safety Protections First
A panel at the Intro the Metaverse 2 event spoke on the need for personal safety across all social VR platforms and metaverse experiences, agreeing that all should be built with “safety and positivity in mind.”
“It’s really hard to retrofit safety once things are already out in the open. It’s much better to build with those things in place right from the beginning. People need to now think differently if they’re building experiences for the metaverse.”
…”There’s a real opportunity for teaching, and for people to understand…In the same way that we have to self-regulate in the real world — walk into a room and people can see the way that you’re acting, the way you respond to people having communication, body language, those things. They’re actually going to become much more real within the metaverse. I think that, hopefully, will hold some people to account. But they need to be taught that these things are going to actually apply now, which they haven’t really had to think about before.”
Informa author Yasaman Farazan researched how developers cope with toxic behavior in VR and the challenges involved, recommending that stronger tools be put in place early on to allow users to defend and protect themselves and quickly report abuse.
Unfortunately, most developed and developing platforms are in retrofit mode.
Recent reports of attacks against women caused Meta to add a “personal boundary” around avatars in Horizon so people cannot touch each other’s avatars. While this may stop some forms of harassment, it does nothing about visual and verbal abuse and other assaults.
AltspaceVR has long featured powerful self-moderation safety tools such as a personal space bubble and mute and block options. VR Chat and others have similar tools, but users need to be educated on how to use them, especially in response to an attack, when the shock of the event often clouds decision-making.
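To illustrate how these self-moderation tools work conceptually, here is a minimal sketch of a client-side safety layer combining a personal space bubble with mute and block lists. This is not the actual implementation of any platform mentioned above; the class names, the 1.2-meter default radius, and the helper functions are all assumptions for illustration only.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Avatar:
    """Hypothetical avatar with a position and per-user safety settings."""
    name: str
    x: float = 0.0
    y: float = 0.0
    bubble_radius: float = 1.2          # meters; an assumed default, not a real platform value
    blocked: set = field(default_factory=set)
    muted: set = field(default_factory=set)

def can_approach(target: Avatar, other: Avatar) -> bool:
    """Another avatar may come into contact range only if it is not
    blocked and stays outside the target's personal space bubble."""
    if other.name in target.blocked:
        return False
    distance = math.hypot(other.x - target.x, other.y - target.y)
    return distance >= target.bubble_radius

def audible(target: Avatar, speaker: Avatar) -> bool:
    """Voice from muted or blocked users is dropped on the listener's client."""
    return speaker.name not in target.muted and speaker.name not in target.blocked

# Example: a stranger half a meter away is inside the bubble and kept out.
me = Avatar("me")
stranger = Avatar("stranger", x=0.5, y=0.0)
print(can_approach(me, stranger))   # False: inside the 1.2 m bubble
stranger.x = 2.0                    # the stranger steps back
print(can_approach(me, stranger))   # True: outside the bubble
me.muted.add("stranger")
print(audible(me, stranger))        # False: muted users are silenced
```

The key design point this sketch captures is that the protections are enforced on the target's own client, so they keep working even when an attacker ignores platform rules.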
Social VR platforms develop reputations for the experiences users can expect. VR Chat has easy-to-access tools and reporting systems, but the virtual ecosystem is famous as a breeding ground for toxic behavior. It is expected and “normal” to be harassed and attacked on that platform, an unfortunate consequence of a moderation staff that is not visibly present and enforcing the platform’s rules for behavior.
AltspaceVR is known for its more adult and conservative atmosphere where intolerance and bad behavior are not welcome. It has well-trained and qualified staff moderators called Community Support Technicians (CST) or Admins adhering to the strongly written AltspaceVR Community Standards and AltspaceVR Terms of Service. The AltspaceVR Campfire, a public social space in the social and events VR platform, was recently closed by Microsoft, likely in response to increased press coverage of bullying and attacks and the growing number of complaints of toxicity in that public virtual space. Microsoft’s goal is to make AltspaceVR a safer space with stronger individual tools, and it has hired additional moderation staff to patrol the platform.
Facebook/Meta Horizon is still new enough that its community is still being formed. This is a critical stage for the company and its community, as once a reputation is set, it is very hard to change the minds of the masses. Meta’s Reality Labs publishes its Responsible Innovation Principles, Conduct in VR Policy, and Oculus Meta Quest Terms of Service, but as many reporters learned, these are loosely and inconsistently enforced.
For more information on the various terms of service and community guides and policies for platforms Educators in VR works with, see our Legal Policies page.
Cyberbullying Prevention Education and Training
The Educators in VR Cyberbullying team has explored the research and apps over the years used for bullying prevention, education, and empathy training in schools and businesses.
In 2016, university students in Canada were so moved by the suicide of Amanda Todd at 15 years of age that they worked together to create an anti-bullying VR app for the Imagine Cup competition. In 2018, the Les Ateliers 360 project (French) was developed by the Jasmin Roy Foundation to help teenagers cope with the realities they see every day at school, but in a controlled, educational environment in virtual reality. University of Warwick students worked with researchers on FearNot!, an anti-bullying VR program for teaching intervention techniques to children. Another research project, by Stavroulia et al., titled “A 3D virtual environment for training teachers to identify bullying,” explored training techniques for teachers.
In 2019, a New Jersey elementary school introduced a VR program to help students deal with bullies in school and online, which received widespread acclaim for helping young children prepare for present and future bullying situations. Many of these apps were created organically by high school and college research students and are no longer supported.
Several schools and VR educational apps and classroom management tools offer bullying and empathy training. The US Air Force, along with other military and corporate organizations, is using VR for sexual harassment, anger management, and other human resources training. Other companies are working on versions for corporate human resources training.
Sadly, many of the apps for schools and businesses are not well supported by developers or available on most common headset stores. Educators in VR is working with educational industry associations and organizations to develop standards and practices to encourage anti-bullying training be incorporated into headsets, stores, and more easily accessible to everyone. We also support the need for more research and diversity in anti-bullying in immersive education and training.
Hope Against the Bullies
One of the most hopeful statements on this issue was made by Phil Spencer of Microsoft Xbox in an interview with The New York Times. He said that he would embrace a system that allows players to either bring their list of “banned users” across platforms, or work with the industry to create a multi-platform ban strategy like the one used with Activision’s Call of Duty (Activision was recently purchased by Microsoft), which acts much like the ban-list network used by casinos and hotels in Vegas, Tahoe, and elsewhere. Customers who cross the line in one casino or hotel are monitored if they leave and go to another. If they repeat the behavior, they are banned across the network of casinos and hotels, protecting the other customers. A cross-platform alliance would require a great deal of collaboration and oversight, and would likely be welcomed by many.
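The casino-style ban network described above can be sketched as a shared registry that cooperating platforms report into. This is purely a toy illustration of the idea, not anything Microsoft or Activision has announced; the class name, the two-platform threshold, and the API are all hypothetical.

```python
from collections import defaultdict

class SharedBanRegistry:
    """Hypothetical registry shared by cooperating platforms.

    A user banned on enough member platforms is flagged network-wide,
    much like the casino/hotel ban networks described above."""

    def __init__(self, network_ban_threshold: int = 2):
        self.threshold = network_ban_threshold
        # user_id -> set of platforms that have banned that user
        self.bans = defaultdict(set)

    def report_ban(self, user_id: str, platform: str) -> None:
        """A member platform reports that it banned this user."""
        self.bans[user_id].add(platform)

    def is_network_banned(self, user_id: str) -> bool:
        """Flagged across the whole network once the threshold is reached."""
        return len(self.bans[user_id]) >= self.threshold

# Example: one ban alone doesn't trigger the network, a repeat does.
registry = SharedBanRegistry(network_ban_threshold=2)
registry.report_ban("troll42", "PlatformA")
print(registry.is_network_banned("troll42"))   # False: banned on one platform so far
registry.report_ban("troll42", "PlatformB")
print(registry.is_network_banned("troll42"))   # True: repeated behavior, flagged network-wide
```

A threshold above one mirrors the "monitored, then banned on repeat behavior" flow of the casino model; the real difficulty, as the article notes, is the collaboration, identity-matching, and oversight such a registry would require.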
Sadly, this isn’t a new topic, no matter how shocked reporters are when investigating virtual assaults. These attacks represent long-held misogynist views that women deserve it. Arwa Mahdawi of The Guardian reminds us that Facebook began as “Facemash,” an online tool where students could “rate their female classmates’ attractiveness.” She said:
Sexual harassment in digital spaces has been a problem ever since the advent of the internet. But, as the beta tester who was groped on Horizon Worlds noted, the immersive nature of virtual reality adds a whole new level of violation. The whole point of virtual reality is to trick your brain into thinking your body is actually experiencing something – I don’t need to spell out just how awful that has the potential to be when it comes to online harassment. And yet there are still far too many people who are quick to dismiss or diminish the concept of virtual abuse because there aren’t “real” bodies involved. Regulation also hasn’t kept up with the pace of innovation and harassment in virtual reality is a legal grey area. As MIT Technology Review points out, there is also “no body that’s plainly responsible for the rights and safety of those who participate anywhere online, let alone in virtual worlds.” The likes of Zuckerberg want us to think the metaverse is the future, but it’s just a new venue for age-old problems nobody seems to want to solve.
On a very personal note, I was involved with Alternatives to Fear, a non-profit sexual assault prevention education organization that fought hard for over 30 years for US state laws to be passed on sexual assault, marital rape, and other civil rights protections. During testimony in one state in the 1980s, a congressman said that changing the marital rape laws was wrong because “if you can’t rape your wife, who can you?” That sounds like old-fashioned thinking until I read recently that an Indian politician apologized for saying, “There is a saying that when rape is inevitable, lie down and enjoy it.”
Attitudes are changing, but these views make it harder for people to report and challenge the behavior. Still, society is slowly educating itself, and the laws are slowly catching up to support the changing standards. Instead of victim blaming and asking “What did you do to make this happen,” there are active conversations across the metaverse and social media exploring the causes and mental health issues for those who target and abuse others and better supporting the recipients of their unwanted attention. Tools and laws are slowly moving into place to not just protect individuals, but improve the security for our social activities online.
With all the negativity and sensationalism around the news of bullying in the metaverse, there are moments of joy in anti-bullying activism.
This month, New Zealand launched a campaign to tone down abusive rhetoric and the “rising tide of division and antisocial behavior” by asking, “What would your mother say?”
Recently, a community of Macedonian parents complained about and harassed a young girl with special needs and her family over the girl being taught alongside their children. The country’s president, Stevo Pendarovski, said, “We are all equal in this society. I came here to give my support and to raise awareness that inclusion is a basic principle.” He walked the young girl to her school as a sign of his commitment, a powerful statement from a country that not long ago was torn apart by ethnic civil war.
Together, Educators in VR and our Cyberbullying Team members are helping to educate others on this very serious topic, so join us to help change the metaverse and the world.
To keep up with the Educators in VR Cyberbullying Team events and all Educators in VR events in AltspaceVR, subscribe to the Educators in VR Channel and to our Educators in VR Google Calendar of events, workshops, classes, and conferences.
Articles on Cyberbullying News
The following are samples of recent news articles about toxicity and cyberbullying. Our team is thrilled that these topics are making news and that conversations are happening all around to bring awareness to these serious issues after years in the shadows.
- Undercover journalist witnesses abuse in metaverse – BBC News
- Metaverse vs. employment law: The reality of the virtual workplace – Ars Technica
- Meta Wouldn’t Tell Us How It Enforces Its Rules In VR, So We Ran A Test To Find Out – BuzzFeed News
- Facebook’s Metaverse is already a hot spot for child predators, claim experts – Stealth Optional
- Kids are flocking to Facebook’s ‘metaverse.’ Experts worry predators will follow – The Washington Post
- Meta CTO Andrew Bosworth warns of ‘existential threat’ in VR moderation – The Verge
- Meta told to overhaul policies over doxxing fears – BBC News
- When virtual reality feels real, so does the sexual harassment – Reveal
- 4 women in engineering discuss harassment, isolation and perseverance – TechCrunch
- ‘Toxic’: Online abuse drives women, girls from social media – Social Media News – Al Jazeera
- Oculus Quest 2: Meta to discuss children’s VR safety with watchdog – BBC News
- I get abuse and threats online – why can’t it be stopped? – BBC News
- The Metaverse Via Oculus Is Awkward if You’re a Woman. And Beware of Griefers – Bloomberg
- Meta: Woman Claimed Male Avatars Virtually Groped Her in Horizon Venues
- As Facebook plans the metaverse, it struggles to combat harassment in VR – CNET
- In VR, there are no rules, so parents are making up their own – CNN
- Andrew Bosworth on Meta’s next big challenge: Harassment in the metaverse – Protocol
- Study provides first evidence of a causal link between perceived moral division and support for authoritarian leaders – PsyPost
- Facebook’s Expanded VR Policies Disallow “invading personal space”
- The metaverse has a groping problem already – MIT Technology Review
- 50% of Female VR Players Experience Sexual Harassment, Here’s What Companies Should Do – Tech Times
- Trolls on ‘dragging sites’ can ruin lives. It’s time they answered for their actions – Sali Hughes – The Guardian
- UK data watchdog seeks talks with Meta over child protection concerns – Meta – The Guardian