Presented by Cohere
To create healthy online communities, companies need better strategies to weed out harmful posts. In this VB On-Demand event, AI/ML experts from Cohere and Google Cloud share insights into the new tools changing how moderation is done.
Game players experience a staggering amount of online abuse. A recent study found that five out of six adults (ages 18–45), more than 80 million gamers, have experienced harassment in online multiplayer games. Three out of five young gamers (ages 13–17), nearly 14 million in total, have been harassed. Identity-based harassment is on the rise, as are instances of white supremacist rhetoric.
It’s happening in an increasingly raucous online world that produces an estimated 2.5 quintillion bytes of data every day, making content moderation, always a tricky, largely human-driven proposition, a bigger challenge than ever.
“Competing arguments suggest it’s not a rise in harassment, it’s just more visible because gaming and social media have become more popular — but what it really means is that more people than ever are experiencing toxicity,” says Mike Lavia, enterprise sales lead at Cohere. “It’s causing a lot of harm to people and it’s causing a lot of harm in the way it creates negative PR for gaming and other social communities. It’s also asking developers to balance moderation and monetization, so now developers are trying to play catch up.”
Human-based methods aren’t enough
The traditional way of dealing with content moderation was to have a human look at the content, validate whether it broke any trust and safety rules, and tag it as either toxic or non-toxic. Humans still do most of this work, largely because they’re perceived as the most accurate judges of content, especially images and video. However, training humans on trust and safety policies and pinpointing harmful behavior takes a long time, Lavia says, because toxicity is often not black and white.
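The workflow described above, rules first, human review for anything ambiguous, can be sketched in a few lines. This is an illustrative toy, not any real platform's policy: the blocklist terms, the signals, and the `moderate` function are all hypothetical.

```python
# Toy sketch of a traditional moderation flow: check simple trust-and-safety
# rules, and route anything ambiguous to a human review queue (the slow,
# hard-to-scale step the article describes). All rules here are placeholders.

BLOCKLIST = {"slur1", "slur2"}  # stand-in terms, not a real policy list

def moderate(post: str):
    """Return a (decision, reason) pair: 'toxic', 'non-toxic', or 'needs_review'."""
    words = set(post.lower().split())
    if words & BLOCKLIST:
        return "toxic", "matched blocklist"
    if "report" in words or post.isupper():
        # Unclear cases (e.g. all-caps shouting, user reports) go to humans.
        return "needs_review", "ambiguous signal"
    return "non-toxic", "no rule matched"

print(moderate("nice game everyone"))  # ('non-toxic', 'no rule matched')
```

The weakness the article points to is visible even in this sketch: every new toxic pattern means a new hand-written rule, and everything the rules miss lands on human reviewers.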
“The way that people communicate on social media and games, and the way that language is used, especially in the last two or three years, is shifting rapidly. Constant global upheaval impacts conversations,” Lavia says. “By the time a human is trained to understand one toxic pattern, you might be out of date, and things start slipping through the cracks.”
Natural language processing (NLP), the ability of a computer to understand human language, has progressed in leaps and bounds over the last few years, and has emerged as an innovative way to identify toxicity in text in real time. Powerful models that understand human language are finally available to developers, and affordable enough, in cost, resources and scalability, to integrate into existing workflows and tech stacks.
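At its core, the approach is text classification: train on labeled examples, then score new messages. The stdlib sketch below is only a stand-in for that interface; a real deployment would call a large language model (for example, a hosted classification endpoint) rather than this bag-of-words toy, and the example texts and labels are invented.

```python
from collections import Counter

# Minimal stand-in for an NLP toxicity classifier, to show the
# train-on-labeled-examples / classify-new-text interface only.

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    model = {}
    for text, label in examples:
        bag = model.setdefault(label, Counter())
        bag.update(text.lower().split())
    return model

def classify(model, text):
    """Score each label by overlapping word counts; return the highest-scoring one."""
    words = text.lower().split()
    scores = {label: sum(bag[w] for w in words) for label, bag in model.items()}
    return max(scores, key=scores.get)

model = train([
    ("you are garbage uninstall", "toxic"),
    ("nice shot well played", "non-toxic"),
])
print(classify(model, "well played"))  # non-toxic
```

The advantage of swapping this toy for a large language model is exactly what the article claims: the model generalizes from examples instead of exact word matches, so it can catch rephrased or evolving toxicity.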
How language models evolve in real time
Part of moderation is staying abreast of current events, because the outside world doesn’t stay outside — it’s constantly impacting online communities and conversations. Base models are trained on terabytes of data scraped from the web, and then fine-tuning keeps models relevant to the community, the world and the business. An enterprise brings its own proprietary data to fine-tune a model to understand its specific business or the specific task at hand.
“That’s where you can extend a model to then understand your business and execute the task at a very high-performing level, and they can be updated pretty quickly,” Lavia says. “And then over time you can create thresholds to kick off the retraining and push a new one to the market, so you can create a new intent for toxicity.”
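The retraining thresholds Lavia mentions can be sketched as an accumulator: human corrections pile up, and once enough have been collected, a batch is handed off to a fine-tuning job. The class name, threshold value, and hand-off are all illustrative assumptions, not Cohere's implementation.

```python
# Hedged sketch of a retraining trigger: reviewer corrections accumulate,
# and crossing a threshold releases a batch for fine-tuning. The threshold
# and interface are invented for illustration.

RETRAIN_THRESHOLD = 100

class RetrainTrigger:
    def __init__(self, threshold=RETRAIN_THRESHOLD):
        self.threshold = threshold
        self.corrections = []

    def add_correction(self, text, corrected_label):
        """Store a human-corrected label; return a training batch once full."""
        self.corrections.append((text, corrected_label))
        if len(self.corrections) >= self.threshold:
            batch, self.corrections = self.corrections, []
            return batch  # hand this batch to a fine-tuning job
        return None
```

Keyed to the quote above: each released batch is what lets the platform "create a new intent for toxicity" without retraining from scratch.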
You might flag any conversation about Russia and Ukraine, which isn’t necessarily toxic but is worth tracking. If a user’s messages are flagged repeatedly within a session, that user can be escalated: monitored, and reported if necessary.
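That session-level escalation, where single messages are merely tracked but repeated flags trigger review, might look like the following. The threshold of three is an invented example, as is the `SessionMonitor` name.

```python
from collections import defaultdict

# Sketch of session-level escalation: count flagged messages per user and
# escalate once a (hypothetical) threshold is crossed within the session.

FLAG_THRESHOLD = 3  # illustrative value, not from the article

class SessionMonitor:
    def __init__(self, threshold=FLAG_THRESHOLD):
        self.threshold = threshold
        self.counts = defaultdict(int)

    def record(self, user_id, is_flagged):
        """Tally flagged messages; return True once the user should be escalated."""
        if is_flagged:
            self.counts[user_id] += 1
        return self.counts[user_id] >= self.threshold

monitor = SessionMonitor()
hits = [monitor.record("u1", True) for _ in range(3)]
print(hits)  # [False, False, True]
```

Separating "flag the message" from "escalate the user" is what lets borderline topics be tracked without punishing a single ambiguous post.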
“Previous models wouldn’t be able to detect that,” he says. “By retraining the model to include that type of training data, you kick off the ability to start monitoring for and identifying that type of content. With AI, and with these platforms like what Cohere is developing, it’s very easy to retrain models and continually retrain over time as you need to.”
You can label misinformation, political talk, current events — any kind of topic that doesn’t fit your community, and causes the kind of division that turns users off.
“What you’re seeing with Facebook and Twitter and some of the gaming platforms, where there’s significant churn, it’s primarily due to this toxic environment,” he says. “It’s hard to talk about inclusivity without talking about toxicity, because toxicity is degrading inclusivity. A lot of these platforms have to figure out what that happy medium is between monetization and moderating their platforms to make sure that it’s safe for everyone.”
To learn more about how NLP models work and how developers can leverage them, how to build and scale inclusive communities cost effectively and more, don’t miss this on-demand event!
- Tailoring tools to your community’s unique vernacular and policies
- Increasing the capacity to understand the nuance and context of human language
- Using language AI that learns as toxicity evolves
- Significantly accelerating the ability to identify toxicity at scale
Speakers:
- David Wynn, Head of Solutions Consulting, Google Cloud for Games
- Mike Lavia, Enterprise Sales Lead, Cohere
- Dean Takahashi, Lead Writer, GamesBeat (moderator)
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.