YouTube claims its crackdown on borderline content is actually working

After making changes to its recommendation algorithm in an effort to reduce the spread of “borderline” content — videos that toe the line between what’s acceptable and what violates YouTube’s terms of service — YouTube has seen a 70 percent decrease in watch time on those types of videos by non-subscribers.

More than 30 changes have been made to the way videos are recommended since January 2019, according to a new blog post from YouTube outlining how the company is trying to tackle borderline content. YouTube doesn’t say exactly what changed, nor does the post state how many videos were being recommended before and after the changes were implemented. Instead, it describes how external moderators apply specific criteria to determine whether a flagged video is borderline. That information is then used to train the machine learning tools YouTube relies on to police the platform.

“Each evaluated video receives up to nine different opinions and some critical areas require certified experts,” the blog post reads. “For example, medical doctors provide guidance on the validity of videos about specific medical treatments to limit the spread of medical misinformation. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models.”
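
YouTube doesn’t publish how those evaluator ratings are combined, but the general idea the post describes — multiple raters per video, with their consensus used as a training target for a classifier — can be sketched roughly as follows. Every name, rating scale, and threshold below is a hypothetical illustration, not YouTube’s actual system:

```python
# Hypothetical sketch: aggregating multiple evaluator ratings into a
# consensus label that could serve as a training target for a classifier.
# The 0..1 scale and the 0.5 threshold are illustrative assumptions.
from statistics import mean

def consensus_label(ratings, threshold=0.5):
    """Average up to nine per-video ratings (0.0 = fine, 1.0 = borderline)
    and return a binary consensus label."""
    if not ratings:
        raise ValueError("need at least one evaluator rating")
    return 1 if mean(ratings) >= threshold else 0

# Example: seven evaluators rate one flagged video.
ratings = [0.9, 0.8, 1.0, 0.7, 0.2, 0.9, 0.8]
print(consensus_label(ratings))  # -> 1 (treated as borderline)
```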

Some of the criteria moderators apply were demonstrated in a recent interview YouTube CEO Susan Wojcicki gave to 60 Minutes. Wojcicki walked reporter Lesley Stahl through a couple of videos that might count as borderline content. One video, which Wojcicki deemed violent, focused on Syrian prisoners but was allowed to remain up because it was uploaded by a group trying to expose issues in the country. Another used World War II footage that many would consider acceptable for historical context, but Wojcicki showed how hateful groups could use it to spread white supremacist rhetoric. It was banned.

YouTube recently changed its hate speech policies to address topics like white nationalism, which is now considered a violation of its terms of service. People might take that to mean that any supremacist statement would result in a ban, but that’s not necessarily true. When pressed on the issue by Stahl, Wojcicki defended YouTube’s stance that a video is judged in context, adding that if a video simply said “white people are superior” with no other context, it would be allowed to stay up.

“Nothing is more important to us than ensuring we are living up to our responsibility,” the blog post adds. “We remain focused on maintaining that delicate balance which allows diverse voices to flourish on YouTube — including those that others will disagree with — while also protecting viewers, creators and the wider ecosystem from harmful content.”

Part of the way YouTube is tackling the issue is surfacing more authoritative sources for subjects like “news, science and historical events, where accuracy and authoritativeness are key.” YouTube’s teams are trying to do that by addressing three different but related issues: surfacing more authoritative sources like The Guardian and NBC when searching for news topics, providing more reliable information during breaking news events, and providing additional context to users alongside videos.

That means when topics like “Brexit” or “anti-vaccination” are searched, the top results should show videos from reliable, authoritative news sources, even if their engagement is lower than that of other videos covering the subject. By doing this during breaking news events like mass shootings or terrorist attacks, YouTube says it has “seen that consumption on authoritative news partners’ channels has grown by 60 percent.”
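
The blog post doesn’t describe the ranking mechanics, but the trade-off it implies — preferring an authoritative source even when its engagement is lower — can be sketched as a weighted score. The fields, weights, and example titles below are illustrative assumptions, not YouTube’s formula:

```python
# Hypothetical sketch of authority-weighted ranking for news queries.
# The 0.7/0.3 split is an illustrative assumption, not a real parameter.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    engagement: float  # normalized 0..1 (e.g., watch-time based)
    authority: float   # normalized 0..1 (e.g., from a vetted source list)

def rank_for_news(videos, authority_weight=0.7):
    score = lambda v: authority_weight * v.authority + (1 - authority_weight) * v.engagement
    return sorted(videos, key=score, reverse=True)

results = rank_for_news([
    Video("Viral hot take on Brexit", engagement=0.95, authority=0.1),
    Video("NBC explainer on Brexit", engagement=0.60, authority=0.9),
])
print([v.title for v in results])  # the authoritative explainer ranks first
```

With an authority weight this high, the lower-engagement explainer scores 0.81 against the viral video’s 0.36, which is the behavior the post describes in qualitative terms.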

It’s good to see YouTube fighting this type of problematic content. The problem is that it’s unclear from the new blog post, or from any other public interview Wojcicki and other executives have given, what those numbers translate to overall. A 70 percent decrease in people watching borderline content from channels they’re not subscribed to is important; it acknowledges the rabbit hole effect journalists, academics, and former YouTube engineers have cited for years. The question remains whether what’s left still translates to a substantial number of viewing hours. YouTube’s blog post doesn’t say.

“Content that comes close to — but doesn’t quite cross the line of — violating our Community Guidelines is a fraction of 1 percent of what’s watched on YouTube in the US,” the blog post reads.

Some 500 hours of content are uploaded to YouTube every minute. That’s 720,000 hours of content every single day, meaning it would take 30,000 days of continuous viewing to watch the videos uploaded in just one day. It’s a lot of video, much of which is watched in the United States. A decrease in people watching borderline content is good, but until YouTube releases specific numbers, it’s difficult to assess what that really means.
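
For scale, the arithmetic works out as follows. The ~1 billion hours of daily watch time is YouTube’s own widely cited global figure (not the US figure the blog post’s “fraction of 1 percent” refers to) and is used here only to illustrate why a small percentage can still be enormous:

```python
# Back-of-the-envelope arithmetic from the figures above.
upload_rate_hours_per_min = 500
hours_per_day = upload_rate_hours_per_min * 60 * 24  # 720,000 hours uploaded daily
days_to_watch_one_day = hours_per_day / 24           # 30,000 days of viewing
print(hours_per_day, days_to_watch_one_day)

# Why "a fraction of 1 percent" is hard to interpret: against YouTube's
# widely cited ~1 billion hours watched per day worldwide, even 0.1 percent
# would be a million hours of borderline content daily. Illustrative only.
global_watch_hours_per_day = 1_000_000_000  # assumption for illustration
print(0.001 * global_watch_hours_per_day)   # 1,000,000 hours
```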

