By Aara Ramesh
According to a study released by the Mozilla Foundation on Wednesday, July 7, the video platform giant YouTube continues to push “disturbing and hateful” content that violates its own Community Guidelines to users, prolonging the long-standing controversy over the site’s highly secretive recommendation algorithm.
YouTube is the second-most visited website in the world, behind only its parent company Google (owned by Alphabet Inc.). The video-sharing platform has said that its recommendation algorithm is responsible for 70% of total viewing time on the site, accounting for an estimated 700 million hours daily. YouTube has also said that “videos promoted by the recommendation system result in more than 200 million views a day from its homepage,” and that the system draws on more than 80 billion pieces of information.
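Taken together, those two figures imply a rough estimate of total daily watch time on the platform. A back-of-the-envelope check, assuming the 70% share and the 700 million recommended hours describe the same period:

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption: the 70% share and the 700 million recommended hours
# describe the same daily totals.
recommended_hours_daily = 700e6   # hours/day attributed to recommendations
recommendation_share = 0.70       # recommendations' share of total watch time

total_hours_daily = recommended_hours_daily / recommendation_share
print(f"Implied total watch time: {total_hours_daily / 1e9:.1f} billion hours/day")
```

That works out to roughly one billion hours of video watched on YouTube every day.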
Mozilla’s investigation relied on RegretsReporter, an open-source browser extension through which users could volunteer their data for study, in essence turning each participant into a watchdog. Between July 2020 and May 2021, a total of 37,380 users in 91 countries participated, flagging 3,362 videos as “regrettable.” Mozilla says that this is the “largest-ever crowdsourced investigation into YouTube’s algorithm.”
The topics reported as “regrettable” by users included political misinformation, unsubstantiated Covid-19 “fear-mongering,” violent and graphic imagery, and “children’s” cartoons that featured inappropriate sexual content. Some videos clearly violated YouTube’s Community Guidelines by featuring “hate speech, debunked political and scientific misinformation,” while others fell into the “borderline” category: content that edges close to breaking the rules but does not overtly do so.
According to Mozilla, 71% of the videos that were reported as “regrettable” were “actively recommended” by YouTube itself. These suggested videos were also 40% more likely to be flagged by the volunteers, compared to those they searched out for themselves. Around 9% of the videos flagged as “regrettable” have since been removed from the platform, sometimes for rules violations. However, before they were taken down, these videos collected about 160 million views between them.
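For a sense of scale, a bit of arithmetic on the numbers above (an illustrative estimate, assuming the 9% removal rate and the 160 million views refer to the same set of videos):

```python
# Rough arithmetic from the figures quoted above. Assumption: the 9%
# removal rate and the 160 million pre-removal views describe the same
# set of videos.
flagged_videos = 3362
removed_videos = round(0.09 * flagged_videos)   # roughly 300 videos
total_views_before_removal = 160e6

avg_views = total_views_before_removal / removed_videos
print(f"~{removed_videos} removed videos, averaging {avg_views:,.0f} views each")
```

In other words, each later-removed video had, on average, accumulated roughly half a million views before YouTube took it down.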
Some of the examples cited by Mozilla in its report include a person who was recommended a video titled “Man humilitates [sic] feminist in viral video” after searching for content related to the U.S. military. Another was directed to an extreme right-wing political video after watching an Art Garfunkel music video.
The controversy over YouTube’s algorithm stems from what it was originally created to do. YouTube aims to promote “stickiness,” because that is how it remains profitable: the more engaging the content, the longer a user stays on the site and the more likely they are to return. This, in turn, brings more eyeballs to the advertisements that play before and during videos, boosting revenue. The way to enhance stickiness is to keep recommending interesting content to users, prompting them to keep clicking “next” or simply letting suggested videos auto-play.
Because the technology is not human, it does not actively look out for topics that might be “harmful” or “disturbing.” Rather, it tracks videos that are already demonstrating stickiness and drawing the most traffic, and queues them up for the viewer. Those videos, unfortunately, are more likely to be rooted in extreme emotions such as fear, doubt, anger, and confusion.
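To make that mechanism concrete, here is a deliberately simplified toy sketch. YouTube’s actual recommender is proprietary and vastly more complex; the point of this illustration, with entirely invented video titles and numbers, is only that a system ranking purely on predicted engagement has no built-in notion of harm:

```python
# Illustrative toy only: YouTube's real recommender is proprietary and far
# more sophisticated. This sketch shows the engagement-first principle
# described above -- rank candidates purely by expected watch time, with
# no notion of whether the content is harmful.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    avg_watch_minutes: float   # proxy for "stickiness"
    clickthrough_rate: float   # fraction of impressions that get clicked

def engagement_score(v: Video) -> float:
    # Expected minutes watched per impression: CTR * average watch time.
    return v.clickthrough_rate * v.avg_watch_minutes

def recommend(candidates: list[Video], k: int = 3) -> list[Video]:
    # Note: nothing here inspects the *topic* of a video. Outrage-driven
    # clips that hold attention rank just as highly as anything else.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

videos = [
    Video("Calm gardening tutorial", 4.0, 0.02),
    Video("Outrage-bait conspiracy clip", 9.0, 0.08),
    Video("Music video", 3.5, 0.05),
]
for v in recommend(videos, k=2):
    print(v.title, round(engagement_score(v), 3))
```

In this toy example the outrage-bait clip tops the ranking simply because it holds attention longest, which is exactly the dynamic Mozilla’s findings describe.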
Ostensibly, the system was designed to help the user as much as to generate profit for the company. The aim is to suggest new topics for curious users to discover, broadening their horizons and exposing them to fresh ideas.
The difference between YouTube and companies like Facebook and Twitter lies in where the content users view comes from. On the latter two platforms, people choose to follow or engage with particular accounts. YouTube, by contrast, takes the initiative and actively suggests content that a user might not have stumbled upon otherwise.
Mozilla has been working for years to get YouTube to address this problem, as part of its ongoing “trustworthy AI” campaign. In 2019, it solicited anecdotes from users about their #YouTubeRegrets. The responses were shocking, revealing that people had been exposed to racist, violent, and extreme content after watching innocuous videos.
According to an investigation by the Wall Street Journal, YouTube recommends political and conspiracy-theory channels even to viewers who have shown no interest in such material. For those who do display a political bias, it recommends increasingly extreme content that echoes that bias.
Brandi Geurkink, Mozilla’s Senior Manager of Advocacy, had this to say: “YouTube needs to admit their algorithm is designed in a way that harms and misinforms people. […] Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies. […] Mozilla hopes that these findings—which are just the tip of the iceberg—will convince the public and lawmakers of the urgent need for better transparency into YouTube’s AI.”
In response to Mozilla’s study, YouTube said, “We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1 percent.”
According to estimates, YouTube is set to earn about $30 billion in advertising revenue this year. A study conducted earlier this year by the Pew Research Center found that, of all the online platforms surveyed, YouTube was the most widely used, with roughly eight in ten Americans saying they had used the site, up from about 7.3 in ten in 2019. Additionally, Pew found that 54% of users access YouTube daily, with around a third saying they visit the site several times a day.
You can read the full report, including Mozilla’s suggested remedies, here.