
YouTube’s algorithm pushes violent content and misinformation: study

YouTube’s algorithm still amplifies violent videos, hateful content and misinformation despite the company’s efforts to limit the reach of such videos, according to a study published this week.

The Mozilla Foundation, a software nonprofit that is outspoken on privacy issues, conducted the 10-month investigation, which found that 71 percent of all videos flagged by volunteers as disturbing were recommended by YouTube’s algorithm.

The study, which Mozilla described as “the largest-ever crowdsourced investigation into YouTube’s algorithm,” used data volunteered by users who installed a Mozilla extension on their web browser that tracked their YouTube usage and allowed them to report potentially problematic videos.

Photo: Supporters of President Donald Trump rally to reopen California as the coronavirus pandemic worsened, May 16, 2020, in Woodland Hills, California. YouTube’s algorithm continued to suggest videos about COVID-19 pandemic conspiracies. Getty Images

The researchers could then go back and see whether the flagged videos had been suggested by the algorithm or whether the users had found them on their own.

More than 37,000 users from 91 countries installed the extension, and the volunteers flagged 3,362 “regrettable videos” between July 2020 and May 2021.

Mozilla then brought in 41 researchers from the University of Exeter to review the flagged videos and determine if they might violate YouTube’s Community Guidelines.

Of the more than 3,300 flagged videos, 71 percent were suggested by the algorithm, according to the study.

Among them were a sexualized parody of “Toy Story” and an election video that falsely claimed Microsoft co-founder Bill Gates hired students involved with the Black Lives Matter movement to count ballots in battleground states.

Others included conspiracies about 9/11 and the COVID-19 pandemic, as well as the promotion of white supremacy, according to the report.

YouTube later removed about 200 of the videos that participants flagged, roughly 9 percent of those recommended by its algorithm.

Photo: Members of the National Socialist Movement (NSM) and other white nationalists march toward the entrance to Greenville Street Park in Newnan, Georgia. Videos promoting white supremacy were also suggested by the company’s algorithm. AFP via Getty Images

But the videos had already accumulated more than 160 million views before they were taken down, according to Mozilla.

A spokesperson for YouTube said it’s not clear how the study defined objectionable videos and questioned some of the findings.

“We welcome research on our recommendations, and we’re exploring more ways to bring in outside researchers to study our systems,” the spokesperson said in a statement.

“But it’s hard for us to draw any conclusions from this report, as they never define what ‘regretted’ means and only share a few videos, not the entire data set,” the statement continued.

“For example, some of the content listed as ‘regrettable’ includes a pottery-making tutorial, a clip from the TV show Silicon Valley, a DIY crafts video, and a Fox Business segment.

“Our public data shows that consumption of recommended borderline content is significantly below 1% and only 0.16-0.18% of all views on YouTube come from violative content,” the statement added.

“We’ve introduced over 30 changes to our recommendation system in the past year, and we’re always working to improve the experience on YouTube.”