According to a study published this week, YouTube’s algorithms still recommend violent videos, hateful content and misinformation, despite the company’s efforts to limit the reach of such videos.
The Mozilla Foundation, a software nonprofit that has been outspoken on privacy issues, conducted a 10-month investigation that found that 71 percent of all videos that volunteers reported as disturbing had been recommended by YouTube’s own algorithm.
The study, which Mozilla described as “the largest crowdsourced investigation into YouTube’s algorithm,” used data volunteered by users who installed a Mozilla browser extension that tracked their YouTube usage and allowed them to report potentially problematic videos.
The researchers could then go back and see whether each flagged video had been algorithmically recommended or whether the user had found it on their own.
More than 37,000 users in 91 countries installed the extension, and volunteers flagged 3,362 “regrettable” videos between July 2020 and May 2021.
Mozilla brought in 41 researchers from the University of Exeter to review the flagged videos and determine whether they may violate YouTube’s Community Guidelines.
According to the study, 71 percent of the more than 3,300 flagged videos were algorithmically suggested.
They included a sexualized parody of “Toy Story” and an election video that falsely claimed Microsoft founder Bill Gates had hired students affiliated with the Black Lives Matter movement to count ballots in battleground states.
According to the report, others included white supremacist content and conspiracy theories about 9/11 and the COVID-19 pandemic.
YouTube later removed about 200 of the videos flagged by participants, roughly 9 percent of the total. But according to Mozilla, those videos had already been viewed more than 160 million times before they were taken down.
A spokesperson for YouTube said it was unclear how the study defined “regrettable” videos and questioned some of its findings.
“We welcome the research on our recommendations, and we are exploring more ways for outside researchers to study our systems,” the spokesperson said in a statement.
“But it is difficult for us to draw any conclusions from this report, as they never define what ‘regrettable’ means and only share a few of the videos, not the entire data set,” the statement continued.
“For example, some of the content listed as ‘regrettable’ includes a pottery-making tutorial, a clip from the TV show Silicon Valley, a DIY craft video, and a Fox Business segment.
“Our public data shows that consumption of recommended borderline content is well below 1 percent, and that only 0.16-0.18 percent of all views on YouTube come from violative content,” the statement said.
“We’ve made over 30 changes to our recommendation system over the past year, and we’re always working to improve the experience on YouTube.”