“Research volunteers encountered a range of regrettable videos, reporting everything from COVID fear-mongering to political misinformation to wildly inappropriate ‘children’s’ cartoons,” the Mozilla Foundation wrote in a post.
The largest-ever crowdsourced probe into YouTube’s controversial recommendation algorithm found that the automated software continues to recommend videos that viewers considered “disturbing and hateful,” Mozilla said, including ones that violate YouTube’s own content policies.
The study involved nearly 38,000 YouTube users across 91 countries who volunteered data to Mozilla about the “regrettable experiences” they have had on the world’s most popular video content platform. Overall, participants flagged 3,362 regrettable videos between July 2020 and May 2021, with the most frequent “regret” categories being misinformation, violent or graphic content, hate speech, and spam/scams.
Mozilla said that almost 200 videos that YouTube’s algorithm recommended to volunteers have since been removed from the platform, including several that YouTube deemed to have violated its own policies.
“YouTube needs to admit their algorithm is designed in a way that harms and misinforms people,” said Brandi Geurkink, Mozilla’s Senior Manager of Advocacy, in a statement. “Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies.”
“Mozilla hopes that these findings—which are just the tip of the iceberg—will convince the public and lawmakers of the urgent need for better transparency into YouTube’s AI,” Geurkink added.
YouTube did not respond to a request for comment from The Epoch Times, but a spokesperson told The Wall Street Journal that YouTube has reduced recommendations of content it considers harmful to less than 1 percent of videos viewed on the platform. Further, the outlet reported that YouTube’s safety team said its automated system detects 94 percent of videos that violate its policies and removes most of them before they get 10 views.
Mozilla’s report provides fresh insight into YouTube’s secretive recommendation algorithm, which the company itself acknowledged in a 2019 blog post was in need of tweaks. YouTube said that, since January 2019, it had “launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation,” with the company claiming that its actions have led to an average 70 percent drop in watch time for this kind of content.
“That said, there will always be content on YouTube that brushes up against our policies, but doesn’t quite cross the line,” YouTube said.
Mozilla’s report also found that the rate of “regrettable” videos was over 60 percent higher in non-English-speaking countries, most notably in Brazil, Germany, and France.