YouTube extends fact-checking functionality to US video searches amid COVID-19 pandemic
YouTube, the video streaming platform owned by Alphabet Inc’s Google, announced it would begin showing text and links from third-party fact-checkers to US users, part of its efforts to counter disinformation on the site during the COVID-19 pandemic.
The information panels, introduced last year in Brazil and India, will surface third-party fact-checked articles above search results for subjects such as “COVID and ibuprofen,” misleading statements such as “COVID-19 is a bioweapon,” and specific searches such as “did a tornado hit Los Angeles.”
Social media companies such as Facebook Inc and Twitter Inc are under pressure to fight misinformation about the coronavirus pandemic, from fake cures to conspiracy theories.
YouTube said in a blog post that more than a dozen US publishers, including FactCheck.org, PolitiFact and The Washington Post Fact Checker, are involved in its fact-checking network.
The corporation said it could not provide a full list of fact-checking partners.
YouTube began using information panels in 2018, surfacing links to sources such as Encyclopedia Britannica and Wikipedia for topics considered vulnerable to misinformation, such as “flat earth” theories. But it said in Tuesday’s blog post that the panels will now help counter disinformation amid a fast-changing news cycle.
The site has also recently begun linking to the World Health Organization, the Centers for Disease Control and Prevention or local health authorities in COVID-19-related videos and searches.
YouTube did not specify in the blog post how many search terms would trigger the fact-check boxes. It said it would “take some time to completely scale up our processes” as it rolled out the fact-checking functionality.
The feature will work only on searches, but the firm has previously said that its recommendation tool, which surfaces videos related to those a user has spent substantial time watching in the past, drives the bulk of overall “watch time.”
YouTube announced in January that it started to reduce borderline content recommendations or videos that could misinform viewers in negative ways, such as “videos promoting a bogus miracle cure for a serious illness.”
Major social media firms that vacated their offices during the pandemic have warned that relying more heavily on automated software could impact their content moderation. In March, Google said this might cause a spike in videos being erroneously deleted for policy violations.