Video sharing app TikTok will slap warning labels on videos it suspects contain “misinformation” and discourage users from sharing them. The move brings TikTok’s policies closer in line with those of Twitter.

TikTok already removes videos that its fact-checkers deem to contain “false” information. However, the Chinese-owned company is expanding on this policy, announcing on Wednesday (https://newsroom.tiktok.com/en-ie/new-prompts-help-people-consider-before-they-share) that videos suspected of containing, but not proven to contain, “misinformation” will be restricted.
Starting on Thursday in the US and Canada, and later this month globally, suspect videos will be “flagged as unsubstantiated content,” and viewers attempting to share them will be reminded of this and offered a chance to cancel their share.
Today we’re rolling out a feature to inform viewers when a video contains unsubstantiated content in an effort to reduce sharing. Learn more about how we continue to invest in media literacy and product experiences that help promote an authentic community. https://t.co/KZdMkYO1Uy pic.twitter.com/b3vbnXUX2s
— TikTokComms (@TikTokComms) February 3, 2021
Twitter introduced a similar policy in the run-up to the 2020 US presidential election, labeling certain tweets (usually ones raising concerns about voter fraud) as “disputed” and limiting retweets in order to protect “the integrity of the election conversation.” More recently, Twitter unveiled ‘Birdwatch,’ a feature that lets certain verified users add notes to posts they identify as “misinformation.” Amid cries of censorship from conservatives, Twitter reportedly plans further crackdowns, following its permanent suspension of former president Donald Trump from its platform last month.
Under its community guidelines, updated in December, TikTok bans “misinformation that incites hate or prejudice, misinformation related to emergencies that induces panic, medical misinformation, content that misleads community members about elections,” and “conspiratorial content that attacks a specific protected group.”
Even before last year’s election, TikTok banned “misinformation” related to Covid-19 and climate change, and partnered in August with PolitiFact and Lead Stories (both of which have been accused by conservatives of bias) to screen out election-related wrongthink before the vote in November.
In its announcement on Wednesday, TikTok said that during testing, its new labeling system decreased the rate at which users shared flagged videos by 24 percent, and reduced ‘likes’ on those videos by seven percent. Twitter reported a similar drop in sharing when it restricted the retweeting of “misinformation” before the election.