‘Misinformation on TikTok is a whole different beast’: How publishers are tackling the Ukraine-Russia war disinformation problem on TikTok 

Misinformation ricochets around the internet during any world event or political conflict — that’s nothing new — but TikTok poses new challenges, thanks to an algorithm that doesn’t favor breaking news and to limits on how users can interact with each other. So when misleading videos or false accounts of what’s happening on the ground in Ukraine get posted as the war with Russia unfolds, they can spread quickly on shock value and go unchecked indefinitely.

As usual, news publishers like CBS News, NowThis, The New York Times, The Washington Post and Vice World News are covering stories as diligently and truthfully as possible, but they’re also taking additional steps on TikTok — like hosting Q&As and regularly featuring reporters to familiarize their audiences with a trusted face — to address, and in some cases debunk, the viral videos of missile attacks and parachuting soldiers that young audiences are reacting to on the platform.

Sometimes that means reposting the misinformation and explaining what makes it false:

[Embedded TikTok from @nowthis: “Journalism 101: Not everything you see on social media is what it seems—that’s especially important as we watch Putin’s war on Ukraine”]

[Embedded TikTok from @washingtonpost: “No matter how devastating, enlightening or enraging a post is, wait to share it. Assume everything is suspect until you confirm its authenticity. #euphoria #medialiteracy”]

But other times, the strategy for getting more engagement and views on factual information is a bit more involved. First, let’s get into why TikTok has a unique impact on the dissemination of disinformation.

The TikTok problem

TikTok has become a habitual app for many young, mobile-first audiences, and the Ukraine-Russia conflict is the first time many of those users are being exposed to first-hand accounts and surveillance footage of warfare as it unfolds. The constant flow of coverage also makes it particularly difficult for those audiences to stay up to date with the latest information, or to take the time to assess whether the footage they’re seeing is true or accurate.

Christiaan Triebert works on the visual investigations team at The New York Times, which is responsible for verifying videos and images of world events that are uploaded to the internet, particularly to social media. He was one of the first members of the team, which was formed in 2017; before that, he worked at Bellingcat, the investigative outlet known for its open-source video verification.

Having spent nearly a decade learning how to spot fake videos and images online, Triebert said that misinformation is “rife in every conflict or situation,” but that the unique differentiator this time around has been the rapid dissemination of false videos and images on TikTok.

“Misinformation on TikTok is a whole different beast than on Twitter,” said Triebert. “It’s almost striking sometimes how [quickly] videos make the rounds on TikTok.” 

Videos on TikTok can rack up millions of views in a matter of hours or days, but unlike on Twitter, it’s harder to dispute false claims in the comments of a TikTok because you can’t include images or videos to prove your point, Triebert said. What’s more, the initial posts go viral on faulty information and attract a lot of attention, while the subsequent posts debunking the original video are often, by comparison, far less eye-grabbing than a video of a missile hitting an apartment complex, he added.

Solution #1: Avoiding viral videos altogether

Some publishers are choosing not to repost viral videos at all, whether they were first posted by other TikTok users or by wire services.

Vice World News has decided not to publish any user-generated content from the platform, such as videos posted by people in Ukraine or Russia, as a precaution because the verification process takes such a long time. 

“We made a decision, given the pace at which this was moving, that we would tell those human stories through our journalists, rather than sourcing information from social media. And I think, frankly, at this moment in time and given the nature of this conflict, it’s a pretty dangerous place to be playing around,” said Katie Drummond, svp of global news at Vice News. 

The Washington Post’s Dave Jorgenson, who produces and leads content for the publisher’s TikTok, said his team posted one video from the Reuters wire service to the page, and it ended up performing worse (it received 63,000 views) than the original scripted sketches and on-the-ground reporting that WaPo’s content strategy is rooted in, which tend to garner upwards of 500,000 to 1 million views per video.

“A lot of people are posting things from [wire services] and I’m sure that that particular clip has been posted already. TikTok’s algorithm doesn’t really favor that either so we want to make sure that we’re not doubling up too much on things,” Jorgenson said.

Solution #2: Combating misinformation face-to-face

News publishers face the added challenge of not just disputing this misinformation, but doing so in a way that gets as much traction as the original posts themselves.

Some publishers are producing TikTok videos that invite their audiences’ involvement in identifying and addressing potential misinformation.

WaPo’s Jorgenson put out a call to action on TikTok asking people to leave questions in the comment section, which he and his team answered over the next several days with responses sourced from Washington Post reporting.

The Vice World News account hosted a TikTok Live conversation between senior news reporter Sophia Smith Galer and correspondent Matthew Cassel, who has been creating much of Vice’s on-the-ground coverage. During the live, Smith Galer asked Cassel many of the questions that viewers had been leaving in the comment sections of their posts.

Drummond said that her team is also trying to turn its journalists and correspondents into recurring hosts on the page in order to familiarize the audience with those reporters and their expertise on the subject.

“Having Matt Cassel or Ben Solomon as someone that our audience recognize and trust, and that person is taking them to go see something; I think that that’s something Vice has always done really well and so we’re really just thinking about how we translate that for this audience,” said Drummond. 

Jorgenson, who is typically the face of the WaPo TikTok page along with his two colleagues Carmella Boykin and Chris Vazquez, has begun working with WaPo’s correspondents on the ground in Ukraine, having them film themselves unpacking major events.

“There’s certainly value to people seeing our face on the platform, whether it’s mine or Carmella or Chris, or now these reporters that we have on the ground in Ukraine. I think that people now when they start to recognize [them] it’s reassuring and it’s coming from a verified source,” Jorgenson said.  

NowThis has been pursuing a similar strategy with its U.S.-based editorial staff, including making senior political correspondent Serena Marshall the face of its NowThisPolitics account. Additionally, on both the politics and main NowThis pages, staff voiceovers are used to explain some of the viral content and what the latest news means as the war unfolds.

On TikTok, users “want to connect, they want to feel like they’re being spoken to, and that’s a real opportunity for us in a way that we can diffuse more complicated topics, or share information in a way that feels relatable and conversational to the audience,” according to Samara Mackereth, executive editor of social video at NowThis.

Solution #3: Keeping high verification standards

Last week, Triebert posted a thread on Twitter that laid out an example of a viral video of a supposed attack by Russia on Ukraine and explained the process of verifying its legitimacy.

The video, which first went viral on Sunday, Feb. 27, depicts an explosion off the side of a road in what is supposedly Ukraine. But ultimately Oleksandr Skichko, the governor of Cherkasy Oblast, Ukraine (where geolocation showed the video had been filmed), denied that the video was real.

“We don’t put a lot of stake in [what government officials say] because we’re looking for visual proof. Whether or not officials are telling the truth, we don’t really care. We just want to corroborate with other visuals,” said Triebert.

Through visual corroboration, his team wasn’t able to say definitively that it was an attack by Russia, but thanks to local media reports from that day, it could determine that the explosion, at an ammunition center, actually happened on Thursday, Feb. 24, the start of the Russian invasion, and around the same time as other attacks, making it very likely that it was Russia’s doing.

Not all verification processes are as tedious as the example above. Triebert said a video or picture can sometimes be proven fake within 10 to 30 seconds with a simple reverse image search on Google.
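For a sense of what such a quick check can look like in practice, here is a minimal sketch (not the Times’ actual workflow) that compares a frame pulled from a suspect clip against a frame from archived footage using a perceptual hash; the file names and the use of Python’s Pillow and ImageHash libraries are assumptions for illustration.

# Minimal sketch: flag a "new" clip as recycled footage by comparing
# perceptual hashes of two frames. Assumes Pillow and ImageHash are
# installed; the file names are hypothetical placeholders.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_frame.jpg"))
archived = imagehash.phash(Image.open("archived_frame.jpg"))

# Subtracting two hashes gives the Hamming distance; a small distance
# suggests the frames come from the same underlying footage.
distance = suspect - archived
if distance <= 8:
    print(f"Likely recycled footage (hash distance {distance})")
else:
    print(f"No obvious match (hash distance {distance}); needs further checking")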

But for harder projects, it can take up to 10 hours to determine a video’s legitimacy. In that case, “You have to balance it with how important the news value [would] be of this video if you’re going to spend more than an hour on it,” he said.

