To identify unsafe content, YouTube tries asking creators to rate their own videos

YouTube is asking creators themselves to help identify videos that aren’t brand-safe.

YouTube gave a group of creators a questionnaire to fill out when uploading a video, declaring what inappropriate content the video may include. YouTube tested the self-certification questionnaire with 15 creators, but said it plans to expand the test to a few hundred more creators soon.

YouTube will use the questionnaire to see if it helps identify which videos shouldn’t carry ads and mitigate cases of videos being mistakenly demonetized, by reinforcing its human reviewers’ and computer systems’ ability to classify inappropriate content.

The eight-question form tries to identify videos that contain inappropriate content and also their context, with options that are more nuanced than a simple yes or no. For example, creators can say if the profanity is used in a “non-hateful, comedic, or artistic manner” or is used in the context of a documentary or news clip. If the profanity is used “repeatedly in a vulgar or hateful context,” they can say that, too. To counter any false answers from creators, YouTube still scans each video itself.

During the initial test, YouTube found that creator-provided classifications typically corresponded to YouTube’s own classifications. “The majority of those self-certifications were very consistent with the determinations we would have made internally with our human reviewers,” said Tom Leung, director of product management at YouTube, in a video published earlier this month detailing the test.

As promising as that sounds, it may be cold comfort to some advertisers, especially those 300-plus brands whose ads were recently found by CNN to be running against videos promoting things like racism and pedophilia. If YouTube’s human reviewers and automated system can’t figure out that a video advocating Nazi ideals violates its rules against hate speech, why should advertisers trust the person posting that video to be honest about its content?

Many advertisers already don’t trust YouTube’s existing review systems. Chase has instituted its own system to avoid controversial videos, as has Procter & Gamble, which boycotted YouTube for more than a year until last week.

Advertisers may insulate themselves within their whitelists of approved YouTube channels, but they may still rely on YouTube to decide if those channels’ videos should carry ads. At the moment, YouTube has not demonetized so many videos that advertisers have been affected. “Fortunately, they are so big, and the quantity of available inventory is so high that there’s not necessarily a supply constraint right now,” said Mike Dossett, vp of digital strategy at RPA.

However, the most recent brand-safety revelations show that YouTube may not have gone far enough in demonetizing videos. YouTube could raise the bar, but that could risk shrinking its supply, causing ad rates to spike and scaring off ad buyers. Through the self-certification questionnaire, YouTube can attempt to strike a balance for videos falling in the gray area of brand-safety concerns.

“They’ve demonetized a ton of content that shouldn’t be getting ads. But what’s left is still highly nuanced and subjective in the eyes of any particular brand. What’s left is largely OK to monetize broadly, but maybe only 10 percent of YouTube is OK for anyone [to advertise against],” said Mike Henry, CEO of video analytics firm OpenSlate.

YouTube’s hope appears to be that its questionnaire helps fill the gaps in its existing review process. In the video discussing the test, Leung said YouTube could use the questionnaires to establish a track record of trust among creators that its review system could consider when making monetization calls. And the questionnaires could be converted into data that is plugged into YouTube’s automated review system’s machine-learning model to flesh out its ability to parse controversial content in context.

“It reveals some of the potential limitations, or at least the need to approach the issue from all angles. It can’t just be an algorithm. It can’t just be human moderators,” said Dossett. If YouTube were to solely rely on the self-certification questionnaire to curb controversial content from carrying ads, it would be a concern, he said. “But because it’s paired with these two other safety mechanisms, I’m less concerned right now.”
