Solving content moderation requires humans: AI is not good enough

by Adam Singolda, founder and CEO, Taboola

While the term AI can be found in virtually every investor deck and on every company's web page, it's actually quite rare to witness real artificial intelligence at work, because it's genuinely hard to build. There is a world of difference between machine learning (ML), deep learning (DL) and … BS.

That said, even true AI can go wrong. Recently, there was a trend on TikTok where people used the phrase "I had pasta tonight" not to talk about what they'd had for dinner, but as a code word signaling a suicidal cry for help. It wasn't TikTok's fault that the algorithm didn't catch the trend quickly enough to stop promoting these posts as if they were really about food, risking a particularly tone-deaf look for the platform. Artificial intelligence requires ample historical data in order to work; in computer science, this is known as "garbage in, garbage out." It's why AI can beat humans at chess or Mahjong but could never have invented the game.

Harvard professor Steven Pinker touched on this last year when he referred to the "art of asking questions," something that's still reserved for humans. While AI will get better and better at computing things, it will likely never fall in love or ask a question out of curiosity.

When it comes to media and marketing, AI is hugely important, and as of now, it plays a big part in content moderation online. It decides what's OK for us to see and what's not, what's harmful, what's hateful, what's fake, what gets boosted, what goes viral and what gets buried. But as we've seen from the big tech platforms over the past few years, and from the examples above, it has fundamental issues and poses a fundamental question: Is AI the right tool to moderate content and ads, or do we need humans as well?

The stakes in play and the AI–human mix

AI is an incredible and revolutionary tool, probably as significant as the invention of electricity or the internet, and it will be a huge part of our lives forever. But there are two important things to know about AI:

  • Some mistakes are too big to bear. If Alexa made a mistake and suggested that a consumer buy coffee beans they don't really want based on their behavior, it's annoying, but not a big deal. If YouTube tagged a video as a pet video thinking there were dogs in it when there weren't, it's not a big deal. On the other hand, putting AI to use in more serious matters, such as how to respond to a health emergency or questions related to democracy, depression, racism and human rights, raises a bigger question: Is AI good enough? Are these matters that we'd want an ethical human mind to consider as well?

  • Humans have limitations, too. Aside from mistakes on a global and societal scale, when it comes to serious matters of media and marketing, such as moderating content, publishers and platforms must recognize the limits of human reviewers as well. People get fatigued, whereas a computer has the same stamina whether it's reviewing 100 or 1,000 articles. People have biases; they have good days and bad days. So if the goal is a more human approach to moderating content, those content review teams must be incredibly diverse and well supported.

Still, it was humans who finally realized that "I had pasta tonight" was not about eating pasta but a code word for suicide. When COVID-19 happened, humans saw it spreading, not machines. And when an image-recognition AI horrifically misidentified Black people as gorillas, it was humans who caught it, not AI.

Together, humans and machines make better decisions

The future will be dominated by machines that help people live better lives across many daily interactions. However, serious matters, whether existential or editorial, are human problems that require humans to solve them, with AI in a supporting role.

Every tech platform with meaningful distribution must take responsibility for the content it hosts: it should use AI to address the limits of human review, while relying on humans for the judgment calls AI may never be able to make.
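To make that division of labor concrete, here is a minimal sketch, in Python, of one common pattern: confidence-threshold routing. Everything in it is an illustrative assumption, not any platform's actual pipeline: the model acts alone only when its harm score sits at a confident extreme, and every uncertain post is escalated to human reviewers.

    from dataclasses import dataclass
    from typing import List

    # Illustrative thresholds: assumptions for this sketch, not any
    # platform's real values.
    AUTO_REMOVE = 0.95   # the model may remove on its own above this harm score
    AUTO_APPROVE = 0.05  # the model may approve on its own below this harm score

    @dataclass
    class Decision:
        action: str      # "approve", "remove" or "escalate"
        decided_by: str  # "model" or "human"

    def moderate(text: str, harm_score: float, human_queue: List[str]) -> Decision:
        """Route one post: the model acts alone only at the confident
        extremes; every uncertain case goes to a human review queue."""
        if harm_score >= AUTO_REMOVE:
            return Decision("remove", "model")
        if harm_score <= AUTO_APPROVE:
            return Decision("approve", "model")
        # The gray zone: coded language like "I had pasta tonight" lands
        # here, because the model has no historical data to learn from
        # until human reviewers label enough examples.
        human_queue.append(text)
        return Decision("escalate", "human")

    # A post the model is unsure about gets escalated, not auto-ranked.
    queue: List[str] = []
    print(moderate("I had pasta tonight", harm_score=0.5, human_queue=queue))

The design choice is the point: the thresholds decide how much judgment is delegated to the machine, and tightening them shifts more of the gray zone to people.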

For marketers, publishers, and all stakeholders working to reach audiences with content, the only acceptable path forward is a future of humans working with AI.
