Cheat Sheet: Senators want more transparency into ‘addictive’ Facebook, Twitter and YouTube algorithms

A Senate Judiciary subcommittee hearing held yesterday had the opportunity to be a reprieve from the usual congressional tech-hearing theatrics.

In contrast to previous congressional hearings that have been little more than opportunities for legislators to verbally pummel big tech execs, this latest hearing on the impact of social media algorithms was designed to be more substantive, providing insight into how the platforms’ systems work to amplify harmful content. At least that was the aim.

Instead, senators on both sides of the aisle grilled public policy executives from Facebook, Twitter and YouTube over the negative effects of their firms’ data-fueled, advertising-supported business models, and questioned whether the sheer scale and dominance of their platforms create perverse incentives, inspiring technical decisions that promote engagement with content that ultimately harms people and society. Meanwhile, the platforms’ executives shied away from sharing any fresh insights into how their algorithms operate.

“We truly don’t see these as partisan questions and don’t come to this hearing with a specific regulatory or legislative agenda, but this is an area that requires urgent attention,” said Sen. Chris Coons, the Delaware Democrat who chairs the Privacy, Technology and the Law subcommittee. Coons suggested that social media algorithms promote false information and force people into hyper-tailored idea echo chambers.

Still, Coons — who last year introduced the Algorithmic Fairness Act, which would give the Federal Trade Commission authority to evaluate the fairness of algorithms used to determine online ad targeting and search results — indicated the hearing could help inform possible future legislation.

Here are the key takeaways from the hearing:

  • Senators wanted to dig into how social media algorithms value, amplify or suppress engagement with certain types of inflammatory or false content. However, in the end, public policy executives from the three platforms pointed to existing public information about how their algorithms work rather than providing any new detail.
  • Senators signaled interest in increased transparency regarding how social media algorithms are built and work.
  • Critics of the platforms who gave testimony — Tristan Harris, co-founder and president of the Center for Humane Technology, and Joan Donovan, research director at the Shorenstein Center on Media, Politics and Public Policy — argued the algorithms that recommend videos on YouTube or rank posts in customized Facebook and Twitter feeds are built to inspire outrage and promote misinformation at scale.
  • Platform representatives pushed back on criticisms and stressed that their companies have made changes in recent years to how their algorithms operate in order to downplay harmful, untruthful or extremist content, and created features that give people more control over how content is presented.

Platforms push back on critics of “addictive” business models

Sen. Ben Sasse of Nebraska, the ranking Republican on the subcommittee, defined what he saw as the crux of the debate: There is a fundamental disconnect between the “healthy engagement” the platform representatives say is the goal of their companies’ business models and what critics argue those models actually pursue, namely maximizing user engagement and “quantity” through content addiction and outrage that spread misinformation and extremism, threatening society and democracy.

Sasse suggested Facebook’s business model is “directly correlated to the amount of time that people spend on the site.” He directly asked Monika Bickert, vp for content policy at Facebook, “The business model is addiction, isn’t it?” 

In response, Bickert pointed to Facebook’s 2018 decision to prioritize posts related to family and friends over things such as political issues. “It led to people spending tens of millions of fewer hours on Facebook every day,” she said, “but that was something we did because we felt that longer term it was more important for people to see that sort of content because they would find it meaningful and they would want to continue to use the site, so it’s a long-term picture.”

Alexandra Veitch, YouTube’s director of government affairs and public policy for the Americas and emerging markets, said: “Misinformation is not in our interest. Our business relies on the trust of our users, but also our advertisers who on our platform advertise on single pieces of content. We want to build these relationships for the long term, which is why we bake user choice, user control right into the product with things like timers and the ability to turn autoplay off, [and] take-a-break reminders, of which we’ve sent over a billion.”

Lawmakers want more algorithmic transparency, but YouTube is reluctant to commit 

Algorithmic systems that employ complex machine learning processes are often referred to as impenetrable black boxes that reveal little about how they work to those affected by them. But Sen. Coons suggested there are ways YouTube and the other platforms could enable more algorithmic transparency.

Lamenting the fact that YouTube does not publicly show the number of times a video has been recommended before it is removed from the platform, he said, “We have no way of knowing how many times it was recommended by your algorithm before it was ultimately removed. Could YouTube commit today to providing more transparency about your recommendation algorithm and its impacts?”

Veitch said YouTube could not commit to releasing data on the number of times videos featuring content that violates YouTube’s policies are recommended by the platform’s algorithms, but said, “We want to be more transparent so let us work with you on that.” 

Notably, Veitch also pointed to YouTube’s violative view rate metric, which shows the percentage of views on the platform that land on content violating its community guidelines. That metric is among those endorsed by the Global Alliance for Responsible Media, a group established by the World Federation of Advertisers to work with digital platforms to measure the effects of harmful digital media content on advertising.
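For readers unfamiliar with the metric, the arithmetic is simple: the share of sampled views that landed on violative content, expressed as a percentage. The sketch below is illustrative only; the field names, sampling approach and data shape are assumptions for the example, not YouTube’s actual methodology.

```python
# Illustrative sketch of a "violative view rate"-style calculation.
# The view-log structure and sampling approach are hypothetical and
# do not reflect how YouTube actually computes its metric.
import random

def estimate_violative_view_rate(view_log, sample_size=1000, seed=42):
    """Return the estimated percentage of sampled views on violative content."""
    rng = random.Random(seed)
    sample = rng.sample(view_log, min(sample_size, len(view_log)))
    violative = sum(1 for view in sample if view["violates_policy"])
    return 100.0 * violative / len(sample)

# Hypothetical usage: each record marks whether the viewed video broke policy.
views = [{"video_id": i, "violates_policy": i % 500 == 0} for i in range(100_000)]
print(f"Estimated violative view rate: {estimate_violative_view_rate(views):.2f}%")
```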

Expect continued interest among lawmakers and social media firms themselves in algorithm transparency efforts. The CEOs of Facebook, Google and Twitter signaled support for more transparency in content moderation during a March 25 joint House Energy and Commerce Committee hearing on disinformation.

At least one Section 230 reform bill already calls for transparency reports from the platforms. The Platform Accountability and Consumer Transparency Act, a bipartisan bill that has been reintroduced in the Senate this year, calls on platforms to provide detailed information about the content they do and do not allow as well as how they make decisions about enforcing those policies.

It would require platforms to publish quarterly transparency reports outlining actions taken to enforce their content moderation policies and even require them to set up call centers to address complaints and appeals.
