
Social media algorithms to face scrutiny as lawmakers look to ‘curb the amplification of misinformation’


A version of this article first appeared in the “Reliable Sources” newsletter. You can sign up for free right here.

Usually, when social media executives are brought to testify before Congress, the hearings center on specific policies and types of content, such as misinformation and foreign interference, or on antitrust issues and privacy concerns. What doesn’t get nearly as much attention are the engines that drive these platforms: their algorithms. That’s what makes Tuesday’s Senate Judiciary hearing with Facebook, Twitter, and YouTube different. The hearing is entirely focused on social media algorithms.

“Nearly all social media platforms — including the three companies represented at Tuesday’s hearing — use algorithms to curate what their users see and when they see it,” Sen. Chris Coons, who is chairing the hearing, told me Monday. “These algorithms impact what billions of people read, watch, and think every single day, and it’s critical that Congress and the American public understand how they work and how we can curb the amplification of misinformation and political polarization.”

No CEOs

A second aspect that will differentiate Tuesday’s hearing from others: none of the big CEOs will be in attendance. Instead, the companies are sending their heads of policy to answer questions. While that means the hearing will carry less buzz, it could lead to a more substantive conversation, as the temptation for gotcha moments will be diminished. “Congress has shown that when CEOs are in the hot seat, members can’t resist dunking on them,” Casey Newton, who writes the Platformer newsletter covering tech, told me. “That’s great for fundraising but generally terrible for advancing the conversation. Bringing in policy people could make for more constructive debate.”

YouTube in the hot seat

Another unique aspect of Tuesday’s hearing: YouTube is finally testifying before a congressional committee. Usually, lawmakers call Google and its CEO Sundar Pichai to testify, allowing YouTube to avoid the scrutiny it might receive if it appeared itself. But on Tuesday, the company’s director of government affairs and public policy, Alexandra Veitch, will find herself in the hot seat.

“YouTube has approximately two billion users worldwide, and 70% of what those users watch comes from YouTube’s recommendation algorithm,” Coons told me. “While representatives from Google testify frequently before Congress, Sen. [Ben] Sasse and I agreed that it was critical to hear from YouTube directly given the platform’s enormous influence in shaping opinions and beliefs around the world.”

What we can expect to see

So what can we expect to actually see at the hearing? Renée DiResta of the Stanford Internet Observatory said it’s likely we’ll once again see a divide between what Democrats and Republicans ask. DiResta said she expects Democrats to ask “about how algorithmic nudges are polarizing and/or radicalizing users, amplifying health misinformation, amplifying conspiracy theory communities.” Republicans, she said, will likely ask “whether anti-conservative bias is built into moderation tools, or algorithms involved in the determination to down-rank content.”

The companies are different

Joan Donovan, research director of Harvard’s Shorenstein Center, will be one of the experts to testify. She said she will try to “highlight how the business model has driven design decisions at these companies in ways that put growth above all else, which has resulted in social media products that advantage media manipulators, disinformers, and other motivated misinformers.” But she cautioned that “it’s really hard to talk about all of these companies’ products in the aggregate,” pointing out that “the problems of amplification on Facebook are not the same as YouTube or Twitter.”

Twitter: Algorithms can improve safety

Lauren Culbertson, Twitter’s head of US public policy, will emphasize to the Senate committee that algorithms are not all bad. “We use technology every day in our efforts to automatically improve outcomes and experiences for people on Twitter,” she’ll say in her prepared remarks. “We do that, in part, through algorithms. For example, our machine learning tools help identify potentially abusive or harmful content, including content that violates Twitter’s Rules, to human moderators for review.”

The future of regulation?

Neil Potts, Facebook’s VP of public policy for trust and safety, suggested during a conversation with me on Monday that Tuesday’s hearing on algorithms is only the tip of the iceberg. “In the last decade, going through 2020 with the pandemic as a capstone, there has been a lot of focus on content,” Potts said. “I think people are going to be moving the discussion beyond that to algorithms. And whether that is algorithms on social media, algorithms in the financial sector, algorithms in the health care space, algorithms more broadly — I think that is going to be the next wave of interest and potential regulation…”

CNN Newsource
