Disinformation dystopia: Is Big Tech prepared for the 2020 election?
A version of this article first appeared in the “Reliable Sources” newsletter. You can sign up for it here.
In 256 days, Americans will head to the polls to vote on whether to re-elect Donald Trump as President or to go in another direction. A lot will happen between now and then. But one thing is near certain: disinformation campaigns, both domestic and foreign, will be waged on the American electorate.
On Thursday, NYT reported (and CNN confirmed) that House lawmakers were warned by an intel official that Russia was taking steps to interfere in the 2020 election and get Trump re-elected. If the Russians follow a playbook similar to the one they used in 2016, that effort will include exploiting divisions and spreading misinformation on tech platforms like Facebook and Twitter.
Is Big Tech ready for this challenge? Is America ready? A Thursday opinion piece in WaPo eloquently said, “As Americans approach November worrying about foreign interference, they should remember that it cannot succeed without willing Americans. Americans can and must safeguard their own elections, and those who refuse … must be called out for their actions.” So what is being done? What’s being ignored? Let’s take a look at a few stories published Thursday that touch on this topic…
“We can’t remove all of it because it will disproportionately affect conservatives”
First let’s take a look back into the past. WaPo’s Craig Timberg published a deeply reported story Thursday detailing what took place inside Facebook in the aftermath of the 2016 election. According to Timberg, “Facebook created ‘Project P’ — for propaganda — in the hectic weeks after the 2016 presidential election and quickly found dozens of pages that had peddled false news reports ahead of Donald Trump’s surprise victory.”
As Timberg noted, “In a world of perfect neutrality, which Facebook espouses as its goal, the political tilt of the pages shouldn’t have mattered.” But that, apparently, wasn’t the case. Timberg reported that Joel Kaplan, a Republican who is now the head of Facebook’s Washington office, brought forward a concern: “We can’t remove all of it because it will disproportionately affect conservatives.”
The statement is both troubling and revealing. As Timberg noted, the “Project P” debate “exemplified the political dynamics that have reigned within Facebook since Trump emerged as the Republican Party’s presumptive nominee to the White House in 2016.” As Timberg succinctly summarized, “Such factors have helped shape a platform that gives politicians license to lie and that remains awash in misinformation, vulnerable to a repeat of many of the problems that marred the 2016 presidential election.”
Streaming services offer loophole for political advertisers
“The ‘deep state’ is trying to inject our health system with socialist price controls,” one recent Hulu ad said, according to a story by WaPo’s Tony Romm. The ad urged viewers to text “SOCIALISM SUCKS” to FreedomWorks, the group behind the ad.
But as Romm noted, neither FreedomWorks nor Hulu is “required to reveal much more to the public about the 30-second spot or whom it targeted, leaving watchdogs and regulators fearful that federal election laws aren’t fit for the digital age — and that voters remain vulnerable to manipulation.” Other streaming services, like Roku, offer similar loopholes, which as Romm noted stand “in stark contrast” to regulations governing traditional television channels, like ABC, NBC, CBS, and Fox.
Romm’s conclusion? “Four years after Russian agents exploited popular online platforms to push propaganda, sow unrest and promote the Trump candidacy, the U.S. government has made virtually no progress on bringing more transparency to paid political speech. The risks remain high that voters could be duped and deceived by foreign governments, U.S. candidates and advocacy groups — particularly online, where major regulatory gaps exist.”
AI-generated images surface in harassment campaigns
If you haven’t yet, read this story from CNN’s Donie O’Sullivan. The story revealed that an account that threatened Sleeping Giants with potentially embarrassing information used a profile image created by artificial intelligence. (Sleeping Giants is a group that persuades companies not to run ads with media orgs that spread misinformation or hate speech.)
“Most of the major social media platforms have rules against using other people’s pictures in this way and have an option for people to make impersonation complaints if their identity is being used,” O’Sullivan reported. “But by using AI-generated faces of people that do not exist, trolls can potentially avoid being reported for impersonation.”
>> O’Sullivan’s big takeaway: “AI-generated images can potentially help lend credibility to online harassment campaigns as well as coordinated information campaigns…”
Twitter tests warning label
When you log on to Twitter in the future and come across misinformation, a bright orange caution label might be attached to it. That’s according to NBC’s Ben Collins, who obtained a leaked demo showing new methods Twitter is testing to combat misinformation online.
In the version that Collins obtained, the orange caution label would inform users that a claim was potentially misleading and would feature fact-checks from verified users. You can check out screen grabs showing how this works in Collins’ story…
FOR THE RECORD
— Bloomberg News reported Thursday evening that the Trump campaign has purchased the “coveted advertising space” on YouTube’s homepage for Election Day and the immediate run-up to it… (Bloomberg News)
— Here’s how “news literacy instruction” is “flourishing in the wake of the 2016 election as worries about fake news grow…” (NYT)
— Bernie Sanders suggested at the Democratic debate that some of his angriest supporters might be Russian bots. But as Sam Stein reported, some are skeptical of this claim… (Daily Beast)
— ICYMI: Alex Kantrowitz on “how Saudi Arabia influenced Twitter…” (BuzzFeed News)