Social media platforms brace for the Chauvin trial verdict

Social media companies are on high alert for a verdict in the trial of former Minneapolis Police Officer Derek Chauvin in the death of George Floyd, aware that their platforms have been used to inflame tensions in the United States before and could be again once the verdict is announced.

Chauvin, 45, has pleaded not guilty to second-degree unintentional murder, third-degree murder and second-degree manslaughter charges in the death of Floyd last May. The jury has reached a verdict, which will be announced Tuesday afternoon.

On Monday, hours before the jury began its deliberations, Facebook’s vice president of content policy, Monika Bickert, said the company had designated Minneapolis as “high-risk” and was working on “preventing online content from being linked to offline harm” after the verdict.

“Our teams are working around the clock to look for potential threats both on and off of Facebook and Instagram so we can protect peaceful protests and limit content that could lead to civil unrest or violence,” Bickert said in a blog post. That includes “identifying and removing calls to bring arms to areas in Minneapolis” and removing “content that praises, celebrates or mocks George Floyd’s death,” she added.

Google, which owns video platform YouTube, also said it was on high alert for misinformation and hate speech.

“We have our highest protections in place and will quickly remove any content flagged that violates our hate speech and incitement to violence policies,” YouTube spokesman Farshad Shadloo said in a statement. “In addition, we are working round-the-clock to ensure that we are connecting people with authoritative information, especially for breaking news about the trial, and we will reduce the spread of misinformation through our systems.”

In response to an email from CNN Business, Twitter said it would continue to enforce its existing policies that cover abusive behavior, hateful conduct and violent threats.

“We’re monitoring the service and will take enforcement action judiciously and impartially on content, Trends, Tweets, and accounts that are in violation of the Twitter Rules,” a company spokesperson said in a statement.

Other prominent social networks, including the audio app Clubhouse and the short-form video platform TikTok, did not respond to requests for comment.

Verdicts in similar cases have sparked widespread protests in the past, most recently in response to a grand jury decision last year not to indict two of the three Louisville police officers involved in the death of Breonna Taylor.

Tech firms have been under scrutiny for years over their role in spreading misinformation and hate speech online, and in serving as a venue for rallying and planning offline violence. Attention to that role has only increased since the January 6 riot at the US Capitol this year.

Facebook, Twitter and YouTube responded to the riot by banning former President Donald Trump for his role in inciting it, along with several of his supporters and other accounts spreading baseless election fraud claims. Trump and his actions had long posed a dilemma for the social media companies, particularly when he was president. One of the biggest controversies over his posts came during the protests around Floyd’s death last summer, when Facebook was criticized for its inaction on a post by Trump that said “looting” would lead to “shooting.”

The Chauvin verdict will provide these platforms with the most serious test since then of their ability to prevent real-world harm in real time.

“After January 6th platforms appear more cognizant of the fact that what people see in their feeds, and share, can create an environment that leads to real-world violence,” said Renée DiResta, technical research manager at the Stanford Internet Observatory.

Social media companies are signaling a shift toward a more proactive approach, rather than their previous strategy of taking down content after the fact, and are paying closer attention to “potentially impactful moments” that could lead to violence, she noted.

“The responsibility to not be violent lies with people, of course, but addressing content that attempts to incite violence falls within platform responsibility,” DiResta said.
