
She was called the n-word and given instructions to slit her wrists. What did Facebook do?

You can call someone the n-word and give her graphic instructions on how to kill herself, and you won’t get kicked off Facebook.

Or you can tell a mother you hope her son gets raped, and you won’t get kicked off the world’s most popular social media platform. Or you can tell a mother whose 5-year-old daughter has died that “if your kids keep dying it’s god trying to tell u u don’t deserve them.”

You can write to that mother: “F**k you c**t. Are you dead yet c**t? Fingers crossed. Spit on c**ts like you. You c**t dog. Die c**t. You piece of sh*t c**t dog. You are ignorant dumb dog. Die c**t.”

Still, you can stay on Facebook.

These are the findings of a six-month CNN investigation into bullying and harassment faced by parents, doctors and others who advocate for vaccination on Facebook.

A Facebook spokesperson said the platform is conducting its own investigation as a result of CNN’s findings.

Facebook: We don’t tolerate bullying and harassment

Facebook officials — already under fire for the platform’s role in the 2016 election, among other issues — say they don’t tolerate bullying and harassment.

“We have a responsibility to keep people safe on our services — whether from terrorism, bullying, or other threats,” Facebook CEO Mark Zuckerberg wrote in November of last year.

“Bullying and harassment happen in many places and come in many different forms,” according to Facebook’s community standards. “We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook.”

However, CNN’s investigation calls these statements into question. Our investigation found Facebook sometimes allows users to stay on its platform even when they repeatedly violate Facebook’s standards on bullying and harassment, and verbally abuse others in the most hateful and violent of ways.

For example, the woman who received the message calling her the n-word and telling her to slit her wrists reported the message to Facebook. Facebook determined that the message violated its community standards, and that the sender was a repeat offender — but still, the sender was allowed to stay on the platform until CNN started asking questions.

“It’s horrific. It’s mind boggling,” said Andrew Marantz, author of a new book, “Antisocial.”

“Facebook likes to call itself a community, and if they want to live up to that promise, at the very bare minimum they should try to protect people from stuff like this.”

Even Zuckerberg says his company needs to do more.

“On bullying and harassment, we have a lot of work still to do,” the Facebook CEO told reporters in May.

‘I was scared’

Aisha Odom, who received the message with the n-word, agrees.

Odom, a mother of three young children in Austin, Texas, has urged others on Facebook to get vaccinated to help protect people like her brother, who has Crohn’s disease and can’t receive certain immunizations.

She thought she’d become accustomed to anti-vaxxer attacks, but in April, Odom logged into Facebook Messenger and read a message that made her feel physically ill.

“Can we immunize are [sic] babies from becoming ignorant n*****s like you?” a user wrote to her, spelling out the n-word. “You uneducated tw*t, here’s something you need to learn…How to slice those wrists correctly. Kill yourself, kill your kids, kill your parents, kill them all.”

The user then gave illustrated instructions on how Odom should slit her wrists.

“I didn’t know if it was a threat. I was scared,” Odom said. “I called my husband and sent him a screenshot, and he was afraid, too.”

Odom immediately reported the message to Facebook. The company determined that not only did the message violate its standards, but the user had violated them before, according to the Facebook spokesperson.

The user’s punishment: For 30 days, she wasn’t allowed to send messages in Messenger, according to the Facebook spokesperson. Otherwise, she was allowed to continue participating and posting on Facebook as usual.

Odom said she was “flabbergasted” that Facebook didn’t do more.

“Calling someone a horrible racial slur and telling them to kill themselves is totally OK with Facebook,” Odom told CNN.

In August, four months after Odom reported the message, CNN sent it to Facebook and asked why the user wasn’t removed from the platform. In September, Facebook removed the user’s account.

“We investigated the account further and determined the account should be actioned,” the spokesperson wrote to CNN in response.

The spokesperson did not explain why Facebook didn’t remove the user on its own. The spokesperson, who spoke on the condition of anonymity, declined to make a Facebook executive available for an interview for this story.

CNN’s investigation

Our investigation began in March with a story about anti-vaxxers attacking vaccine advocates on Facebook. After that story was published, the Facebook spokesperson asked CNN to send the content we’d collected while reporting it.

With permission from the recipients, CNN sent the spokesperson 65 pieces of content that vaccine advocates said they found offensive, including posts, comments and direct messages.

The spokesperson sent the content to Facebook’s community standards team, which found that 39 of those pieces of content, posted by 31 separate users, violated its community standards, most of the violations involving harassment.

Facebook removed only one of the users — the one who wrote the message to Odom. The other users remain on the platform, even though they had violated Facebook standards.

With permission from the recipients, CNN sent the 39 pieces of content to ethicists, lawyers and social media experts.

“Whoever sent these messages should be off social media forever. All of them,” said Arthur Caplan, head of the Division of Medical Ethics at the New York University School of Medicine. “Can’t Facebook do more to police this? They should.”

“This is absolutely harrowing,” said Mary Anne Franks, a professor at the University of Miami School of Law who specializes in social media. “Facebook can do much better.”

Marantz, the book author, recalled a famous saying from Zuckerberg.

“Facebook is one of the most profitable companies in the world. They can move fast and break things. Why can’t they move fast and repair things?” he said.

‘If these don’t cross the line of civility, what does?’

When CNN asked why Facebook removed only one of the 31 users from the platform, the spokesperson gave two answers.

First, the spokesperson said Facebook couldn’t identify the users, since for the most part CNN sent screenshots of the comments, and not links to them.

The spokesperson did not explain how Facebook managed to identify the person who sent the message to Odom, but not any of the other users.

The spokesperson also didn’t explain why Facebook was unable to identify the users when CNN could identify most of them using information from the screenshots, such as names, profile photos, places of employment and schools attended.

Second, the spokesperson noted that Facebook generally does not remove someone for a single infraction of its bullying and harassment standards. Instead, it takes multiple violations, or “strikes.”

“We don’t want people to game the system, so we do not share the specific number of strikes that leads to a temporary block or permanent suspension,” according to a Facebook report, “Enforcing our Community Standards.” “The effects of a strike vary depending on the severity of the violation and a person’s history on Facebook.”

In addition to the user who sent the message to Odom, CNN’s investigation found at least two users who said they had violated Facebook’s standards before and yet were allowed to stay on Facebook.

One user told CNN he’d been in “Facebook jail” seven times and the other said she’d been similarly penalized on Facebook “pretty frequently.”

When CNN pointed out that these two users were still on Facebook despite what appeared to be multiple strikes, the spokesperson responded that Facebook would launch “a more thorough investigation” into the users who wrote the violating content.

Ethicists questioned why Facebook hadn’t removed the users who made some of the most flagrantly abusive comments — particularly ones that encouraged violence.

“Some of these deserve immediate banning,” said Timothy Caulfield, a professor of health law and policy at the University of Alberta in Canada. “If these don’t cross some line of civility, what does?”

Caulfield and Caplan, the NYU ethicist, said it was especially important for Facebook to be tough on abuse by anti-vaxxers, since they could intimidate users who post facts about vaccines.

They pointed to this year’s measles outbreak — the largest in 27 years with more than 1,200 cases, according to the US Centers for Disease Control and Prevention. Experts say measles flourished in large part because of anti-vaccine misinformation on social media, particularly Facebook.

“The vaccine space is exquisitely sensitive in that highly vulnerable lives — newborns, cancer patients, those with immune diseases — are hanging in the balance,” Caplan wrote in an email to CNN. “Facebook should have no tolerance for or room for anti-vaxx zealots and kooks. They need to be monitored or if reported, checked and then blocked and banned pronto. Public health must override misinformation and threatening behavior in the domain of vaccines.”

This year, Facebook took steps to decrease the influence of anti-vaccine misinformation. In March, the platform announced it would start lowering the ranking of groups and pages that spread such misinformation in its News Feed and Search results. In September, Facebook announced that educational pop-up windows would appear when a user searches for vaccine-related content, visits vaccine-related Facebook groups and pages, or taps a vaccine-related hashtag on Instagram.

The difficulty in detecting bullying

In January 2018, Zuckerberg set a challenge for himself: Fix Facebook.

“The world feels anxious and divided, and Facebook has a lot of work to do — whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent,” Zuckerberg wrote in a Facebook post. “My personal challenge for 2018 is to focus on fixing these important issues. We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools.”

In a series of Community Standards Enforcement Reports, Facebook revealed the results of its efforts.

For example, in the first three months of 2019, the social media platform took down 5.4 million pieces of content that violated its standards against child nudity and sexual exploitation of children. In nearly every case — 99% of the time — Facebook detected this content on its own without being alerted to it by users.

But Facebook had much less success detecting bullying and harassment.

In the first quarter of 2019, the social media platform took action on 2.6 million pieces of content for violating bullying and harassment standards, but users had to report it 86% of the time; only 14% of the time did Facebook find and flag the content on its own.

The problem is that while it’s relatively easy for an algorithm to detect nudity, it’s much harder to detect when someone is being bullied online: telling abusive speech apart from speech that doesn’t violate community guidelines often hinges on subtle cues of context and intent.

“As you get into hate speech and bullying, linguistic nuances get even harder — like understanding when someone is condemning a racial slur as opposed to using it to attack others,” Zuckerberg wrote in a note in November of last year.
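To make that distinction concrete, here is a minimal, hypothetical sketch, not Facebook’s actual system, of why simple keyword matching falls short: the same naive filter flags a message that condemns a slur and a message that uses it as an attack, because it never sees intent or context.

# Hypothetical illustration only -- not Facebook's moderation code.
# A naive keyword filter cannot tell condemnation apart from attack.

ABUSIVE_KEYWORDS = {"slur_x", "kill yourself"}  # placeholder terms

def naive_flag(message: str) -> bool:
    """Flag a message if it contains any listed keyword, ignoring context."""
    text = message.lower()
    return any(keyword in text for keyword in ABUSIVE_KEYWORDS)

attack = "You are a slur_x. Kill yourself."       # abusive message
condemnation = "He called her a slur_x. Awful."   # reporting and condemning the slur

print(naive_flag(attack))        # True -- correctly flagged
print(naive_flag(condemnation))  # True -- false positive: the filter cannot read context

Distinguishing the two requires context the filter never sees, such as who is speaking to whom and why, which is the gap Facebook says user reports currently fill.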

Facebook says that because bullying is hard to detect automatically, it has to rely on users to report abuse.

“When it comes to bullying and harassment, context really matters,” the spokesperson added. “It’s hard to tell the difference between a bullying comment and a light-hearted jest without knowing the people involved or the nuance of the situation. So we rely heavily on reports — if you or someone you know is being bullied or harassed, we encourage you to report it.”

A recent Facebook report came to the same conclusion.

“In areas like bullying and harassment, where context is so vital to understanding if content violates our policy, we expect that for the foreseeable future, our automated systems will not have the ability to detect this content at a scale similar to other policy violations,” according to Facebook’s transparency report. “[In] many instances, we need a person to report this behavior to us before we can identify or remove it.”

Last year, Guy Rosen, Facebook’s vice president for product management, said the platform is trying to improve. “We are determined to improve our understanding of these types of abuses so we can get better at proactively detecting them,” Rosen wrote. “Overall, we know we have a lot more work to do when it comes to preventing abuse on Facebook. Machine learning and artificial intelligence will continue to help us detect and remove bad content.”

Relying on mourning mothers

While Facebook urges users to report abusive posts, Catherine Hughes says she simply couldn’t. She was too busy mourning the death of her baby.

Hughes’ 1-month-old son, Riley, died from whooping cough. She’d taken to Facebook to urge others to get vaccinated in order to protect babies like Riley, who are too young to get all of their shots.

Hughes estimates that since her son’s death in 2015, she and her husband have received thousands of abusive Facebook messages and comments. She estimates about half of them have been from users in the United States.

They’ve been called baby killers. She has been called a whore and worse. They’ve been told to kill themselves. They’ve received death threats.

The comments kept rolling in, even on the day of her son’s funeral.

“Those messages while we were at Riley’s funeral in particular kicked us in the gut. You’re going through [the] worst day of your life, and they thought their beliefs were more important than basic human ethics,” she said.

Hughes noted that even if she hadn’t been mourning the loss of her child, she could never have reported the huge volume of posts, comments and messages. Plus, she said, many of them were comments on her own profile, and while such comments can be hidden or deleted, Facebook does not allow users to report bullying or harassing comments made on their own profile or timeline.

“When it comes to a bullying culture, you can’t just leave it to the victims — the bereaved parents — to do the work of removing this abhorrent behavior,” Hughes said. “Facebook has done incredible things. They are so forward thinking. Why can’t they come up with the technology, with the algorithm to find a solution to this problem?”

Serese Marotta agrees.

Her 5-year-old son Joseph died from the flu in 2009. After urging others to get the flu vaccine, Marotta was attacked by anti-vaxxers. They called her obscene names and one user sent a death threat. Like Hughes, she did not report the comments and messages.

“I firmly believe that Facebook should be able to figure out how to monitor this and I think it’s their responsibility to do so,” said Marotta, who lives in Syracuse, New York, and is the chief operating officer of Families Fighting Flu. “If they can come up with algorithms to monitor child pornography, they should be able to come up with an algorithm to monitor this type of behavior.”

Marantz, the book author, agrees.

“If these people are supposed to be the best computer engineers in the whole world and they can’t write a program that flags a comment that tells someone to kill herself and calls her the n-word — that seems implausible to me,” he said.

Marantz said that since the platform uses data crunching and algorithms to target ads at users, it should use those same tools to protect users rather than relying on them to report when they’re being bullied.

“It really takes a lot of chutzpah and victim blaming to assert that the onus has to be on the victims,” he said.

Suggestions for Facebook

When CNN brought this criticism to Facebook’s attention, the spokesperson sent a comment by email.

“We want members of our community to feel safe and respected on Facebook and will remove material that appears to purposefully target private individuals with the intention of degrading or shaming them. We try to empower our users with controls, such as blocking other users and moderating comments, so they can limit their exposure to unwanted, offensive or hurtful content. We also encourage people to report bullying behavior on our platform, so we can review the content and take proper action,” the spokesperson wrote.

The spokesperson also pointed to Facebook’s statement about bullying and harassment on its community standards page, which mentions the platform’s Bullying Prevention Hub, a resource for teens, parents and educators seeking support for issues related to bullying and other conflicts.

A year ago, Facebook introduced “new tools and programs so people can better control unwanted, offensive or hurtful experiences on Facebook,” including the ability to report when a friend or family member is possibly being bullied or harassed.

“Once reported, our Community Operations team will review the post, keep your report anonymous, and determine whether it violates our Community Standards,” Antigone Davis, Facebook’s global head of safety, wrote in the post.

Facebook reviews more than two million pieces of content a day, according to the Facebook spokesperson.

The spokesperson reiterated the technical challenges in detecting bullying before it’s reported, because a bullying post and a harmless post might contain similar language.

Caulfield, the health law and policy expert, said there’s no question that there are technical challenges to policing bullying, especially since Facebook has such a large volume of content.

“There’s a resource and management issue for Facebook for sure,” he said. “But given where we are now culturally, in this era of untruths and misinformation, the calculus has changed, and we need to start being more aggressive in monitoring these platforms. I think clearer rules and clearer action is required.”

Three experts have suggestions for how Facebook might overcome the technical challenges to detecting bullying.

Marantz, the book author, said more human eyes looking at content could go a long way toward solving bullying problems.

“Tomorrow, by fiat, Mark Zuckerberg could make Facebook slightly less profitable and enormously less immoral: He could hire thousands more content moderators and pay them fairly,” Marantz wrote in a recent New York Times opinion piece.

Tom Wheeler, the former chairman of the Federal Communications Commission, has an idea for how software might help. He suggests the government should force Facebook and other social media sites to provide the results of their algorithms to third parties.

Such an “opening up,” he wrote last year in a New York Times opinion piece, would involve something called an open application programming interface, or API. APIs are a way for outsiders to access parts of a platform’s database.
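For illustration, here is a minimal sketch of what that kind of outside access might look like. The endpoint and field names below are made up, and nothing here reflects Facebook’s actual API; the point is simply that a researcher could pull public content programmatically and run their own analysis on it.

# Hypothetical illustration only: "api.example-platform.com" is a made-up
# endpoint, not a real Facebook or Twitter API.
import requests

ENDPOINT = "https://api.example-platform.com/v1/search"  # placeholder URL
params = {
    "query": "vaccine",          # topic the researcher is studying
    "content_type": "comment",   # restrict results to public comments
    "limit": 100,
}

response = requests.get(ENDPOINT, params=params, timeout=10)
response.raise_for_status()

for item in response.json().get("results", []):
    # A researcher could apply their own abuse-detection analysis here,
    # as Wheeler suggests, instead of relying on the platform's.
    print(item.get("id"), item.get("text", "")[:80])

An open interface along these lines is what lets outside researchers, like the Clemson team described below, study a platform’s content at scale.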

While Facebook allows access to some data through its APIs, it grants far less access than other platforms, such as Twitter and YouTube, according to Darren Linvill and Patrick Warren, researchers at Clemson University.

The Clemson team said Twitter’s more open APIs allowed them to identify suspected Russian trolls and alert Twitter.

“Patrick and I killed a couple dozen Russian trolls because Twitter is open, and we can’t do the same thing on Facebook,” said Linvill, an associate professor at the Clemson College of Behavioral, Social and Health Sciences.

The researchers noted that while Facebook’s relatively closed APIs give its users more privacy, they make it tougher for researchers to help Facebook accomplish goals such as reducing bullying.

“There are consequences to what Facebook does,” said Warren, an associate professor at the Clemson College of Business. “There are negative public interest consequences.”

Wheeler, now a visiting fellow at the Brookings Institution and senior fellow at the Harvard Kennedy School, said if Facebook were forced to open the results of its algorithms, outsiders could then do their own research on bullying on the platform.

“That way you could design your own algorithm to say, ‘I want to look up use of the n-word,’ or whatever it is you want to look up,” Wheeler told CNN.

Wheeler said while this wouldn’t get rid of bullying on Facebook, it would be a step in the right direction.

“We can’t solve this problem with a magic wand, but we can’t start to solve it until there’s transparency,” he said.

Danielle Citron, a legal scholar and a recipient this year of a MacArthur Foundation “genius grant” for her work fighting online harassment, also has a suggestion for Facebook.

Online spies have infiltrated anti-vaccine Facebook groups and found group members issuing calls to attack vaccine advocates, including mothers who are mourning their dead children.

The result is waves of harassment, with dozens of comments or more coming in all at once, according to victims.

“I call these cybermobs. It’s death by a thousand cuts. It’s like a thousand bee stings,” Citron said.

Citron said Facebook should give victims a way to report the mob itself instead of having to report each individual piece of content.

“Facebook needs to view the cybermob as a real phenomenon,” she said. “There should be a way where users can let them know this storm is happening — where they can say, ‘See my page, a cybermob has descended.’ “

Citron, a professor at Boston University School of Law who has been an unpaid adviser to Facebook, said Facebook has come “a long way” in the past decade on bullying and harassment issues.

“They don’t get everything right, but they’re trying,” she said.

But she said Facebook faces an inherent conflict in policing itself.

“They don’t want to de-platform people. They want their business,” she said.

Citron said Facebook needs to remember that it loses business if abused victims are scared off the platform.

“It’s not good for people to be terrified and chased offline,” she said.

Marotta, the vaccine advocate whose 5-year-old son died of the flu, said she hasn’t been scared off Facebook — but she’s talked to many parents who’ve received violent threats, and they’re terrified.

“Obviously we have freedom of speech in this country, but when you’re talking about causing harm to another individual, I don’t think that should be allowed,” she said. “Facebook is doing a disservice by allowing this type of bad behavior to be propagated. They’re facilitating that and they shouldn’t be.”
