
Facebook approves ads calling for children’s deaths in Brazil, test finds

YouTube had no problem passing the same test.

Brazilian President Luiz Inácio Lula da Silva kisses a child onstage at the end of a speech to supporters.

“Unearth all the rats that have seized power and shoot them,” read an ad approved by Facebook just days after a mob violently stormed government buildings in Brazil’s capital.

That violence was fueled by false election interference claims, mirroring attacks in the United States on January 6, 2021. Previously, Facebook-owner Meta said it was dedicated to blocking content designed to incite more post-election violence in Brazil. Yet today, the human rights organization Global Witness published results of a test that shows Meta is seemingly still accepting ads that do exactly that.

Global Witness submitted 16 ads to Facebook, with some calling on people to storm government buildings, others describing the election as stolen, and some even calling for the deaths of children whose parents voted for Brazil’s new president, Luiz Inácio Lula da Silva. Facebook approved all but two ads, which Global Witness digital threats campaigner Rosie Sharpe said proved that Facebook is not doing enough to enforce its ad policies restricting such violent content.

“In the aftermath of the violence in Brasilia, Facebook said that they were ‘actively monitoring’ the situation and removing content in violation of their policies,” Sharpe said in a press release. “This test shows how poorly they’re able to enforce what they say. There is absolutely no way the sort of violent content we tested should ever be approved for publication by a major social media firm like Facebook.”

To ensure that none of their test ads reached vulnerable audiences, Global Witness deleted the ads before alarming messages like “Death to the children of Lula voters” could be published.

Global Witness identified this as a problem that is particularly concerning on Facebook—not necessarily a weakness of all ad-based social platforms—by also testing the same ad set on YouTube. Unlike Facebook, YouTube did not approve any of the ads. YouTube also took the additional step of suspending the accounts that attempted to publish the ads.

“YouTube’s much stronger response demonstrates that the test we set is possible to pass,” Sharpe said.

Ars could not immediately reach YouTube or Meta for comment. Global Witness digital threats campaign leader Naomi Hirst told Ars, “It is difficult to have any confidence that Facebook will take the necessary action to combat the spread of disinformation and hate on their platform. Despite promises that they take this issue seriously, our investigations have exposed repeated failings.”

Meta downplays test results

A Meta spokesperson told Global Witness that the test’s sample size of 16 ads was too small to represent Facebook’s ability to enforce its ad policies at scale.

“Like we’ve said in the past, ahead of last year’s election in Brazil, we removed hundreds of thousands of pieces of content that violated our policies on violence and incitement and rejected tens of thousands of ad submissions before they ran,” a Meta spokesperson said in Global Witness’ press release. “We use technology and teams to help keep our platforms safe from abuse, and we’re constantly refining our processes to enforce our policies at scale.”

Hirst told Ars that Global Witness has now shown that "before, during, and now after the recent elections in Brazil," it has remained easy to get "ads containing incitement to violence, incorrect information, and hateful content" approved by Facebook.

"Democracy is at risk if Facebook and other social media firms fail to stem the tide of division and violence that flourish online,” Hirst told Ars.

In its press release, Global Witness suggests that Facebook is not taking the Brazil attacks as seriously as it took the US attacks in 2021, when the company implemented “break-glass measures” to prevent civil unrest from spreading on Facebook.

Ahead of the 2020 US elections, Facebook head of global affairs and communications Nick Clegg told USA Today that Facebook had “developed break-glass tools which do allow us to—if for a temporary period of time—effectively throw a blanket over a lot of content that would freely circulate on our platforms, in order to play our role as responsible as we can to prevent that content, wittingly or otherwise, from aiding and abetting those who want to continue with the violence and civil strife that we're seeing on the ground."

After Global Witness’ recent test seemed to show Facebook was putting less effort into preventing violence-inciting content from inflaming civil unrest in Brazil, Global Witness has recommended that Facebook and all other social media companies commit to ramping up content-moderation efforts with the same gusto shown after the US attacks.

“Global Witness is calling on Facebook and other social media firms to immediately implement ‘break the glass’ measures and declare publicly what else they are doing to combat disinformation and incitement to violence in Brazil and how those efforts are being resourced,” Sharpe said.
