How Facebook neglected the rest of the world, fueling hate speech and violence in India

A trove of internal documents shows Facebook didn’t invest in key safety protocols in the company’s largest market.

October 24, 2021 at 7:00 a.m. EDT

In February 2019, not long before India’s general election, a pair of Facebook employees set up a dummy account to better understand the experience of a new user in the company’s largest market. They made a profile of a 21-year-old woman, a resident of North India, and began to track what Facebook showed her.

At first, her feed filled with soft-core porn and other, more harmless fare. Then violence flared in Kashmir, the site of a long-running territorial dispute between India and Pakistan. Indian Prime Minister Narendra Modi, campaigning for reelection as a nationalist strongman, unleashed retaliatory airstrikes that India claimed hit a terrorist training camp.

Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech. “300 dogs died now say long live India, death to Pakistan,” one post said, over a background of laughing emoji faces. “These are pakistani dogs,” said the translated caption of one photo of dead bodies lined up on stretchers, hosted in the News Feed.

An internal Facebook memo, reviewed by The Washington Post, called the dummy account test an “integrity nightmare” that underscored the vast difference between the experience of Facebook in India and what U.S. users typically encounter. One Facebook worker noted the staggering number of dead bodies.

About the same time, in a dorm room in northern India, 8,000 miles away from the company’s Silicon Valley headquarters, a Kashmiri student named Junaid told The Post he watched as his real Facebook page flooded with hateful messages. One said Kashmiris were “traitors who deserved to be shot.” Some of his classmates used these posts as their profile pictures on Facebook-owned WhatsApp.

Junaid, who spoke on the condition that only his first name be used for fear of retribution, recalled huddling in his room one evening as groups of men marched outside chanting death to Kashmiris. His phone buzzed with news of students from Kashmir being beaten in the streets — along with more violent Facebook messages.

“Hate spreads like wildfire on Facebook,” Junaid said. “None of the hate speech accounts were blocked.”

For all of Facebook’s troubles in North America, its problems with hate speech and disinformation are dramatically worse in the developing world. Internal company documents made public Saturday reveal that Facebook has meticulously studied its approach abroad — and was well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes.

“The painful reality is that we simply can’t cover the entire world with the same level of support,” Samidh Chakrabarti, then the company’s civic integrity lead, wrote in a 2019 post on Facebook’s message board, adding that the company managed the problem by tiering countries for investment.

This story is based on those documents, known as the Facebook Papers, which were disclosed to the Securities and Exchange Commission by whistleblower Frances Haugen. They comprise research, slide decks and posts on the company message board, some previously reported by the Wall Street Journal. The story also draws on documents independently reviewed by The Post, as well as more than a dozen interviews with former Facebook employees and industry experts with knowledge of the company’s practices abroad.

The SEC disclosures, provided to Congress in redacted form by Haugen’s legal counsel and reviewed by a consortium of news organizations including The Post, suggest that as Facebook pushed into the developing world it didn’t invest in comparable protections.

According to one 2020 summary, although the United States comprises less than 10 percent of Facebook’s daily users, the company’s budget to fight misinformation was heavily weighted toward America, where 84 percent of its “global remit/language coverage” was allocated. Just 16 percent was earmarked for the “Rest of World,” a cross-continent grouping that included India, France and Italy.

Facebook spokesperson Dani Lever said that the company had made “progress” and had “dedicated teams working to stop abuse on our platform in countries where there is heightened risk of conflict and violence. We also have global teams with native speakers reviewing content in over 70 languages along with experts in humanitarian and human rights issues.”

Many of these additions have come in the past two years. “We’ve hired more people with language, country and topic expertise. We’ve also increased the number of team members with work experience in Myanmar and Ethiopia to include former humanitarian aid workers, crisis responders, and policy specialists,” Lever said.

Meanwhile, in India, Lever said, the “hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems.”

Globally, more than 90 languages have over 10 million speakers each. In India alone, the government recognizes 122 languages, according to its 2001 census.

In India, where the Hindu-nationalist Bharatiya Janata Party — part of the coalition behind Modi’s political rise — deploys inflammatory rhetoric against the country’s Muslim minority, misinformation and hate speech can translate into real-life violence, making the stakes of these limited safety protocols particularly high. Researchers have documented the BJP using social media, including Facebook and WhatsApp, to run complex propaganda campaigns that scholars say play to existing social tensions against Muslims.

In multiple meetings held between 2018 and 2019, members of the Next Billion Network, a collective of civil society actors working on technology-related harms in the global south, warned Facebook officials in the United States that unchecked hate speech on the platform could trigger large-scale communal violence in India, according to three people with knowledge of the matter, who spoke on the condition of anonymity to describe sensitive matters.

Despite Facebook’s assurances that it would increase moderation efforts, calls to violence against Muslims remained on the site even after being flagged when riots broke out in Delhi last year, according to the group. Gruesome images, falsely claiming to depict violence perpetrated by Muslims during the riots, were found by The Post. Facebook labeled them with a fact check, but they remained on the site as of Saturday.

More than 50 people were killed in the turmoil, the majority of them Muslims.

“They were told, told, told and they didn’t do one damn thing about it,” said a member of the group who attended the meetings. “The anger [from the global south] is so visceral on how disposable they view our lives.”

Facebook said it removed content that praised, supported or represented violence during the riots in Delhi.

Rising hate in India

India is the world’s largest democracy and a growing economic powerhouse, making it more of a priority for Facebook than many other countries in the global south. Low-cost smartphones and cheap data plans have led to a telecom revolution, with millions of Indian users coming online for the first time every year. Facebook has made great efforts to capture these customers, and its signature app has 410 million users in the country, according to the Indian government, more than the entire population of the United States.

The company activated large teams to monitor the platform during major elections, dispatched representatives to engage with activists and civil society groups, and conducted research surveying Indian people, finding many were concerned about the quantity of misinformation on the platform, according to several documents.

But despite the extra attention, the Facebook that Indians interact with is missing many of the key guardrails the company has deployed in the United States and other mostly English-speaking countries for years. One document stated that Facebook had not developed algorithms that could detect hate speech in Hindi and Bengali, even though they are the fourth- and seventh-most-spoken languages in the world, respectively. Other documents showed how political actors spammed the social network with multiple accounts, spreading anti-Muslim messages across people’s news feeds in violation of Facebook’s rules.

The company said it introduced hate-speech classifiers in Hindi in 2018 and Bengali in 2020; systems for detecting violence and incitement in Hindi and Bengali were added in 2021.

Pratik Sinha, co-founder of Alt News, a fact-checking site in India that routinely debunks viral fake and inflammatory posts, said that while misinformation and hate speech proliferate across multiple social networks, Facebook sometimes fails to take down bad actors.

“Their investment in a country’s democracy is conditional,” Sinha said. “It is beneficial to care about it in the U.S. Banning Trump works for them there. They can’t even ban a small-time guy in India.”

‘Bring the world closer together’

Facebook’s mission statement is to “bring the world closer together,” and for years, voracious expansion into markets beyond the United States has fueled its growth and profits.

Social networks that let citizens connect and organize became a route around governments that had controlled and censored centralized systems like TV and radio. Facebook was celebrated for its role in helping activists organize protests against authoritarian governments in the Middle East during the Arab Spring.

For millions of people in Asia, Africa and South America, Facebook became the primary way they experience the Internet. Facebook partnered with local telecom operators in countries such as Myanmar, Ghana and Mexico to give free access to its app, along with a bundle of other basic services like job listings and weather reports. The program, called “Free Basics,” helped millions get online for the first time, cementing Facebook’s role as a communication platform all around the world and locking millions of users into a version of the Internet controlled by an individual company. (While India was one of the first countries to get Free Basics in 2015, backlash from activists who argued that the program unfairly benefited Facebook led to its shutdown.)

In late 2019, the Next Billion Network ran a multicountry study of Facebook’s moderation, separate from the whistleblower’s documents, and alerted the company that large volumes of legitimate complaints, including death threats, were being dismissed in countries throughout the global south, including Pakistan, Myanmar and India, because of technical issues, according to a copy of the report reviewed by The Post.

It found that cumbersome reporting flows and a lack of translations were discouraging users from reporting bad content, which in many countries without more automated systems is the only way content gets moderated. Facebook’s community standards, the set of rules that users must abide by, were not translated into Urdu, the national language of Pakistan. Instead, the company flipped the English version so it read from right to left, mirroring the way Urdu is read.

In June 2020, a Facebook employee posted an audit of the company’s attempts to make its platform safer for users in “at-risk countries,” a designation given to nations Facebook marks as especially vulnerable to misinformation and hate speech. The audit showed Facebook had massive gaps in coverage. In countries including Myanmar, Pakistan and Ethiopia, Facebook didn’t have algorithms that could parse the local language and identify posts about covid-19. In India and Indonesia, it couldn’t identify links to misinformation, the audit showed.

The audit came a month after Ethiopia’s government postponed federal elections, a major step in the buildup to a civil war that broke out months later. Beyond being unable to detect misinformation in the country, Facebook also lacked algorithms to flag hate speech in its two biggest local languages, the audit found.

Facebook has at times made dramatic investments in response to negative coverage. For example, after a searing United Nations report connected Facebook to an alleged genocide against the Rohingya Muslim minority in Myanmar, the region became a priority for the company, which began flooding it with resources in 2018, according to interviews with two former Facebook employees with knowledge of the matter, who, like others, spoke on the condition of anonymity to describe sensitive matters.

Facebook took several steps to tighten security and remove viral hate speech and misinformation in the region, according to multiple documents. One note, from 2019, showed that Facebook expanded its list of derogatory terms in the local language and was able to catch and demote thousands of slurs. Ahead of Myanmar’s 2020 elections, Facebook launched an intervention that promoted posts from users’ friends and family and reduced viral misinformation, employees found.

A former employee said that it was easy to work on the company’s programs in Myanmar, but there was less incentive to work on problematic issues in lower-profile countries, meaning many of the interventions deployed in Myanmar were not used in other places.

“Why just Myanmar? That was the real tragedy,” the former employee said.

‘Pigs’ and fearmongering

In India, internal documents suggest Facebook was aware of the volume of political messaging on its platforms. One internal post from March shows a Facebook employee believed a BJP worker was breaking the site’s rules to post inflammatory content and spam political posts. The employee detailed how the worker used multiple accounts to post thousands of “politically-sensitive” messages on Facebook and WhatsApp during the run-up to the elections in the state of West Bengal. The efforts broke Facebook’s rules against “coordinated inauthentic behavior,” the employee wrote. Facebook denied that the operation constituted coordinated activity but said it had taken action.

A case study about harmful networks in India shows that pages and groups of the Rashtriya Swayamsevak Sangh, an influential Hindu-nationalist group associated with the BJP, promoted fearmongering anti-Muslim narratives with violent intent. A number of posts compared Muslims to “pigs” and cited misinformation claiming the Koran calls for men to rape female family members.

The group had not been flagged, according to the document, given what employees called “political sensitivities.” In a slide deck in the same document, Facebook employees said the posts also hadn’t been found because the company didn’t have algorithms that could detect hate speech in Hindi and Bengali.

Facebook in India has been repeatedly criticized for a lack of a firewall between politicians and the company. One deck on political influence on content policy from December 2020 acknowledged the company “routinely makes exceptions for powerful actors when enforcing content policy,” citing India as an example.

“The problem which arises is that the incentives are aligned to a certain degree,” said Apar Gupta, executive director of the Internet Freedom Foundation, a digital advocacy group in India. “The government wants to maintain a level of political control over online discourse and social media platforms want to profit from a very large, sizable and growing market in India.”

Facebook says its global policy teams operate independently and that no single team’s opinion carries more weight than another’s.

Earlier this year, India enacted strict new rules for social media companies, increasing government powers by requiring the firms to remove any content deemed unlawful within 36 hours of being notified. The new rules have sparked fresh concerns about government censorship of U.S.-based social media networks. They require companies to have an Indian resident on staff to coordinate with local law enforcement agencies. The companies are also required to have a process where people can directly share complaints with the social media networks.

But Junaid, the Kashmiri college student, said Facebook had done little to remove the hate-speech posts against Kashmiris. He went home to his family after his school asked Kashmiri students to leave for their own safety. When he returned to campus 45 days after the 2019 bombing, the Facebook post from a fellow student calling for Kashmiris to be shot was still on that student’s account.

Regine Cabato in Manila contributed to this report.
