A new human rights report accuses Facebook of contributing to the 2017 genocide of Rohingya Muslims in Myanmar, while other reports accuse the Menlo Park company of allowing its platform to be used to spread dangerous propaganda and hate speech in other countries.
In the months and years surrounding the Myanmar military's campaign of genocide against Rohingya Muslims, which killed 9,000 people and drove hundreds of thousands out of the country, Facebook "became an echo chamber of virulent anti-Rohingya content," Amnesty International claimed in a report released last week.
"Myanmar military actors and radical Buddhist nationalist groups systematically flooded the Facebook platform with content targeting the Rohingya, sowing disinformation about an impending Muslim takeover of the country and portraying the Rohingya as subhuman invaders," the report said. It also criticized the social media firm as badly understaffed for its operations in Myanmar before 2017.
Amnesty called these perceived failures "symptomatic" of Facebook's "widespread failure" to invest appropriately in content moderation in the developing world. For years, critics have slammed Facebook, which has nearly 3 billion users worldwide according to the company's most recent quarterly report, for its inability to effectively monitor content and enforce its policies. In the US, Facebook has been criticized for allowing false news and propaganda to spread around former President Donald Trump's 2016 election campaign.
Facebook, now overseen by parent company Meta, said it made "voluntary, lawful data disclosures" to support the United Nations investigation into the atrocities in Myanmar and the Gambia v. Myanmar case at the International Court of Justice.
"Our safety and integrity work in Myanmar is guided by feedback from local civil society organizations and international institutions, as well as our ongoing human rights risk management," said Rafael Frankel, a public policy director at Meta.
In three recent reports, Global Witness, a London-based rights group, has scrutinized Facebook's operations in other countries. Facebook did not immediately respond to questions about the Global Witness allegations.
In August, Global Witness claimed that Facebook showed "a terrible failure to detect election-related disinformation in ads" as Brazil's Oct. 2 presidential election drew near. Global Witness submitted 10 ads to Facebook: half contained false voting information, such as the wrong date and places to vote, while the other half aimed to delegitimize the electoral process, for example by casting doubt on electronic voting machines.
The group said it deliberately violated several of Facebook's election-integrity safeguards, including by not verifying the account used to place the ads. According to Global Witness, Facebook approved all of the ads; the group said it canceled them before they could be published.
In the election, former centre-left President Luiz Inácio Lula da Silva received 48% of the vote, while 43% went to far-right President Jair Bolsonaro, who has publicly questioned the integrity of the country's electoral system and voting machines. A runoff election is scheduled for Oct. 30.
In Ethiopia, according to a June report, Global Witness selected a dozen of the "worst examples" of hate speech posted on Facebook in Amharic, the country's dominant language. The group said all had previously been reported to Facebook as violating its guidelines, and the company had removed most of them. Global Witness resubmitted the 12 examples as ads, with four ads targeting each of the country's three main ethnic groups.
"The wording used includes violent speech that directly calls for people to be killed, starved or 'cleansed,'" the group said in the report. "Many of them amount to calls for genocide." Global Witness claimed all of the ads were approved, and said it canceled them so they would never run.
The group said it submitted its findings to Facebook, which responded that the ads should not have been approved and said it had invested heavily in safety measures in Ethiopia, hiring more staff with local expertise and building its capacity to catch hateful and inflammatory content. A week later, the group submitted two more hate-speech ads; both were approved "within hours," according to the group.
Since the conflict in northern Ethiopia began in November 2020, hundreds of thousands of people have been killed, millions have been displaced, and all sides have faced allegations of rape and torture.
Global Witness also conducted an experiment this summer in Kenya, where several elections have been marked by deadly violence; more than 1,000 people, most of them targeted along ethnic lines, were killed in violence surrounding the 2007 election. Ahead of Kenya's national elections in early August, the group identified 10 examples of hate speech and incitement to ethnic violence and submitted them as 20 advertisements, half in English and half in Swahili, "comparing specific tribal groups to animals and calling for rape, slaughter and beheading." According to Global Witness, the Swahili ads were approved immediately, while the English ads were initially rejected for violating Facebook's grammar and profanity policy.
"Facebook invited us to update the ads, and after minor fixes they were accepted as well," the group claimed.
Global Witness informed Facebook of its findings, and Facebook released a statement highlighting its work to remove harmful content ahead of the election. The group said it then submitted two more hate-speech ads, which Facebook also approved.
Kenya Human Rights Commission Chair Rosalyn Odede said human rights abuses declined during the August elections, but that there were still four deaths and 49 cases of assault, harassment and intimidation.