Meta ‘effectively bans’ endorsement of Palestine, new report on ‘systemic’ suppression across Facebook, Instagram finds


A new report from Human Rights Watch (HRW) found that Meta, the parent company of Facebook and Instagram, has systemically censored content in support of Palestine since Oct. 7.

The report, released Thursday, was based on 1,050 cases of censorship submitted to, and reviewed by, HRW. Of those, 1,049 involved content in support of Palestine being removed—while only one case involved the removal of content supportive of Israel.

HRW’s report comes in the wake of multiple highly publicized instances of Meta’s poor handling of content related to Palestine.

Palestinian users complained that Meta mistranslated Arabic bios to read “terrorist,” an error the social media giant apologized for and said it had fixed. And a prominent Palestinian influencer and photographer, whose family members were killed in a bombing in October, had his Instagram account inexplicably restricted after their deaths.

But HRW’s report focuses on censorship in the form of posts and comments being removed, and accounts being restricted or shadowbanned as a result.

Some of the key phrases censored or suppressed by Meta under its spam guidelines included “from the river to the sea, Palestine will be free,” “Free Palestine,” “Ceasefire now,” and “Stop the genocide,” Rasha Younes said at a virtual press conference Thursday morning.

Younes, the acting deputy director of LGBT Rights at HRW, added that the Palestinian flag emoji was also frequently removed by Meta under its spam policy or because, according to Instagram, it may be hurtful to others.

In more than 300 cases documented by HRW, users were unable to appeal restrictions placed on their accounts.

Younes noted that although incendiary content left online was outside the scope of the report, there were instances of Islamophobic and anti-Palestinian posts remaining on the platforms despite being reported to Meta.

“For example, a user reported a comment on their own post which said, ‘make Gaza a parking lot.’ After the complaint was reviewed by Instagram, the platform notified the user that the comment was not removed because it did not violate community guidelines,” she said.

Another instance Younes cited involved a user who commented, “I wish Israel success in this war, which is its right. I hope it will wipe Palestine off the face of the earth and the map.” She said Instagram found the comment did not violate its community guidelines; however, the platform did remove a response to that inflammatory comment in which another user asked for justification.

Other key issues raised in the report include Meta’s Dangerous Organizations and Individuals (DOI) policy, which includes Hamas; incorrect removal of newsworthy content; high rates of compliance with government takedown requests; and automated tools that filter and remove content without considering contextual factors.

“We found that in its scope and application, the DOI policy effectively bans endorsement of many major political Palestinian movements and fails discussions of current hostilities because Hamas falls under the most restrictive category in this policy,” said Deborah Brown, who authored the report.

She went on to explain that some individuals speaking out or neutrally discussing current events related to Hamas have had content removed under this policy, such as an Arabic news outlet that posted on Instagram a quote from the Ministry of Health in Gaza with a warning that Israeli forces had ordered the evacuation of a children’s hospital before a bombing.

“The repost of this was removed for violating the DOI policy, presumably because the Ministry of Health in Gaza is a part of the government run by Hamas,” Brown said.

Brown also noted instances in which images of the bodies of Palestinians killed in Gaza were mistakenly denied a newsworthiness exception and instead removed under Meta’s policies on nudity and/or incitement to violence.

“This is problematic because it not only censors images of abuses of Palestinians, but also suggests that Meta does not consider such images to serve the public interest, and that it could result in the discriminatory treatment of different users depending on their location,” Brown said.

HRW’s report made a number of recommendations for Meta to improve its content moderation policies, including overhauling its DOI policy and publishing a list of those classified under it.

The report comes amid heightened scrutiny of social media’s role in the war, and just one week after Sen. Elizabeth Warren (D-Mass.) wrote to Mark Zuckerberg expressing concerns over Meta’s moderation policies.

“Social media communications are vital sources of news from the ground, particularly given widespread communications blackouts, ongoing misinformation and information war efforts, and the killing of over 60 journalists and media workers in Gaza,” Warren wrote, later adding that users “deserve to know when and why their accounts and posts are restricted,” and need protection against discrimination.

The Daily Dot has reached out to Meta for comment about the report.


Source: https://www.dailydot.com/debug/palestine-content-facebook-human-rights-watch/