Sunday 22 September 2024

Big Tech's complicity in genocide: The unforgivable silence of online platforms

A damning report, “Palestinian Digital Rights, Genocide, and Big Tech Accountability”, by 7amleh, a Palestinian-led non-profit organisation focused on protecting the human rights of Palestinians, has laid bare the disturbing and active role that major online platforms and big tech companies play in perpetuating human rights abuses against Palestinians. While the world watches the horrors unfold in Gaza, the role of these digital accomplices cannot be ignored. The report highlights that platforms like Meta, X and YouTube, and tech giants Google and Amazon, have enabled, facilitated and even profited from these atrocities, effectively shielding war crimes under a digital smokescreen.

The findings are a harrowing indictment of how big tech companies, under the guise of neutrality, have become active participants in censorship, disinformation and incitement to violence. They have provided crucial infrastructure that underpins Israel’s military actions, allowing their platforms to be weaponised, silencing Palestinian voices while amplifying hate speech and calls for genocide. The complicity of these platforms is not a mere oversight; it is an entrenched system of deliberate decision-making that prioritises profits over human rights.

Systematic censorship of Palestinian voices

At the heart of the report’s findings is a shocking pattern of systematic censorship targeting Palestinian voices. Between October 2023 and July 2024, over 1,350 instances of censorship were documented on major platforms, including Facebook, Instagram, X and TikTok. These platforms disproportionately targeted Palestinian journalists, activists and human rights defenders, with Meta’s platforms being among the worst offenders. The censorship took many forms: accounts were suspended, content takedowns became routine and distribution of pro-Palestinian narratives was heavily restricted.

Meta’s manipulative algorithm changes played a key role in this censorship. The report reveals that during the ongoing war in Gaza, Meta altered its content moderation policies to lower the threshold for flagging Palestinian content, reducing the accuracy of its filters and triggering unnecessary takedowns. For Palestinian content, Meta’s filters acted on a mere 25 per cent certainty of a violation, compared with the usual 80 per cent applied elsewhere. These so-called “temporary risk response measures” were never lifted, subjecting Palestinian content creators to outsized scrutiny. This is not an isolated incident; it is a calculated, discriminatory policy that silences marginalised voices and hinders the free flow of information at a time when it is needed most.
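To give a concrete sense of why such a threshold change matters, the short Python sketch below shows how lowering an automated classifier’s confidence bar from 80 per cent to 25 per cent sweeps up far more posts. It is purely illustrative: the scores, the function name and the sample data are invented assumptions, not a description of Meta’s actual moderation pipeline.

```python
# Hypothetical illustration only: how lowering an automated moderation
# threshold inflates takedowns. All scores are invented; this is not
# Meta's actual system.

def flagged(posts, threshold):
    """Return the posts whose violation score meets the threshold."""
    return [p for p in posts if p["violation_score"] >= threshold]

# Simulated classifier scores for ten posts
# (0.0 = clearly benign, 1.0 = certain violation).
posts = [{"id": i, "violation_score": s} for i, s in enumerate(
    [0.10, 0.20, 0.30, 0.35, 0.40, 0.55, 0.60, 0.70, 0.85, 0.90])]

standard = flagged(posts, 0.80)  # the usual 80% certainty bar
lowered = flagged(posts, 0.25)   # the 25% bar the report describes

print(len(standard), "posts removed at the 80% threshold")  # 2
print(len(lowered), "posts removed at the 25% threshold")   # 8
```

Under these made-up numbers, the lower bar quadruples the number of removals, which is the mechanism the report says drove the wave of unnecessary takedowns of Palestinian content.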

As 7amleh’s report highlights, Meta’s broken promises to safeguard free speech, coupled with its biased content moderation, exacerbated the situation for Palestinians. Human Rights Watch had already condemned Meta for its systemic censorship of Palestinian voices during the war, with over 1,050 instances of content removal on Facebook and Instagram. In nearly all cases, this censorship targeted peaceful, pro-Palestinian content while allowing violent, anti-Palestinian content to flourish unchecked. Comments like “Free Palestine”, “Stop the Genocide” and “Ceasefire Now” were removed under Meta’s spam guidelines, reflecting a dangerous double standard that stifles legitimate political discourse.

Platforms as instruments of genocide

The report makes clear that online platforms are not simply neutral forums but have become instruments of incitement to genocide. Between October 2023 and July 2024, over 3,300 instances of harmful content – including incitement to genocide – were documented, the majority on X and Facebook. These platforms allowed high-level Israeli officials and other users to openly call for the extermination of Palestinians, dehumanising them as “sub-humans”, “animals” and worse. This genocidal rhetoric wasn’t limited to obscure corners of the internet. It was promoted, amplified and left unchallenged by the very platforms that claim to be committed to community standards and human rights.

For instance, on X, a December 2023 post by the deputy mayor of Jerusalem described blindfolded Palestinian detainees as “ants” and called for burying them alive. Although this specific post was eventually removed, countless others like it remain, fuelling a climate of violence and dehumanisation against Palestinians. This failure to combat hate speech directly contravenes international law, particularly in light of the International Court of Justice’s January 2024 order, which directed Israel to prevent and punish incitement to genocide.

These platforms are not just failing in their duty to protect free speech; they are actively facilitating the spread of genocidal propaganda. In the case of Meta, the report details how the Israeli government sent over 9,500 takedown requests to Meta between October and November 2023, with a shocking 94 per cent compliance rate. This high level of cooperation with a state actively committing war crimes raises serious concerns about the ethical boundaries of these companies. Meta’s decision to comply with such requests without transparency or accountability reveals a deeper issue: these platforms are willing to become tools of state oppression when the price is right.

The role of Big Tech: Project Nimbus and the automation of killing

Beyond the sphere of social media, Google and Amazon’s collaboration with the Israeli military under Project Nimbus casts an even darker shadow over the tech industry’s role in this conflict. The $1.2 billion cloud computing contract, as the report highlights, provides critical infrastructure to power Israel’s AI-driven Lavender and Gospel targeting systems – systems that are directly linked to the mass civilian casualties in Gaza.

The Lavender system, in particular, functions as a tool for automated killings, identifying targets based on massive data inputs and feeding them into the Israeli military’s bombing campaigns. The report describes how Lavender alone identified over 37,000 potential targets, contributing to the deaths of thousands of civilians, including women and children. By providing cloud services to facilitate this mass-scale targeting, Google and Amazon are directly implicated in these violations of international law. Despite mounting global pressure, both companies continue to support Israel’s military operations under Project Nimbus, even as the civilian death toll in Gaza rises.

Hate speech and disinformation: A coordinated assault on truth

The report goes on to document a deluge of hate speech and disinformation campaigns, often spearheaded by Israeli officials and amplified by online platforms. These campaigns, which include the systematic dissemination of dehumanising content on Telegram, X and YouTube, have targeted Palestinians both inside Gaza and across the diaspora. The report cites three million instances of violent content in Hebrew aimed at Palestinians on X alone, much of it coordinated by Israeli state actors.

Perhaps most troubling is the Israeli government’s influence operation known as STOIC, which ran a disinformation campaign targeting US and Canadian lawmakers to undermine the work of The United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA). This campaign, orchestrated with the help of AI, spread false narratives that led to the defunding of UNRWA, cutting off critical humanitarian aid to Palestinians. This is not merely a failure of moderation but an example of how platforms can be weaponised for state-driven disinformation, with devastating consequences for innocent civilians.

Profiting from genocide: Advertising amidst war crimes

As if censorship and disinformation weren’t enough, the report also exposes how platforms like Facebook have profited from harmful advertisements promoting violence against Palestinians. The investigation found that Facebook ran ads calling for the assassination of pro-Palestinian activists and the forced expulsion of Palestinians from the West Bank. Meta profited from these campaigns, further entrenching its complicity in the human rights violations unfolding in Gaza.

Meanwhile, YouTube ran ads from the Israeli government that used graphic imagery to sway public opinion in favour of its military actions in Gaza. Despite YouTube’s policies against violent content, these ads flooded social media with incendiary narratives, particularly in Europe and the US, contributing to the normalisation of war crimes under the guise of counter-terrorism.

Time for accountability

The findings of this report should compel the international community to act. It is no longer acceptable for tech companies to hide behind vague policies and empty commitments to free speech while facilitating the mass killing and silencing of a besieged population. The complicity of Meta, X, YouTube, Google and Amazon in these atrocities must be brought into the spotlight, and these companies must be held accountable for their role in enabling these crimes.

These platforms are not neutral arbiters of truth; they are corporations driven by profit, willing to accommodate genocidal regimes and turn a blind eye to the suffering of millions if it serves their bottom line. As the report makes clear, it is time for the world to demand that these companies stop profiting from the destruction of Palestinian lives. Big tech’s silence and complicity are unforgivable, and these companies must not be allowed to escape responsibility any longer.

https://www.middleeastmonitor.com/20240921-big-techs-complicity-in-genocide-the-unforgivable-silence-of-online-platforms/
