In the wake of the recent Hamas terrorist attacks on Israel, Meta has intensified its content moderation efforts, particularly in Hebrew and Arabic. According to the social media giant, it is removing seven times as much content daily for violating its Dangerous Organizations and Individuals policy as it did in the two months preceding the attacks. Meta disclosed the surge in moderation activity in a blog post outlining its ongoing efforts to curb the spread of violent content during the Israel-Hamas conflict.
Although Meta’s blog post does not explicitly reference the EU or its Digital Services Act, it coincided with European Commissioner Thierry Breton’s open letter to Meta reminding the company of its obligations to curb disinformation and illegal content on its platforms. Breton expressed concern about the spread of illegal content and disinformation in the EU through certain online platforms and urged Meta CEO Mark Zuckerberg to ensure that the company’s systems are effective. The commissioner sent similar letters to X (formerly Twitter) and TikTok.
Meta says it took action on more than 795,000 pieces of content in Hebrew and Arabic that violated its policies in the three days following the October 7th attacks. The company explicitly bans Hamas from its platforms. It has also implemented temporary measures such as blocking specific hashtags and prioritizing crisis-related reports on Facebook and Instagram Live. Notably, because of the high volume of removals, the company is taking down content without disabling the associated accounts, acknowledging that some content may be removed in error.
To address the ongoing situation, Meta has set up a special operations center staffed with experts, including fluent Hebrew and Arabic speakers. The approach echoes its handling of the genocidal violence in Myanmar, where, after facing criticism, it assembled a team of native Myanmar-language speakers.
However, Meta’s content moderation efforts have faced challenges and criticism recently. Members of its Trusted Partner program, designed to let expert organizations flag concerning content on Facebook and Instagram, have complained of slow response times. Meta has also come under scrutiny for changing its moderation policies related to the Russia-Ukraine conflict.
Notably, X, in its own outline of moderation efforts around the conflict, did not specify the languages its response team speaks. The European Commission has formally requested information from X under the Digital Services Act, citing the alleged spread of illegal content and disinformation, including terrorist and violent content and hate speech.
Meta’s response to the Israel-Hamas conflict underscores the challenges tech companies face in keeping their platforms safe during a crisis. As the situation develops, the company will have to keep navigating the complexities of content moderation and compliance with international regulations such as the Digital Services Act.