ICYMI: Meta's Hate Policy Rollback Linked to Increased Antisemitism

May 9, 2025

Jewish members of Congress have experienced a nearly fivefold increase in antisemitic harassment on Facebook since the start of the year, according to new ADL research. ADL believes this is due to a highly controversial change to Meta's content moderation policies for its platforms in January 2025.

Because of the change to the policies that govern what users are allowed to say and do on Meta’s platforms—Instagram, Facebook and Threads—Meta now relies on user reporting to identify much of the hateful, “lawful but awful” speech, instead of proactively removing it without a user report.

“This is a trade-off,” Meta CEO Mark Zuckerberg said at the time the change was announced in January 2025. “It means that we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

Following this dramatic policy shift, ADL expressed deep concern, warning that this change would be a “step backward.” ADL and others further cautioned that dialing back content moderation efforts would allow toxic and hateful content to surge. 

Measuring hateful comments

To measure how much harmful content this policy rollback might allow through, researchers at the ADL Center for Technology and Society (CTS) collected and analyzed toxic and antisemitic comments directed at the 30 Jewish members of Congress with Facebook accounts (27 Democrats, two Republicans, and one independent). Public officials are frequent targets of online harassment, especially those from marginalized groups. ADL research also shows that Jewish users are disproportionately likely to be harassed for their religious identity and therefore targeted with antisemitic hate, and that major platforms fail to act on regular reports of the term “Zionist” used as a slur.

We examined the accounts of Jewish members of Congress as likely targets of antisemitic hate, where we would be able to observe the impact of changes to Meta’s policy or enforcement. 

Key Findings

  • The average number of comments per day—both hateful and non-hateful—appears to have increased sharply (by a factor of eight) from February 4 on.
  • This increase includes a nearly fivefold spike in antisemitic comments per day on the Facebook accounts of 30 Jewish members of Congress.
  • The proportions of antisemitic and toxic content remained relatively consistent from January through early April, suggesting that Meta is dialing down all content moderation, allowing an increase in the total volume of toxic and hateful comments.
 

Antisemitic and toxic comments increased sharply in February

Although these 30 Jewish members of Congress posted somewhat more frequently over the course of January, comments on their posts increased sharply and suddenly in early February. It is possible that users simply started commenting more often, but it is more likely that Meta implemented its new policy and began letting through more comments, hateful and non-hateful alike. Meta indicated the new policy would take effect a few weeks after the January 7 announcement, though it did not specify exactly when.

When we analyzed the antisemitic content of these comments using ADL’s antisemitism classifier (a machine learning tool to detect antisemitic content), we observed a similar increase: antisemitic comments were relatively few (6.5 per day on average) until February 4, when they spiked and remained much higher (29.90 per day through April 7, 4.6 times as many). It appears that Facebook is not moderating many of these hateful comments.
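The before-and-after comparison above can be sketched as a simple rate calculation. This is an illustrative example, not ADL's actual analysis pipeline; the toy counts below only mirror the approximate scale reported in this article (roughly 6.5 antisemitic comments per day before February 4, roughly 30 after).

```python
from datetime import date

def rate_change(daily_counts, cutoff):
    """Average daily count before and after a cutoff date, and the fold increase.

    daily_counts: dict mapping datetime.date -> number of flagged comments that day.
    """
    before = [n for d, n in daily_counts.items() if d < cutoff]
    after = [n for d, n in daily_counts.items() if d >= cutoff]
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    return avg_before, avg_after, avg_after / avg_before

# Toy data: a flat 6 comments/day before the cutoff, 30/day after.
counts = {date(2025, 2, d): 6 for d in range(1, 4)}
counts.update({date(2025, 2, d): 30 for d in range(4, 8)})
avg_before, avg_after, fold = rate_change(counts, date(2025, 2, 4))
# fold is 5.0 for this toy data
```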

We also observed a corresponding increase in comments we identified as toxic, using Google Jigsaw’s Perspective API (a machine learning classifier that can detect toxic speech, such as comments that are “rude, disrespectful, or unreasonable” and which may cause others to leave the discussion). While there were 14.38 toxic comments per day until February 4, after that date, they jumped to 188.94 per day on average. On some days, toxic comments spiked: Our research tallied 701 toxic comments across 66 total posts in one day. A small percentage of posts attracted the majority of antisemitic and/or toxic comments: 11.7% of posts accounted for 80% of toxic comments and 12.17% accounted for 80% of antisemitic comments. This pattern may be due to how Facebook’s algorithm surfaces content, but it may also be that hateful or harassing replies encourage others to respond similarly.
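The concentration statistic above (a small share of posts drawing most of the abuse) can be computed Pareto-style: sort posts by their toxic-comment counts and accumulate until 80% of the total is covered. A minimal sketch, with hypothetical per-post counts rather than ADL's data:

```python
def share_of_posts_for(counts, target=0.80):
    """Smallest fraction of posts that together hold `target` share of all comments."""
    total = sum(counts)
    running, posts_needed = 0, 0
    for c in sorted(counts, reverse=True):  # heaviest posts first
        running += c
        posts_needed += 1
        if running >= target * total:
            break
    return posts_needed / len(counts)

# Toy example: 10 posts, two of which draw most of the toxic comments.
toxic_per_post = [50, 30, 5, 4, 3, 3, 2, 1, 1, 1]
share = share_of_posts_for(toxic_per_post)
# Here 20% of posts account for 80% of toxic comments
```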

Examples

When we examined the content of these antisemitic and toxic comments, we observed hundreds of highly polarized, angry replies, laden with abuse and invective (calling members of Congress names, cursing, accusing them of lying, denigrating them, and so on). On average, 3.55% of comments were toxic. The total proportion of antisemitic comments was lower (0.60%) but increased from fewer than one per post per day before February 4 to at least one per post, on average, starting on that date.

The following examples are a sample of those comments our classifier determined most likely to contain antisemitism. Among these 30 Jewish members of Congress, the most frequently targeted were Senators Bernie Sanders and Chuck Schumer and Reps. Brad Sherman and Brad Schneider; Rep. David Kustoff, R-TN, one of the two Jewish Republicans in our sample, was the 13th most targeted.

Typically, we observed a few explicitly antisemitic comments per post, including numerous instances of the use of “Zionist” as a slur (even though Meta declared last year that it would treat the term as violative when used that way).

The antisemitic comments often appeared alongside a flood of toxic invective: name-calling, expletives, accusations, anti-immigrant and anti-LGBTQ+ rhetoric, as well as pro-Trump sentiment.

The primary exceptions were posts addressing the Israeli-Palestinian conflict, which drew responses that were predominantly anti-Israel and anti-Jewish rather than generically toxic. 

The posts covered a wide range of topics: political content on issues such as tax cuts, Medicaid, and President Trump’s agenda, as well as holiday well-wishes and other non-political content. Most comments were highly politicized; only a few engaged in substantive discussion.

Some posts by the Jewish members of Congress received a mix of supportive comments and vindictive ones, but many appeared overwhelmed with hundreds of angry, virulent responses, or sometimes, just emojis and memes.

Conclusion: Rolling back content moderation allows hate to thrive

The results of this study support our expectation that Meta's new policies would allow increased hate, antisemitism, and toxicity on Facebook, and potentially on its other platforms as well. Meta contended that its previous enforcement over-moderated, catching too much permissible content. But rolling back its content moderation practices means that highly visible Jewish users, such as members of Congress, are now receiving many times more antisemitic hate.

It is also possible that the policy change has signaled to hateful users that such abuse will now be tolerated. By allowing hateful content to remain on the platform, Meta is in effect encouraging this content on its platforms. ADL's research shows that Jews, women, people of color, LGBTQ+ people, and other marginalized groups are disproportionately targeted with identity-based harassment. This kind of harassment is associated with such groups withdrawing from online spaces and participation in public life.

This case study focused on Jewish public figures, whose Facebook pages are managed by staff who can report hateful comments. Their greater visibility may also mean that other users report these comments. Because Meta's changes are intended to be platform-wide, at least within the United States, regular users can expect to experience increased hateful comments in their feeds as well. Meta bears the responsibility for the harm that this change causes: with its moderation policy rollback, the company is enabling, if not actively encouraging, antisemitic, hateful, and toxic activity on its platforms. 

Methodology

ADL researchers collected 349,021 direct comments from the Facebook pages of the 30 self-identified Jewish members of Congress with Facebook accounts between January 1 and April 7, 2025.*  We analyzed these comments using ADL’s own antisemitism classifier to rate the likelihood of antisemitic content in each comment and a toxicity classifier (Google Jigsaw’s Perspective API).

Meta's prior policies governing hate did not consider all antisemitic content to be violative, specifically implicitly hateful tropes or narratives that often require broader context to recognize. An earlier ADL study of hate in Facebook neighborhood groups, for example, found many hateful, antisemitic comments that the company deemed acceptable under its community standards guidelines. We compared the total volume of comments to those our classifier identified as antisemitic.

We ran a linear regression to examine the relationship between the number of posts (3,598 total) by Jewish members of Congress on Facebook and the number of days since the start of data collection on January 1, 2025. The results revealed a statistically significant, gradual increase in post count over time, by 1% every 10 days. We also observed that post volume typically increased during weekdays and decreased over the weekend; comments follow this pattern to some degree. After February 4, approximately one month after Meta announced the policy change, there was a greater increase in comments: about four times as many per post.
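The regression described above can be sketched with an ordinary least-squares fit of daily post counts against days since data collection began. This is a hedged illustration, not ADL's actual code; the synthetic data below simply encodes the roughly 1%-per-10-days growth reported here, from an assumed baseline of 40 posts per day.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return b, a

# Synthetic data: counts creeping up ~1% every 10 days from a base of 40.
days = list(range(0, 100, 10))
posts = [round(40 * (1 + 0.001 * d)) for d in days]
slope, intercept = ols_slope(days, posts)
# slope is positive, ~0.04 extra posts per day of elapsed time
```

In the actual study one would also test the slope for statistical significance (e.g., via its standard error), which this sketch omits.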

The rise in antisemitic and toxic comments mirrored the overall increase in comment volume, with a slight increase in the rate of toxic comments and a slight decrease in the rate of antisemitic comments, both of which were statistically significant. The overall number of comments increased at a slightly greater rate than antisemitic comments. The sudden uptick in comment volume suggests a change in how Meta moderated comments before and after February 4.

*There are 35 self-identified Jewish members of Congress: 30 Democrats, four Republicans, and one independent. Four did not have Facebook accounts; one had an account that was created too recently to be included.