
Recently, a study conducted by Eko, a non-profit organization focusing on corporate responsibility, revealed that social media platforms Meta and X approved advertisements containing violent hate speech targeting Muslims and Jews in Germany, ahead of the country’s federal elections. The research aimed to investigate whether the ad review systems of these platforms would accept or reject submissions with hateful and violent content.

The researchers created test ads with anti-Muslim and anti-Jewish slurs, calls for violence against immigrants, and AI-generated images of mosques and synagogues being destroyed. They submitted these ads for review in mid-February, just before the federal elections in Germany, which were scheduled to take place on Sunday, February 23.

Hate Speech Ads Scheduled

The results showed that X approved all 10 of the hate speech ads submitted by the researchers just days before the election. Meta approved five of the ads to run on Facebook (and potentially Instagram) but rejected the other five, citing concerns about potential risks of political or social sensitivity that might influence voting.

However, the 5 ads approved by Meta included violent hate speech that compared Muslim refugees to “viruses,” “vermin,” or “rodents,” and labeled Muslim immigrants as “rapists.” Additionally, one approved ad called for synagogues to be set on fire to “stop the globalist Jewish rat agenda.”

Notably, none of the AI-generated images used in the hate speech ads were labeled as artificially generated, yet Meta still approved half of the 10 ads. This is despite Meta’s policy requiring advertisers to disclose the use of AI imagery in ads touching on social issues, elections, or politics.

X, by contrast, approved the five hateful ads Meta had rejected, as well as five additional ads containing similarly violent hate speech targeting Muslims and Jews. These ads included messaging that attacked “rodent” immigrants and suggested that Jews were lying about climate change to destroy European industry and gain economic power.

One of the approved ads on X featured AI-generated imagery depicting a group of shadowy men surrounded by stacks of gold bars, with a Star of David on the wall above them, perpetuating antisemitic tropes. Another ad approved by X contained a direct attack on the SPD, the center-left party leading Germany’s coalition government, with a false claim that the party wanted to take in 60 million Muslim refugees from the Middle East.

Elon Musk, the owner of X, has used the platform to personally intervene in the German election, calling for voters to support the far-right AfD party to “save Germany.” He has also hosted a livestream with the AfD’s leader, Alice Weidel, on X.

To prevent any harm, Eko’s researchers disabled all test ads before they were scheduled to run, ensuring that no users were exposed to the violent hate speech. Even so, the tests highlight significant flaws in both platforms’ approach to ad content moderation.

The findings also suggest that the ad platforms could be earning revenue from distributing violent hate speech. This raises concerns about the effectiveness of their content moderation policies and the potential harm caused by these ads.

EU’s Digital Services Act in the Frame

Eko’s tests indicate that neither platform is properly enforcing its own stated bans on hate speech in ad content. The research also suggests that the EU’s Digital Services Act (DSA) has had little discernible impact on how Meta operates.

“Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect,” an Eko spokesperson stated. “Rather than strengthening its ad review process or hate speech policies, Meta appears to be backtracking across the board.”

Eko has submitted its findings to the European Commission, which oversees the enforcement of key aspects of the DSA. The group also shared the results with both Meta and X, but neither company responded. The EU has ongoing DSA investigations into Meta and X, including concerns about election security and illegal content, but has yet to conclude these proceedings.

The European Commission has the power to impose penalties of up to 6% of global annual turnover for confirmed breaches of the DSA. Additionally, systemic non-compliance could lead to temporary blocks on access to violating platforms. However, the EU is still taking its time to make a decision on the Meta and X probes, leaving any potential DSA sanctions up in the air.

Meanwhile, German voters are set to go to the polls, and a growing body of civil society research suggests that the EU’s flagship online governance regulation has failed to shield the country’s democratic process from tech-fueled threats. Earlier this week, Global Witness released the results of tests on X and TikTok’s algorithmic “For You” feeds in Germany, which suggest the platforms are biased in favor of promoting AfD content.

Civil society researchers have also accused X of blocking data access to prevent them from studying election security risks in the run-up to the German poll, despite the DSA being intended to enable such access. “The European Commission has taken important steps by opening DSA investigations into both Meta and X, now we need to see the Commission take strong action to address the concerns raised as part of these investigations,” Eko’s spokesperson said.

The spokesperson added, “Our findings, alongside mounting evidence from other civil society groups, show that Big Tech will not clean up its platforms voluntarily. Meta and X continue to allow illegal hate speech, incitement to violence, and election disinformation to spread at scale, despite their legal obligations under the DSA. Regulators must take strong action, including enforcing the DSA and implementing pre-election mitigation measures.”

The campaign group warns that the EU is facing pressure from the Trump administration to soften its approach to regulating Big Tech. “In the current political climate, there’s a real danger that the Commission doesn’t fully enforce these new laws as a concession to the U.S.,” the spokesperson suggested.
