Imagine Truth Walking a Tightrope – and Meta Cutting the Net
In the fast-moving world of social media, truth is already fragile. Now picture Meta – the company behind Facebook, Instagram, and Threads – removing professional fact-checkers and replacing them with a crowdsourced system called Community Notes. It was supposed to be a safety net. Instead, it’s full of holes.
On August 4, The Washington Post revealed what happens when that net fails.
Community Notes: A Good Idea That’s Failing Fast
In early 2025, Meta CEO Mark Zuckerberg ended the company’s U.S. fact-checking partnerships. Instead of trained professionals, Meta now relies on users to submit “notes” correcting false or misleading posts. Sounds democratic – but the system is barely functioning. Tech columnist Geoffrey Fowler tested the feature over four months, submitting 65 corrections for viral hoaxes. Just three made it through. Even with credible sources like Snopes or Bloomberg News, most were never published.
Why? Because Meta’s “bridging algorithm” only displays notes when users across the political spectrum agree. That kind of consensus is rare – especially on topics like elections, public health, or hate speech. Even obvious AI-generated fakes often slip through. And some notes that do get published are inaccurate or unhelpful.
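To see why that consensus so rarely forms, consider a simplified model of the rule. The sketch below is a toy stand-in, not Meta's actual system: the real scorer (a matrix-factorization approach X open-sourced for its Community Notes, which Meta reportedly adapted) is more sophisticated, and the threshold and vote probabilities here are invented purely for illustration.

```python
import numpy as np

# Toy model of a cross-spectrum consensus rule. A deliberate
# simplification: the real scorer (X's open-source matrix-factorization
# approach, which Meta reportedly adapted) is more sophisticated, and
# the 0.6 threshold and vote probabilities below are invented.

rng = np.random.default_rng(42)

def note_is_published(helpful_votes, rater_leaning, threshold=0.6):
    """Publish a note only if raters on BOTH sides mostly call it helpful.

    helpful_votes: array of 1.0/0.0 ratings, one per rater.
    rater_leaning: -1 (one side of the spectrum) or +1 (the other).
    """
    left = helpful_votes[rater_leaning < 0]
    right = helpful_votes[rater_leaning > 0]
    if left.size == 0 or right.size == 0:   # no cross-spectrum raters yet
        return False
    return left.mean() >= threshold and right.mean() >= threshold

# A note correcting a polarized claim: 90% of one side rates it
# helpful, but only 20% of the other side does.
leaning = rng.choice([-1, 1], size=500)
p_helpful = np.where(leaning < 0, 0.90, 0.20)
votes = (rng.random(500) < p_helpful).astype(float)

print(note_is_published(votes, leaning))   # False: the note never surfaces
```

Under this toy rule, a correction that a majority of all raters find helpful can still never appear if that support is concentrated on one side of the spectrum, which is exactly the "consensus paralysis" described below.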
Meanwhile, Meta Refuses to Sign Europe’s AI Code
Meta’s failure to fix misinformation doesn’t stop with Community Notes. It also refused to sign the EU’s AI Code of Practice – a voluntary commitment to ethical AI use, transparency, and harm prevention. Zuckerberg has called the EU’s regulations “censorship.” But to us, and to many digital rights experts, they’re common sense. Platforms that reach billions must take responsibility for the tools they build. The EU’s Digital Services Act already requires companies like Meta to reduce systemic risks like disinformation. But by scaling back moderation and dodging commitments, Meta is signaling it’s not interested in global responsibility.
This Isn’t Just a Policy Shift. It’s a Public Risk.
When over 50% of U.S. adults rely on social media for news, broken fact-checking systems aren’t just frustrating – they’re dangerous.
Lies spread faster than truth. And when platforms delay action, harmful content – including antisemitic conspiracy theories – gains traction. European fact-checkers say harassment has increased since Meta’s rollback. That’s not a coincidence.
Why Community Notes Falls Short
Here’s what’s not working:
Consensus Paralysis: The algorithm demands too much agreement, even for clear-cut facts.
Limited Scope: Notes only apply to U.S. posts – a problem for a global platform.
Burnout: Contributors lose motivation when their corrections go nowhere.
Low Standards: Some published notes are inaccurate, vague, or cite “just Google it.”
As one former developer put it, “Community Notes was never meant to fly solo.”
There’s a Better Way
We believe platforms must invest in hybrid systems: trained professionals and informed communities working together. Here’s how Meta could fix it:
- Allow notes on international posts
- Prioritize urgent misinformation
- Incentivize contributors with badges or recognition
- Reintegrate professional fact-checkers
If Meta truly wants to combat disinformation, it must stop treating truth like a popularity contest.
Why This Matters to Us
Every day, FOA volunteers encounter the real-world fallout from unchecked lies – Holocaust denial, blood libels, and modern antisemitic conspiracies. Many are powered by AI, designed to mislead, and spread rapidly on the very platforms Meta controls.
We’ve always said: the fight against online hate begins with education and accountability. If platforms won’t step up, we will.
Take Action with FOA
Meta may have weakened its safety net – but you can still help catch the lies.
✅ Support FOA’s training programs to teach thousands how to detect and report hate
✅ Report misinformation when you see it
✅ Push platforms to prioritize truth over profit
Donate now – because the digital battlefield needs real defenders.