The Trump administration’s push to force U.S. control of TikTok has been framed as a national security measure. But for vulnerable communities, the far more urgent question is whether new owners will maintain moderation guardrails — or rip them out in the name of growth. What’s at stake is not only the fight against antisemitism but also the fight against racism, misogyny, conspiracy theories and health misinformation.
We have seen what happens when moderation collapses. Elon Musk gutted Twitter’s safety team — and antisemitic hate speech soared. Meta loosened rules on disinformation. YouTube reinstated creators promoting electoral and COVID falsehoods. The result was not richer conversation but amplified vitriol, rampant conspiracy theories and communities driven offline.
TikTok’s record is mixed. After the Oct. 7 Hamas attacks, Jewish creators reported waves of antisemitic abuse. Holocaust denial and blood libel memes circulated openly. But unlike some platforms, TikTok engaged with Jewish groups, hired an antisemitism policy lead and tightened enforcement. Imperfect — but progress.
Now the fear is that new American owners will backslide — because outrage is what fuels engagement. Algorithms push misogynistic tirades to teenage boys. Anti-vaccine propaganda spreads to young parents. Conspiracy theories about immigrants or election fraud reach millions before fact-checkers can catch up. New ownership alone doesn’t guarantee oversight if virality remains the business model.
Some object that moderation is censorship. But the First Amendment restricts government — not private actors. Newspapers reject op-eds all the time. Synagogues remove hecklers. Platforms enforcing their own rules is no different. To call it censorship is to confuse constitutional law with responsible stewardship.
And online speech isn’t like handing out leaflets. A flyer might reach a hundred people. A TikTok video can reach 5 million in an afternoon, algorithmically targeted to receptive — and vulnerable — audiences. The danger lies not only in content, but in the speed, scale and selectivity of its amplification. To treat it as just another soapbox is to ignore reality.
For Jews, the cost is painfully real. From medieval libels to modern conspiracy theories, antisemitism spreads fastest when lies go unchecked. Online platforms can turbocharge that within hours. But Jews are not alone. Women subjected to torrents of abuse, Black creators hit with racial slurs, parents bombarded with anti-vaccine lies — these communities all face the same silencing: They log off, pull back or stay silent. A hands-off internet yields less speech, because vulnerable voices are drowned out.
Thoughtful moderation expands speech. By removing destructive content and slowing viral lies, platforms open space for more voices. That requires clear rules, consistent enforcement and transparency. Better systems would include published standards, enforcement data, meaningful appeals, friction before sharing, labeling falsehoods, limiting repeat offenders and opening algorithms to scrutiny.
As of early this month, the administration has approved a deal giving U.S. investors — led by Oracle and Silver Lake — a controlling stake in TikTok’s American operations, while ByteDance retains a minority share. The outcome will test whether U.S. oversight means safer platforms — or merely louder ones.
A digital commons without guardrails does not yield democracy. It yields a swamp where the loudest and most extreme voices dominate. If we demand an online world where all communities can participate safely, moderation isn’t optional. It is the price of a platform’s legitimacy.
Moderation is not the enemy of free expression. It is the foundation that makes free expression possible.