In the final part of our conversation with Dr. Halperin, we confront three urgent questions at the heart of our mission: How do we fight back against AI-generated images of suffering that are weaponized to demonize Israel, when so few pause to check if they are real? Where is the line between political outrage and the justification of violence, and have we already crossed it? Can regulation truly hold back this tide, or are we left clinging to hope?
Dr. Halperin does not hand us easy answers. Instead, she brings something far more valuable: clarity forged through decades of research and lived experience. Amid all this, she points to a recent development, the UK Government’s crackdown on antisemitic doctors in the NHS, as proof of what is possible when institutions finally choose to act.
When manipulated footage and AI-generated imagery spread faster than corrections, what tools or standards can journalists and researchers realistically use to push back? And do you think audiences are becoming more or less equipped to tell the difference?
It is very difficult to change the minds of users who have been exposed to AI-generated content. For example, activists in the pro-Palestine movement post AI-generated images of a skeletal child with text claiming that the baby is dying of hunger caused by Israel. Audiences do not necessarily look for the truth. The solution should lie in the inspection and removal of antisemitic content.
On March 24, 2026, the UK Government announced a crackdown on racist and antisemitic doctors in the National Health Service. Social media played a major role in this dramatic move. After October 7, there were cases of NHS doctors who allegedly posted antisemitic content on social media. The Government’s Independent Adviser Against Antisemitism, Lord John Mann, provided recommendations that led to this action. I follow this development closely. Both Lord Mann and I are members of the London Center for the Study of Contemporary Antisemitism, he as a patron and I as a fellow.
Editor’s note: The NHS development Dr. Halperin references is significant. On March 24, 2026, the UK Government launched the most sweeping reform of the General Medical Council in over four decades, directly targeting doctors who have used racist and antisemitic language, particularly on social media. The reforms, based on Lord Mann’s rapid review, will give regulators new powers to act swiftly when medical professionals cross professional boundaries. Health Secretary Wes Streeting stated that everyone, regardless of race, religion, or belief, should feel safe seeking NHS care. For us at FOA, this is a concrete example of what enforcement looks like when governments choose to act, and a reminder that social media accountability doesn’t stop at the platforms themselves. It extends to the professionals and institutions whose members use those platforms to spread hate.
In your view, what is the specific role of online narratives in moving people from political anger to the justification of violence? Is there a point of no return?
Social media serves as a tool for recruiting followers, facilitating protests, and raising funds. As such, it plays a direct role in violence on the ground. The narratives that terrorist organizations and extremists deliver online influence followers, mostly young people, around the globe, even in cases where they do not engage beyond the small screen. As I said earlier, in the current climate, there must be enforcement and regulation to prevent us from reaching the point of no return.
Considering everything you have studied about media narratives, conflict, and the spread of hate online, what gives you hope today? Are there any trends or developments that suggest the information environment could really get better?
My work requires resilience. If I were not strong, I could not do my job. It is difficult to maintain an optimistic outlook when I see the scale of hate messaging and online radicalization, including by senior professors and academic leaders. And this is not happening only online. Recently, there were arson attacks in the heart of a residential area of the Jewish community in London.
We are still in a war. An actual war, in which people are being killed every day in Israel, and a daily fight to keep our heads above the water in the diaspora. The information environment could improve once there is firm regulation. Currently, the solutions offered by regulatory bodies, such as the Online Safety Act, do not fully protect citizens from harm.
Editor’s note: When we asked Dr. Halperin what gives her hope, she chose honesty over comfort. Her words, "a daily fight to keep our heads above the water in the diaspora," will echo for many of us. But notice how she channels that struggle: researching, speaking, publishing, testifying before parliaments, and refusing to be silenced. That is the answer. It is the decision to keep showing up.
At FOA, this is what we ask of our community every day: show up, report, use your voice, and refuse to look away. If Dr. Halperin can persist for over twenty years, standing in the heart of the storm, then surely we can do our part, wherever we are.