
Julia Ebner: The extremism researcher on bot armies and online hate

Tech & Culture ✍️ Anna Berger 🕒 2026-03-20 16:57
Julia Ebner warns of digital manipulation and online hate

There's an invisible war raging out there in the digital sphere. While we're scrolling through our feeds, liking posts from long-forgotten acquaintances or getting annoyed at angry comments, they're already hard at work: armies of bots, steered by extremists, trolls and political strategists. No one in Europe has scrutinised this phenomenon more meticulously in recent years than Julia Ebner. The Austrian extremism researcher, who works at the Institute for Strategic Dialogue (ISD) in London, has been warning for years about the systematic infiltration of our social networks. And her latest analyses are more alarming than ever.

The method: how bots are taking over our minds

It would be too simplistic to assume every hate campaign is just the work of a few angry individuals. What Julia Ebner and her team uncover through undercover research is highly professional, organised manipulation. It's no longer about individual trolls, but bot armies controlling thousands of accounts simultaneously. They don't just post radical slogans; they interact, amplify each other, and artificially boost the reach of extreme minorities to levels they'd never achieve in the real world. The tactic is always similar: under posts about refugees, vaccinations or elections, masses of identical narratives suddenly appear in the comment sections. For Julia Ebner, this is a clear pattern: "What looks like a spontaneous public outcry is often the result of carefully planned digital attacks," she sums up the findings from her undercover investigations. The really insidious part? The bots are learning. They mimic human behaviour, posting harmless cat pictures first to build trust, and then they strike.
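The pattern Ebner describes, masses of near-identical narratives posted by different accounts, is also what makes coordinated campaigns detectable. As a purely illustrative sketch (the function names, sample data and threshold here are hypothetical, not part of any real detection system), a researcher might normalise comment text and flag narratives repeated verbatim by several distinct accounts:

```python
import re
from collections import Counter

def normalize(text):
    # Lowercase and strip punctuation so trivial variations
    # ("Open borders!!!" vs "open borders") collapse together
    return re.sub(r"[^\w\s]", "", text.lower()).strip()

def flag_coordinated(comments, threshold=3):
    """Return narratives posted by at least `threshold` distinct accounts.

    `comments` is a list of (account, text) pairs; the threshold of 3
    is an arbitrary illustrative choice.
    """
    counts = Counter()
    authors = {}
    for account, text in comments:
        key = normalize(text)
        counts[key] += 1
        authors.setdefault(key, set()).add(account)
    return {key: sorted(accs) for key, accs in authors.items()
            if len(accs) >= threshold}

# Hypothetical sample data
comments = [
    ("acct_1", "Open borders are destroying us!"),
    ("acct_2", "open borders are destroying us"),
    ("acct_3", "Open borders are destroying us!!!"),
    ("acct_4", "Lovely weather today."),
]
print(flag_coordinated(comments))
# → {'open borders are destroying us': ['acct_1', 'acct_2', 'acct_3']}
```

Real coordinated campaigns are of course harder to catch than this, precisely because, as Ebner notes, the bots are learning to vary their wording and mimic human behaviour.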

The deadly impact of likes and shares

Many still underestimate the explosive power of this digital manipulation. But in books like Going Dark and The Rage, Julia Ebner has impressively documented how online hate speech translates into real-world violence. She shows how terrorist organisations and far-right groups use the same algorithms to recruit vulnerable young people. The platforms themselves become accomplices because their algorithms reward outrage and radicalism – they push the worst content to the top of timelines simply because it generates the most interaction. A particularly worrying example is so-called deepfakes. In a world where you soon won't be able to trust any video or audio, Julia Ebner sees a new dimension of disinformation heading our way. "We're facing a serious test for democracy," she warns. Because when facts no longer matter, only the loudest and most unscrupulous voices win out in the end.

What can we do? The expert has clear demands

But Julia Ebner wouldn't be one of the most distinguished researchers in this field if she only offered grim predictions. She's calling on tech companies to finally deliver radical transparency. Deleting a few obvious hate posts isn't enough. The algorithms need to be overhauled; they must no longer reward the spread of extremism. We also need:

  • Greater digital literacy in the community: We have to learn to recognise manipulative content and question it critically.
  • Independent research access: So far, platforms like Facebook or X (formerly Twitter) rarely provide insight into their data.
  • International cooperation: Digital manipulation doesn't stop at borders. Only if countries and blocs like Australia, the UK and the EU act together can we stop these virtual mercenaries.

The work of Julia Ebner is an essential compass in these chaotic times. She dives into the darkest corners of the internet to show us all what's brewing down there. We should take her warnings seriously, because the battle for control over our minds has long since begun. And we're all in the middle of it, whether we like it or not.