From recruitment to resistance: When it comes to fighting radicalisation, fixing the tech is only one piece of the puzzle
An interview with Aizat Shamsuddin, the founder of Initiative to Promote Tolerance and Prevent Violence (INITIATE.MY), on how technology has allowed radicalisation efforts to scale up, and what needs to be done to address the problem.

Let’s start with your background: What drew you into this work on violent extremism and tech?
I’ve been working in the prevention of violent extremism space for over a decade now. I started by exploring why people join extremist groups, initially looking at the more traditional, offline paths of radicalisation: religious gatherings, lectures at universities, face-to-face indoctrination. Over time, though, that focus has shifted towards online radicalisation: social media, messaging apps and other digital tools are now central to the recruitment process.
My own story is part of that. I was once radicalised myself. I was part of a group that spread hate and division, and I subscribed to those ideas. But I reached a point where I saw through it — how religion and identity were being exploited. That personal experience, paired with my background in law and security, pushed me to help others understand this process and advocate for better responses to it.
Given your personal experience, you must see tech’s role very differently. Online platforms are often blamed for extremist recruitment. Does solving the tech issue solve the radicalisation issue? And how does the persuasive tech you’ve studied compare to traditional forms of recruitment?
Solving the tech issue doesn’t solve radicalisation. It goes deeper than that. Radicalisation is a complex process rooted in real-world grievances — identity, injustice, exclusion. Technology doesn’t cause it, but it amplifies it. Platforms become tools — enablers that make recruitment faster, broader, and more emotionally charged.
In the past, radicalisation happened in physical spaces — religious classes, kinship circles, peer groups. It was personal. Now, with platforms like Telegram, WhatsApp and social media, extremist networks can scale those same dynamics. Encrypted chat groups mimic real-world intimacy: people talk about their day, share what they’re cooking, vent frustrations. Over time, radical messages are layered into that trust and familiarity. It’s subtle but powerful.
At the same time, tech platforms and algorithms quietly reinforce those pathways. You no longer have to seek out extremist content — it finds you. Based on what you engage with, the system feeds you more of the same. That’s persuasive tech in action: creating a radicalising environment without coordination, often without people realising they’re in one.
So yes, tech speeds things up — it adds scale, emotion and persistence. But unless we address the root causes — alienation, marginalisation, grievance — tech will remain just a powerful tool, not the origin.
AI is also accelerating things. Have you seen it affecting how radical content is created or spread?
Absolutely. Generative artificial intelligence (Gen AI) speeds up how messages are crafted and shared. During the Gaza conflict, we saw women using AI-generated images of themselves as warriors — something we hadn’t seen before. It’s a form of imagined solidarity. While not necessarily radical, it reflects how political identities and affiliations are being reimagined through new tech. Terror groups are also exploiting these tools to enhance their operations and messaging.
Who’s pushing these messages? And what makes up this “radicalising environment” you mentioned earlier?
It’s a mix. Sometimes it’s coordinated — organised groups deliberately spreading propaganda and hate. But often it’s uncoordinated behaviours amplified by algorithms. For example, someone expressing racially or religiously supremacist views gets algorithmically fed more of the same content. That expands the audience well beyond just the already radicalised; it pulls in casual viewers and creates a self-reinforcing loop.
That’s what we mean by a “radicalising environment”. It’s not just about bad actors — it’s the structure of the system itself. It can normalise extreme views, isolate people from broader perspectives and push them further down that path.
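To make that self-reinforcing loop concrete, here is a minimal, purely hypothetical Python sketch (not any platform’s actual ranking code) of how engagement-driven recommendation can narrow a feed around one kind of content. The categories, probabilities and update rule are all assumptions chosen for illustration.

```python
import random

random.seed(42)

CATEGORIES = ["news", "sports", "cooking", "extreme"]

def recommend(weights):
    """Pick one category with probability proportional to its engagement weight."""
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

def simulate(steps=200, extreme_pull=0.9, baseline=0.3):
    # Hypothetical assumption: the user engages with most content 30% of the
    # time, but with "extreme" content 90% of the time once it is shown.
    weights = {c: 1.0 for c in CATEGORIES}  # start with no preference
    for _ in range(steps):
        shown = recommend(weights)
        engaged = random.random() < (extreme_pull if shown == "extreme" else baseline)
        if engaged:
            weights[shown] += 1.0  # engagement feeds straight back into ranking
    return weights

print(simulate())
```

Run a few times, the “extreme” weight tends to dominate the final tally: each click nudges the ranking, and the ranking nudges the next click. That compounding effect, rather than any single piece of content, is the kind of environment described above.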
And what about the far-right? You've done work looking at those influences, too, right?
Yes. For too long, the world has focused on Islamist extremism — especially after 9/11. But since the Christchurch attack and the rise of neo-Nazism, there has been greater awareness of far-right extremism. What's striking is how these ideologies localise. In Southeast Asia, we’ve seen versions of racial and religious supremacy adapted to local contexts: East Asian supremacy in Singapore, Malay-Muslim majoritarianism in Malaysia, nationalist narratives in Indonesia. It’s evolving, and we need to pay attention.
Can you tell me more about INITIATE.MY?
We formalised INITIATE.MY in 2020–2021, in response to rising inter-group tensions after Malaysia’s general election in 2018. We saw a spike in racial and religious hate, but there was little data to back up public concerns. So we began documenting and quantifying cases, turning trends into graphs and analysis. That became our core: a data-driven approach to understanding and countering extremism.
We now work with civil society, law enforcement and regional networks — offering research, training and policy support. Evidence has enabled us to engage more effectively, especially with institutions.
Do you feel hopeful? Are there signs of progress?
I’m quite pessimistic about this, but we do have some solutions. Our flagship programme, Peace Lab, trains participants in content creation and critical thinking. We’ve even brought in platforms like TikTok to show how positive actors can leverage algorithms, not just be hurt by them.
We’ve also briefed companies like Meta and TikTok on emerging threats — sharing keywords, contextual cues and trends to inform their policy teams. But the shift to AI-based moderation is worrying. From our experience, current AI systems still struggle with sarcasm, coded language and cultural nuance. They can’t replace human moderators — especially not in high-risk areas like extremism.
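As a rough illustration of that limitation, here is a toy Python sketch (hypothetical phrases, not a real moderation pipeline) of why exact-match filtering misses the coded spellings and sarcasm that a human reviewer with context would catch.

```python
# Hypothetical phrases for illustration only; not a real moderation system.
BLOCKLIST = {"attack the outsiders"}  # an exact phrase flagged in the past

posts = [
    "we should attack the outsiders",       # caught: exact match
    "we should att4ck the 0utsiders",       # missed: coded spelling
    "time to greet our 'guests' properly",  # missed: sarcasm / in-group code
]

for post in posts:
    flagged = any(phrase in post.lower() for phrase in BLOCKLIST)
    print("FLAGGED" if flagged else "missed ", "|", post)
```

Real AI moderation is, of course, far more sophisticated than a blocklist, but the underlying gap is similar: once language becomes coded, ironic or culturally specific, pattern-matching against past examples stops being enough.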
What keeps you going?
Honestly, despite the challenges, this is what I care about. We all have our own passions in life, and I guess this is mine. It sounds cliché, but it’s something I’m good at, and I’ve invested more than a decade in this work. I see others doing the same — human rights defenders, digital rights advocates, researchers wanting to do good for the world. In a world sliding into greater polarisation and violence, I think it’s vital that we all keep pushing back.
Aizat Shamsuddin is the founder of Initiative to Promote Tolerance and Prevent Violence (INITIATE.MY).