Sasha Havlicek on Mitigating the Spread of Online Extremism

Machines Like Us - A podcast by The Globe and Mail - Tuesday

Companies, international organizations and government agencies are all working to identify and eliminate online hate speech and extremism. It’s a game of cat and mouse: as regulators develop effective tools and new policies, extremists adapt their approaches to continue their efforts. In this episode of Big Tech, Taylor Owen speaks with Sasha Havlicek, founding CEO of the Institute for Strategic Dialogue, about her organization and how it is helping to counter online extremism and hate speech.

Several issues make Havlicek’s work difficult. The first challenge is context: regional, cultural and religious traditions play a role in defining what is and what is not extremist content. Second, there is no global norm on online extremism to reference. Third, jurisdictions present hurdles: who is responsible for deciding on norms and setting rules? And finally, keeping up with evolving technology and tactics is a never-ending battle. As online tools become more effective at identifying and removing extremist content and hate speech, extremist groups find ways to circumvent those systems.

These problems are amplified by engagement-driven algorithms. While the internet enables individuals to choose how and where they consume content, platforms exploit users’ preferences to keep them engaged. “The algorithms are designed to find ways to hold your attention, … that by feeding you slightly more titillating variants of whatever it is that you're looking for, you are going to be there longer. And so that drive towards more sensationalist content is I think a real one,” Havlicek says. These algorithms contribute to the creation of echo chambers, which are highly effective tools for radicalizing users.