Violent extremism: The ghost or the machine?

Lydia Khalil argues that a parliamentary inquiry into violent extremism should call on tech companies to reveal their recommendation algorithms.

The Australian parliament’s Joint Committee on Intelligence and Security is currently holding an inquiry into extremist movements and radicalism in Australia. It is only the second issues-based inquiry that this particular committee has conducted; the first was into the politically charged question of foreign interference. The hearings indicate the importance that parliament has placed on addressing concerns around violent extremism, an issue that is challenging many democracies around the world.

The threat of terrorism and the nature of violent extremism have shifted substantially in the two decades since the 11 September 2001 attacks, which led to the establishment of most of the present crop of programs, departments and paradigms for counterterrorism and countering violent extremism. While the threat from international and homegrown jihadist actors remains, increasing polarisation and disinformation have contributed to the growth of a diverse array of extremist movements across the ideological spectrum, particularly among the extreme right. The inquiry’s terms of reference will allow the committee to examine whether the government’s current policy settings and legislation are adequate to address a diverse, complex and decentralised violent extremist landscape.

Such a focus has generated substantial interest, with government, technology companies, academics and civil society groups offering submissions for consideration, not all yet published. The committee even received attention from the subjects of the inquiry, with at least one extremist group putting forward its own submission to argue it should not be considered as an extremist organisation. The committee sensibly declined this submission on the grounds of “parliamentary procedure and standing orders”.

The committee is seeking to understand the ways in which extremist groups can recruit, mobilise, incite violence, and put forward extremist and hateful narratives via internet enabled communications. (Full disclosure: I appeared as a witness during the committee’s two-day public hearings, putting forward my own submission focusing on the role technology, particularly social media, plays in extremism.) 

However, there are broader considerations around extremism and technology that go beyond how extremists are using the internet. Such questions centre on the internet platforms themselves – whether there is something about their design, logic and permissive environment that contributes to and facilitates extremism.

The concern is that the structure of internet platforms is increasing individuals’ exposure to extremist content, which drives polarisation and contributes to other social harms that undermine democracy, including violent extremism.

The Australian Security Intelligence Organisation has said that parts of these internet platforms act as “echo chambers of hate”. The inquiry will seek to determine whether this is indeed the case, and, if so, how to address the problem. For their part, Australian technology industry representatives stated during the hearings that the industry has adequately monitored and moderated extremist content on internet platforms, while allowing that this remains ongoing work.

Crucial yet unanswered questions revolve around how the recommendation algorithms used by various internet platforms such as Google, Facebook and Twitter have potentially led the average user to more extremist content, and how easy access to that content plays into a person’s radicalisation process.

Such questions can’t be fully answered, because there is little transparency around how recommendation algorithms are designed. So should government regulate algorithmic transparency? I think it should. Algorithmic transparency matters not just for extremism and technology, but for understanding how algorithms shape more and more of our daily lives and decisions. Algorithms must be known and explainable if they are to be governed.

This article first appeared on The Interpreter by the Lowy Institute on 20 May 2021.

Lydia Khalil is a Research Fellow at the Lowy Institute and an Associate Research Fellow at the Alfred Deakin Institute for Citizenship and Globalisation. She is also a founder and director of Arcana Partners, a research analysis and strategic consultancy firm. Lydia has spent her career focusing on the intersection between governance and security.