A Social Learning Theory Examination of the Complicity of Personalization Algorithms in the Creation of 'Echo Chambers' That Lead to Radicalization

By: Michael Wolfowicz

Today's internet, or Web 2.0, is a space governed by personalization algorithms that dictate what information a user is shown and shielded from based on their previous online activities. The search results, friend recommendations and video suggestions that users receive are all outputs of these algorithms. Some have referred to this as a "filter bubble": a user's future online exposures are decided for them, and certain content is made more available, or even recommended, based on the algorithms' inferences about their preferences. One negative side-effect of the "filter bubble" is that users can be disproportionately exposed to material and associations that confirm and reinforce their previously held biases. The result is that a user becomes part of an online "echo chamber", characterized by homogeneity, where polarization of ideas can lead to the adoption of more extreme stances. These processes are especially important for the study of online radicalization to extremist violence. Since the likelihood that an individual will be radicalized on the internet is a function of a network of factors, two questions arise: are personalization algorithms responsible for exposing a user to a greater volume, frequency and intensity of radicalizing material and associations than they would encounter if the algorithms were not shaping their online experience, and does this significantly affect the likelihood that the individual will be radicalized?
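The reinforcement dynamic described above can be illustrated with a deliberately simplified toy model. The sketch below is not any real platform's recommender; the viewpoints, weights and exploration parameter are all illustrative assumptions. It shows how a recommender that samples the next item in proportion to a user's past engagement tends to lock the user into the viewpoint they started with:

```python
import random

# Toy model: content items belong to one of five "viewpoints" (0..4).
# A crude stand-in for a personalization algorithm samples the next
# item in proportion to the user's past engagement with each viewpoint,
# plus a small exploration constant. All names and parameters are
# illustrative, not drawn from any real platform.
VIEWPOINTS = 5

def recommend(history, explore=0.1):
    """Sample the next viewpoint, biased toward past engagement."""
    weights = [history.count(v) + explore for v in range(VIEWPOINTS)]
    return random.choices(range(VIEWPOINTS), weights=weights)[0]

def simulate(steps=200, seed_view=2):
    """Run the feedback loop from a single initial engagement and
    return the final share of exposures from that starting viewpoint."""
    history = [seed_view]
    for _ in range(steps):
        history.append(recommend(history))
    return history.count(seed_view) / len(history)
```

Because early engagements raise the weight of a viewpoint, which in turn produces more engagements with it, the starting viewpoint typically ends up dominating the user's feed well beyond its one-in-five baseline, which is the "rich get richer" character of the filter bubble.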

The current research examines this issue from the perspective of Social Learning Theory, testing both the strength of the theory and the degree to which internet personalization, specifically on new social media (NSM) platforms, may be complicit in online radicalization. It employs a Randomized Controlled Trial (RCT) to compare user experiences under the normal influence of personalization with experiences in a space where algorithms and personalization settings are suppressed. The research also tests the impact of personalization algorithms on radical attitudes and beliefs by checking for differences in responses between the treatment and control groups. The findings may help direct countering violent extremism (CVE) policies, regulations and strategies, and may help focus efforts on effective prevention and intervention approaches in an area that has received little attention to date.
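The group comparison at the heart of an RCT like this can be sketched as a simple permutation test on attitude scores. The scores, group sizes and scale below are entirely hypothetical, and the test is a generic statistical procedure rather than the study's actual analysis plan:

```python
import random
import statistics

def mean_difference(treatment, control):
    """Difference in mean attitude scores between the two groups."""
    return statistics.mean(treatment) - statistics.mean(control)

def permutation_p_value(treatment, control, n_perm=2000, rng=None):
    """Two-sided permutation test: how often does random relabelling
    of participants produce a group difference at least as large as
    the one observed? Inputs are hypothetical survey scores."""
    rng = rng or random.Random(0)
    observed = abs(mean_difference(treatment, control))
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_t])
                   - statistics.mean(pooled[n_t:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm
```

Under random assignment, a small p-value would indicate that the difference in radical-attitude responses between the personalized and suppressed conditions is unlikely to be due to chance alone; the permutation approach avoids distributional assumptions that a t-test would require.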
