New Research Shows Facebook Algorithm Does Not Deepen Political Polarization
A groundbreaking research project conducted by Meta, the parent company of Facebook and Instagram, in collaboration with US academic institutions, published its initial findings on Thursday. Contrary to popular belief, the study found that Facebook’s content-ranking algorithm does not shape users’ political beliefs.
The research, which focused on the social media giant’s role in American democracy during the 2020 US presidential election, consisted of four papers published in the scientific journals Science and Nature. Led by Talia Stroud of the University of Texas at Austin and Joshua Tucker of New York University, the study found that while the algorithm significantly influenced users’ on-platform experiences and determined the content they saw, it did not measurably affect their political attitudes.
The academics ran experiments on tens of thousands of users, altering how they received content over a three-month period. However, the researchers acknowledged that the short duration of the experiments may have limited their ability to observe significant changes in political beliefs, given the long-standing polarization in the United States.
The findings challenge the prevailing idea that social media echo chambers contribute to the problems of American democracy. One of the papers published in Nature stated that “these findings challenge popular narratives blaming social media echo chambers for the problems of contemporary American democracy.”
Facebook’s algorithm, which uses machine learning to prioritize posts based on users’ interests, has been accused of creating “filter bubbles” and facilitating the spread of misinformation. To investigate this, the researchers recruited approximately 40,000 volunteers through invitations on their Facebook and Instagram feeds. In one experiment, one group saw posts listed chronologically, while the other experienced the normal algorithm-driven feed. The study found that users in the chronological group spent significantly less time on the platforms than those in the algorithm group. While the chronological feed exposed users to more content from moderate friends and ideologically mixed sources, it also increased the amount of political and untrustworthy content they saw. These changes, however, did not produce any noticeable shift in users’ political attitudes.
Another study found that suppressing reshared content, which constitutes a significant portion of what users see on Facebook, reduced both the proportion of political content in users’ feeds and their political knowledge. However, it did not affect downstream political attitudes or behaviors.
A third paper examined the impact of content from like-minded users, pages, and groups on users’ feeds. The research revealed that like-minded content accounted for the majority of what American Facebook users see. However, suppressing this content had no effect on ideological extremity or belief in false claims.
In contrast, the fourth paper confirmed that there is significant ideological segregation on Facebook, with politically conservative users more isolated in their news sources than liberals. The study also found that conservatives encountered more false political news on the platform than liberals did.
Meta, formerly known as Facebook, welcomed the research findings, stating that they contribute to a growing body of evidence that social media has little impact on harmful political polarization or key political attitudes and behaviors.
This research provides valuable insight into the role of social media platforms in shaping political beliefs and addresses concerns surrounding Facebook’s algorithm. While the algorithm does influence users’ on-platform experiences, its impact on political polarization appears limited, suggesting it is not a driving force behind deepening political divisions.