Facebook has long struggled to control extremist content on its platform, from the 2016 US elections, when Russians manipulated American voters through polarizing ads, to the propaganda that spread through the social network and fueled violence in Myanmar.
A new report by Jeff Horwitz and Deepa Seetharaman in the Wall Street Journal suggests that Facebook knew that its algorithm was dividing people, but did very little to address the problem. It noted that one of the company’s internal presentations from 2018 illustrated how Facebook’s algorithm aggravated polarizing behavior in some cases.
A slide from that presentation said if these algorithms are left unchecked they would feed users more divisive content:
Our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked, Facebook would feed users more and more divisive content in an effort to gain user attention & increase time on the platform.
According to the WSJ, Zuckerberg & Co. shelved this presentation and decided not to apply its findings to any of the social network’s products. Moreover, Joel Kaplan, Facebook’s chief of policy, worried that the proposed changes might disproportionately affect conservative users and publications.
In a statement, Facebook said it has learned a lot since 2016 and has built a robust integrity team to tackle such issues:
We’ve learned a lot since 2016 and are not the same company today. We’ve built a robust integrity team, strengthened our policies and practices to limit harmful content, and used research to understand our platform’s impact on society so we continue to improve.
However, WSJ’s report noted that even before the company formed this team, a Facebook researcher named Monica Lee found in 2016 that “64% of all extremist group joins are due to our recommendation tools.”
Facebook did explore ways to tackle the polarization problem, with proposals such as tweaking its algorithm and creating temporary sub-groups to host heated discussions. However, these concepts were shot down because they were “antigrowth.”
In the end, the social network did little, citing the principle of free speech — a value Zuckerberg has invoked often lately.
Earlier this month, Facebook named its Oversight Board (its Supreme Court, if you will), which can overrule the social network’s decisions on content moderation. Hopefully, the company will be forthcoming in sharing its research and findings with the board, rather than waiting for someone else to report glaring problems with its products.
You can read WSJ’s full report on Facebook’s divisive algorithms and its internal studies here.