Facebook’s ban on extremist figures is not enough to stop fake news, we need more

Last week, Facebook and Instagram finally banned some of the leading far-right extremists in the United States, many of whom have contributed forcefully to the spread of fake news online. Among them were Alex Jones, the founder of Infowars; Milo Yiannopoulos, former editor of Breitbart News; and the conspiracy theorist Paul Joseph Watson. Infowars, an American fake news website, faces the strictest ban: Facebook will remove all of its content, as well as Facebook groups and events set up to discuss it. Although Facebook may feel it has cut off the head of the snake, many thought this step was long overdue. Even so, the bans will not stop, or even slow, the spread of fake news on social media.

The problem with misinformation on social media is that most Americans now get their news there. According to a study by the Pew Research Center, 68% of American adults at least occasionally got news on social media in 2018. Facebook is the most popular platform for news, followed by YouTube. As time goes by, however, more of them are becoming skeptical of what they read, with 57% saying the news they see on their social accounts is inaccurate, and they should be.

On social media, algorithms prioritize engagement over authority: the number of likes matters more than the author, and the post with the most shares and retweets reaches the widest audience. The same system that has helped create influencers is perfect for the birth and spread of fake news. Facebook's ban on these extremist figures will not stop that spread, because thousands of users keep engaging with false content online. Those accounts belong to two types of users: humans and bots.

On the one hand, bots are automated social media accounts, most prevalent on Twitter. In 2017, a study estimated that 8.5% of all Twitter accounts (around 23 million), 5.5% of all Facebook accounts (140 million) and 8.2% of all Instagram accounts (27 million) were bots. The core of their success, however, lies not in the number of accounts but in how they spread content. Bots work by making posts popular enough that real users believe them and share them. Fake news brings new ideas and content into the discussion (being false, it is also novel), which makes human users more willing to pass it on. An MIT study on the likelihood of a tweet being retweeted found that what makes a person retweet something to his or her followers is not whether he or she knows it is fake, but its novelty or its degree of “outrageousness.”

On the other hand, there are people behind fake news, such as Alex Jones through Infowars, or sites like America’s Last Line of Defense. The latter, as reported by The Washington Post, is a liberal satire site that posts fake news for conservatives. Even though the page explicitly states in its header that all its content is made up, people share its stories, and after a few shares a sense of trust “appears,” making these news items go viral. As the founders of the website have stated, the more extreme the content, the more it gets shared.

The consequences are inevitable: dozens of social media users have harassed survivors and parents of the Sandy Hook shooting, claiming it never happened; hundreds of parents have decided not to vaccinate their children in the belief that vaccines cause autism; and, of course, fake news led many voters to hate Hillary Clinton in 2016 (“crooked Hillary”). Thousands of people believe the world is flat.

As Facebook and Instagram ban extremist figures, we are left to wonder whether that is enough and what responsibility these platforms bear. Under US law, not much. Section 230 of the Communications Decency Act of 1996 establishes that internet platforms are not publishers and are not legally responsible for the content distributed on their sites. Although some people are calling for stricter regulation of social media, not much has been done yet. Last Friday, May 3rd, at the World Press Freedom Day event held at the United Nations, speakers were asked whether platforms should be considered publishers. New York University data journalism professor Meredith Broussard had a straight answer: “Should they be regulated? Yes. Are they publishers? Yes.” Other speakers were not so sure. But if Facebook or Twitter cannot be held liable, what can we do to stop fake news?

For now, there are multiple, scattered solutions. Newsrooms have doubled their efforts to debunk online rumors, such as those fueling the anti-vax movement. Facebook has tried to delete fake news content and educate journalists on how to use social media, and several nonprofits have started media literacy projects. However, in an environment hostile toward the media, the phrase “fake news” gets applied to anything contrary to one’s interests, not just actual misinformation, and reporters find it harder to assert their authority. Users tend to read what they already agree with and dismiss what they don’t believe to be true.

What is clear is that Facebook’s recent ban will not have a substantial effect on the larger misinformation problem. Infowars will find new platforms and new users for its message. Jones will survive and move to new outlets. Banning a handful of extremist figures is not enough. We need stricter regulation of platforms and broader media literacy initiatives.

About Josep Valor

Josep Valor-Sabatier is professor of information systems and information technology and holder of the Indra Chair of Digital Strategy. He received his Ph.D. in Operations Research from MIT and his Sc.D. in Medical Engineering from the Harvard-MIT Division of Health Sciences and Technology. Josep Valor teaches extensively at the senior executive level on Management Information Systems, Media Management, Management of Technology, and Strategy.

About Carmen Arroyo Nieto
