Three ways Facebook could reduce fake news without resorting to censorship

Dec 5, 2016

(republished with permission from The Conversation.)

Jennifer Stromer-Galley, Syracuse University

The public gets a lot of its news and information from Facebook. Some of it is fake. That presents a problem for the site’s users, and for the company itself.

Facebook cofounder and chairman Mark Zuckerberg said the company would find ways to address the problem, though he didn’t acknowledge its severity. And without apparent irony, he made this announcement in a Facebook post surrounded – at least for some viewers – by fake news items.

Other technology-first companies with similar power over how the public informs itself, such as Google, have worked hard over the years to demote low-quality information in their search results. But Facebook has not made similar moves to help users.

What could Facebook do to meet its social obligation to sort fact from fiction for the 70 percent of internet users who access Facebook? If the site is increasingly where people are getting their news, what could the company do without taking up the mantle of being a final arbiter of truth? My work as a professor of information studies suggests there are at least three options.

Facebook’s role

Facebook says it is a technology company, not a media company. The company’s primary motive is profit, rather than a loftier goal like producing high-quality information to help the public act knowledgeably in the world.

Nevertheless, posts on the site, and the surrounding conversations both online and off, are increasingly involved with our public discourse and the nation’s political agenda. As a result, the corporation has a social obligation to use its technology to advance the common good.

Discerning truth from falsehood, however, can be daunting. Facebook is not alone in raising concerns about its ability – and that of other tech companies – to judge the quality of news. The director of a nonprofit fact-checking group based at the University of Pennsylvania told Bloomberg News that many claims and stories aren’t entirely false. Many have kernels of truth, even if they are very misleadingly phrased. So what can Facebook really do?

Option 1: Nudging

One option Facebook could adopt involves using existing lists identifying prescreened reliable and fake-news sites. The site could then alert those who want to share a troublesome article that its source is questionable.

One developer, for example, has created an extension to the Chrome browser that indicates when a website you’re looking at might be fake. (He calls it the “B.S. Detector.”) In a 36-hour hackathon, a group of college students created a similar Chrome browser extension that indicates whether the website the article comes from is on a list of verified reliable sites, or is instead unverified.

These extensions present their alerts while people are scrolling through their newsfeeds. At present, neither of these works directly as part of Facebook. Integrating them would provide a more seamless experience, and would make the service available to all Facebook users, beyond just those who installed one of the extensions on their own computer.

The company could also use the information the extensions generate – or their source material – to warn users before they share unreliable information. In the world of software design, this is known as a “nudge.” The warning system monitors user behavior and notifies people or gives them some feedback to help alter their actions when using the software.

This has been done before, for other purposes. For example, colleagues of mine here at Syracuse University built a nudging application that monitors what Facebook users are writing in a new post. It pops up a notification if the content they are writing is something they might regret, such as an angry message with swear words.

The beauty of nudges is the gentle but effective way they remind people about behavior to help them then change that behavior. Studies that have tested the use of nudges to improve healthy behavior, for example, find that people are more likely to change their diet and exercise based on gentle reminders and recommendations. Nudges can be effective because they give people control while also giving them useful information. Ultimately the recipient of the nudge still decides whether to use the feedback provided. Nudges don’t feel coercive; instead, they’re potentially empowering.
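A share-time nudge of this kind is simple to sketch. The following Python fragment is purely illustrative – the domain names are invented, and a real system would draw on curated databases like those behind the browser extensions described above:

```python
from typing import Optional

# Hypothetical flagged-domain list, for illustration only.
UNRELIABLE_DOMAINS = {"totallyrealnews.example", "dailyclickbait.example"}

def domain_of(url: str) -> str:
    """Extract the host portion of a URL."""
    return url.split("//")[-1].split("/")[0].lower()

def nudge_before_share(url: str) -> Optional[str]:
    """Return a gentle warning if the link's source is on the flagged list,
    or None to let the share proceed without comment."""
    if domain_of(url) in UNRELIABLE_DOMAINS:
        return ("Heads up: this article comes from a site that has been "
                "flagged as unreliable. Share anyway?")
    return None  # the user keeps full control either way
```

Crucially, the function only ever returns a message or nothing – it never blocks the share, which is what keeps this a nudge rather than censorship.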

Option 2: Crowdsourcing

Facebook could also use the power of crowdsourcing to help evaluate news sources and indicate when news that is being shared has been evaluated and rated. One important challenge with fake news is that it plays to how our brains are wired. We have mental shortcuts, called cognitive biases, that help us make decisions when we don’t have quite enough information (we never do), or quite enough time (we never do). Generally these shortcuts work well for us as we make decisions on everything from which route to drive to work to what car to buy. But, occasionally, they fail us. Falling for fake news is one of those instances.

This can happen to anyone – even me. In the primary season, I was following a Twitter hashtag on which then-primary candidate Donald Trump tweeted. A message appeared that I found sort of shocking. I retweeted it with a comment mocking its offensiveness. A day later, I realized that the tweet was from a parody account whose handle looked nearly identical to Trump’s, with just one letter changed.

I missed it because I had fallen for confirmation bias – the tendency to overlook some information because it runs counter to my expectations, predictions or hunches. In this case, I had disregarded that little voice that told me this particular tweet was a little too over the top for Trump, because I believed he was capable of producing messages even more inappropriate. Fake news preys on us the same way.

Another problem with fake news is that it can travel much farther than any correction that might come afterwards. This is similar to the challenges that have always faced newsrooms when they have reported erroneous information. Although they publish corrections, often the people originally exposed to the misinformation never see the update, and therefore don’t know what they read earlier is wrong. Moreover, people tend to hold on to the first information they encounter; corrections can even backfire by repeating wrong information and reinforcing the error in readers’ minds.

If people evaluated information as they read it and shared those ratings, the truth scores, like the nudges, could be part of the Facebook application. That could help users decide for themselves whether to read, share or simply ignore. One challenge with crowdsourcing is that people can game these systems to try to drive biased outcomes. But the beauty of crowdsourcing is that the crowd can also rate the raters, just as happens on Reddit or with Amazon’s reviews, to reduce the effects and weight of troublemakers.
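The “rate the raters” idea can be expressed as a weighted average. This Python sketch uses invented names and numbers; a production system would need far more sophistication, but it shows how a troublemaker’s vote can be discounted without being silenced:

```python
def truth_score(ratings, reputation):
    """Weighted average of crowd ratings for one article.

    ratings:    list of (user, score) pairs, where score runs from
                0.0 (rated false) to 1.0 (rated true)
    reputation: each rater's own crowd-assigned weight; unknown raters
                get a neutral default so newcomers still count a little
    """
    total_weight = sum(reputation.get(user, 0.5) for user, _ in ratings)
    if total_weight == 0:
        return None  # nothing trustworthy to report yet
    return sum(reputation.get(user, 0.5) * score
               for user, score in ratings) / total_weight

# Invented example: the troll's "false" rating barely moves the score,
# because the crowd has rated that rater down.
ratings = [("alice", 0.9), ("bob", 0.8), ("troll", 0.0)]
reputation = {"alice": 1.0, "bob": 0.8, "troll": 0.1}
# truth_score(ratings, reputation) is about 0.81,
# versus an unweighted average of about 0.57
```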

Option 3: Algorithmic social distance

The third way that Facebook could help would be to reduce the algorithmic bias that presently exists in Facebook. The site primarily shows posts from those with whom you have engaged on Facebook. In other words, the Facebook algorithm creates what some have called a filter bubble, an online news phenomenon that has concerned scholars for decades now. If you are exposed only to people with ideas that are like your own, it leads to political polarization: Liberals get even more extreme in their liberalism, and conservatives get more conservative.

The filter bubble creates an “echo chamber,” where similar ideas bounce around endlessly, but new information has a hard time finding its way in. This is a problem when the echo chamber blocks out corrective or fact-checking information.

If Facebook allowed more news into a person’s newsfeed from a random set of people in their social network, it would increase the chances that new information, alternative information and contradictory information would flow within that network. The average number of friends in a Facebook user’s network is 338. Although many of us have friends and family who share our values and beliefs, we also have acquaintances and strangers who are part of our Facebook network who have diametrically opposed views. If Facebook’s algorithms brought more of those views into our networks, the filter bubble would be more porous.
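As a rough Python sketch (the function and parameter names are invented, and real feed ranking is vastly more complex), making the bubble more porous could mean swapping a small fraction of the engagement-ranked feed for posts drawn at random from across the whole network:

```python
import random

def diversified_feed(ranked_posts, network_posts, mix_ratio=0.2, seed=None):
    """Replace a fraction of the engagement-ranked feed with posts drawn
    at random from the user's whole network, so voices the ranking
    algorithm would normally bury occasionally surface."""
    rng = random.Random(seed)
    n_random = int(len(ranked_posts) * mix_ratio)
    keep = ranked_posts[:len(ranked_posts) - n_random]
    candidates = [p for p in network_posts if p not in keep]
    injected = rng.sample(candidates, min(n_random, len(candidates)))
    return keep + injected
```

A small `mix_ratio` preserves most of what the user already engages with, while still guaranteeing a steady trickle of posts from outside the bubble.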

All of these options are well within the capabilities of the engineers and researchers at Facebook. They would empower users to make better decisions about the information they choose to read and to share with their social networks. As a leading platform for information dissemination and a generator of social and political culture through talk and information sharing, Facebook need not be the ultimate arbiter of truth. But it can use the power of its social networks to help users gauge the value of items amid the stream of content they face.


Jennifer Stromer-Galley, Professor of Information Studies, Syracuse University

This article was originally published on The Conversation. Read the original article.