Introduction
The days when social networks were dismissed as entertaining outlets for teenagers are long gone. Today they have become the primary medium through which we consume news and form our political identities, and where we spend a considerable portion of our time. Consequently, it should come as no surprise that social media are now used to influence and manipulate public opinion and political behaviour around the world.Footnote 1 A growing number of governments, political parties and even some misguided individuals, such as the Macedonian teenagers who fabricated fake stories to draw traffic to their websites during the US elections, are turning to internet platforms to exert influence over the flow of information.Footnote 2 Governments, for example, typically set up teams to manage and influence public opinion online, made up of public officials, volunteers, fake accounts, bots – software applications that run automated tasks over the internet to interact with and mimic human users – or a mix of these. These actors comment on shared posts or create content such as blog posts, YouTube videos, fake news stories or manipulated images.
Granted, fabricated information has always existed. What is new is its contemporary tendency to spread globally at an extraordinary pace. Clickbait headlines and made-up stories typically spread faster than the well-researched articles of established news outlets. As a result, the spread of news intended to mislead readers has become a growing problem for the functioning of our democracies, affecting individuals’ understanding of reality. Indeed, what is most disturbing is not so much the amount of fake news on social media, but where it is purposely directed. In particular, computational propaganda flourished during the 2016 US presidential election and continues to target low-information voters in an effort to determine the outcome of contentious elections.Footnote 3
As regulators lose patience and promise to crack down on the proliferation of online disinformation across the European Union, the foundational question should be how to define the phenomenon we want to tackle. “Fake news” has a variety of definitions, most of which emphasise the breadth of the term. As a result, there is no universal agreement on where the problem lies and how to frame it.Footnote 4 The European Commission defines “fake news” as “intentional disinformation spread via online social platforms, broadcast news media or traditional print.”Footnote 5 A report by FacebookFootnote 6 defines “fake news” as “a catch-all phrase to refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumours, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces.”Footnote 7 The BBC usesFootnote 8 the definition “false information deliberately circulated by hoax news sites to misinform, usually for political or commercial purposes” and distinguishes it from false news,Footnote 9 while the Guardian suggestsFootnote 10 “fictions deliberately fabricated and presented as non-fiction with intent to mislead recipients into treating fiction as fact or into doubting verifiable fact.”Footnote 11 As for academia, the most persuasive definition comes from Allcott and Gentzkow’s paper, in which they define “fake news” as “news articles that are intentionally and verifiably false, and could mislead readers.”Footnote 12
Given the elusiveness surrounding the notion of fake news, how do we put the genie back in the bottle? Here is an initial taxonomy of a few emerging approaches.
Solution 1: state intervention
According to the most prescriptive model, public authorities are expected to police the media environment themselves. However, this approach has been criticised insofar as it entails the creation of “Ministries of Truth”. This is the case with the recently created Global Engagement Center,Footnote 13 which helps the US government ensure that streams of data are not contaminated by state-sponsored misinformation or falsehoods. The European Union has created a similar office, the Disinformation Review.Footnote 14 This is a network of 400-plus experts, journalists, officials, NGOs and think tanks in over 30 countries that reports disinformation articles to EU officials, and then to the public; it is devoted to debunking fake news and Russian propaganda. I submitted a request for documents to the EEAS seeking further information about the EEAS East Stratcom Team,Footnote 15 namely the criteria it uses to identify disinformation/fake news, how it notifies and interacts with entities that are placed on the Disinformation Review,Footnote 16 and how the Task Force selects members for its network of academics and NGOs. The response from the EEASFootnote 17 did not outline clear criteria for labelling disinformation/fake news, and stated that the Task Force does not systematically communicate with any entity listed on the Disinformation Review. Furthermore, the EEAS was unclear about how to join the stakeholder network. The concern is that the criteria appear vague and subjective, and that the review denies due process to the sources of information it lists.
Solution 2: make social media platforms liable for third-party content
An alternative, equally prescriptive form of state intervention consists of imposing penalties on entities that engage not just in the creation but in the mere circulation of “illegal content”. A good example of this is the German Network Enforcement Act,Footnote 18 which entered into force on 1 October 2017 and has been fully enforced since January 2018. Under this controversial law, social media companies have 24 hours to remove “obviously illegal” content, such as hate speech and defamation. Failing that, they face harsh fines. In parallel, social media networks, such as Facebook, Twitter and YouTube, are required to submit public reports detailing how many posts were flagged and how many were removed. Again, failing to do so may lead to an initial fine of €5 million, which could rise to €50 million. The UN Special Rapporteur on Freedom of Expression has written to the German government to warn about the potential consequences of the law. “With these 24 hour and seven day deadlines – if you are a company you are going to want to avoid fines and bad public branding of your platform,” he says. “If there is a complaint about a post you are just going to take it down. What is in it for you to leave it up? I think the result is likely to be greater censorship.”
In February 2017 a draft law was introduced in the Italian Parliament with the declared purpose of countering “fake news”. The law would criminalise the posting or sharing of “false, exaggerated or tendentious news”, imposing fines of up to €5,000 on those responsible. In addition, the law proposed imprisonment for the most serious forms of fake news, such as those that might incite crime or violence, and imposed an obligation on social media platforms to monitor their services for such news. Moreover, the Italian government has created an online portal where people can report hoaxes. The portal prompts users to supply their email address, a link to the misinformation they are reporting and any social network on which they found it. The reports are conveyed to the Polizia Postale, a unit of the state police that investigates cybercrime, which fact-checks them and, if laws have been broken, pursues legal action. Where no laws have been broken, the service will still draw on official sources to rebut false or misleading information.
French President Emmanuel Macron is the latest political leader to hop on the anti-fake news bandwagon. He recently vowed to propose a law – a so-called “emergency legal action” – that would include measures to make the backers of sponsored content transparent and to empower an interim relief judge to remove fake news from the internet, or even block websites altogether, during election periods.
Under this course of action, the legislator and ultimately the courts can either decide what constitutes fake news or outsource this responsibility directly to social media platforms. Unsurprisingly, the latter are uneasy about playing the role of arbiters of truth, all the more so given their pay-as-you-go business model.Footnote 19 As a result, companies like Facebook Germany have hired more human curators and partnered with fact-checking organisations in an attempt to keep misinformation out of people’s feeds. As regards the French solution, there is a clear risk that an incumbent government could constrain the freedom of expression of its opponents, be they citizens writing on their blogs or accredited journalists writing for major publications. Moreover, both systems contain one major flaw: by the time a fake news story is denounced as potentially false, or the interim judge is ready to take action, it is already too late and the story has gone viral. As evidence suggests, labelling a piece of news as fake gives it greater publicity, boosting the story and spreading its reach even further.
Online disinformation is a complex phenomenon that regulators have yet to fully understand; it is therefore too soon to craft regulation that can be effective. Nonetheless, there is something that can be done.
Solution 3: swamping fake news with the truth
There is indeed a third, counterintuitive approach that remains largely overlooked in today’s public debate. Instead of killing the story, the platform surrounds it with related articles so as to provide the reader with more context and alternative views. In other words, the social platform hosting the disputed news alters the environment in which that story is presented and consumed.
That’s exactly what Facebook is doing with its newly released feature offering “Related Articles”Footnote 20 directly beneath the disputed story. This invites “easier access to additional perspectives and information, including articles by third-party fact checkers”.
Facebook is testing this “swamping” approach on a voluntary basis, but it could be mandated by law across virtually all social networks. Although the method leaves open the deeper problem of algorithmic accountability (how exactly are the related articles representing alternative views chosen?), it appears a sensible approach worth experimenting with. Whether exposure to alternative viewpoints combats (or reinforces) misperceptions is ultimately an empirical question, but academic researchFootnote 21 suggests that this design-centred approach could make a real difference in readers’ perceptions. In addition, unlike the prescriptive approach embraced by Germany and France, a function such as “Related Articles” does not necessarily imply any editorial judgement about a story’s truthfulness. Rather, it opens the door to a welcome architecture of serendipity that reminds readers of the beauty of the chance encounters that characterise real life.
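To make the design concrete, consider the deliberately naive sketch below, written in Python. It is purely illustrative: the Article structure, the word-overlap similarity heuristic and the one-article-per-source rule are my own assumptions for the sake of argument, not a description of Facebook’s proprietary ranking.

```python
# Illustrative sketch of a "swamping" selector: given a disputed story,
# pick topically related articles while enforcing source diversity.
# All names and heuristics here are hypothetical, not any platform's API.
from dataclasses import dataclass


@dataclass
class Article:
    title: str
    source: str
    text: str


def tokens(text):
    # Crude tokenizer: lowercase word set, ignoring very short words.
    return {w for w in text.lower().split() if len(w) > 3}


def similarity(a, b):
    # Jaccard overlap of word sets as a stand-in for topical relatedness.
    ta, tb = tokens(a.text), tokens(b.text)
    union = ta | tb
    return len(ta & tb) / len(union) if union else 0.0


def related_articles(disputed, pool, k=3):
    # Rank candidates by topical similarity to the disputed story...
    ranked = sorted(pool, key=lambda c: similarity(disputed, c), reverse=True)
    # ...but keep at most one article per outlet, and none from the
    # disputed story's own source, so the reader sees alternative views
    # rather than an echo of the original claim.
    picked, seen_sources = [], set()
    for cand in ranked:
        if cand.source != disputed.source and cand.source not in seen_sources:
            picked.append(cand)
            seen_sources.add(cand.source)
        if len(picked) == k:
            break
    return picked
```

Even this toy version makes the accountability question tangible: change the similarity heuristic or the diversity rule, and you change which “alternative views” readers actually encounter.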
The emergence of this feature underlines the ability of a platform such as Facebook to seriously engage with a problem as thorny as fake news. It also suggests its readiness to set aside – at least for a while – a business model obsessively focused on increasing user engagement and monetising user data.
The implementation of such an approach across social networks would set an important precedent. It could help close the gap between what is best for users and the dominant advertising business model.
Conclusions
Fake news is a symptom of deeper structural problems in our societies and media environments. To counter it, policymakers need to take into account the underlying, self-reinforcing mechanisms that make this old phenomenon so pervasive today. Only by taking a step back can we examine the vulnerabilities these fake news narratives exploit.
Part of the problem is that tech companies such as Facebook and Google have appropriated – and monopolised – the online advertising market. This has led to a pay-as-you-go business model, in which advertisers are only charged when a page is viewed or clicked on. This ensures that social media companies have no incentive to play the role of arbiters of truth.
Seen from this perspective, the proposed anti-fake news laws focus on the trees rather than the forest. As such, they will not only remain ineffective but also risk aggravating the root causes fuelling the fake news phenomenon.
This is the cautionary tale I would like to offer the High-Level Group on fake news, which has been set up to advise the European Commission on scoping the fake news phenomenon, defining the roles and responsibilities of relevant stakeholders, grasping the international dimension, taking stock of the positions at stake and formulating recommendations.Footnote 22