A. Introduction
Digitalization and new technologies have transformed society in several ways, affecting various aspects of our everyday life. In particular, they have profoundly changed how we communicate, connect, and interact with others, and how we seek, impart, and access information and ideas. Digital platforms and social media have heavily impacted the information and communication spheres. The contemporary media and information ecosystem is characterized by easy access to information, the ability to have one’s voice heard by many without geographical limitations, unprecedented levels of user interaction and engagement, and a range of capabilities facilitating users’ participation in content creation. Services provided by platforms and social media have become essential pathfinders to information and knowledge. They also offer spaces for public debate and scrutiny and for shaping and influencing public opinion, providing opportunities for democratic citizenship while complementing traditional media in this respect. The speed with which information can circulate online, the power of algorithms in moderating or structuring content for consumption, and the broader impact that content made available online—especially illegal and harmful content—can have in political, social, and other terms are also important attributes of the current media and information environment, which is largely platform-dominated.
As digitalization and online platforms reshape information and media markets, regulators in Europe are grappling with the problem of hate speech online. Although hate speech is not new, it has become a matter of growing concern in recent years.Footnote 1 Over and above socio-economic factors—economic and social crises, migration, the COVID-19 pandemic, etcetera—digital innovation has played an important role in accentuating the phenomenon. By expanding the ways in which individuals can exercise the right to freedom of expression and the right to seek, receive and impart information, platforms and social media have concurrently provided a space in which hatred can spread.Footnote 2
Hate speech lacks a universally accepted legally binding definition. In what should be seen as the first European attempt at reaching a common—yet non-binding—understanding of the concept, the Council of Europe (CoE) defined hate speech in 1997 as “all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.”Footnote 3 CoE member states were invited to “establish […] a sound legal framework,” combining civil, criminal and administrative law provisions on hate speech,Footnote 4 and to narrowly circumscribe any interference with freedom of expression.Footnote 5
At the EU level, alongside broader action taken to combat discrimination,Footnote 6 initial efforts to curb hate speech centered on rendering it illegal speech under EU law. These efforts have been complemented in recent years by regulatory action aimed at combatting hatred specifically in the digital environment. Certain EU Member States have also sought national solutions of their own,Footnote 7 and platforms and digital intermediaries have likewise developed policies to address hate speech online. The latter have worked closely with the EU institutions, especially the European Commission (hereinafter Commission), to improve standards and enforcement practices.
Distinct types of, and models for, regulatory intervention to cope with hate speech have thus been introduced. This Article sets out to develop a better understanding of the ways in which the EU seeks to combat hate speech online. The analysis explores the various instruments employed by the EU for that purpose, examining their main characteristics, strengths and weaknesses whilst shedding light on what is clearly a multi-faceted and daunting regulatory task.
B. The EU Regulatory Toolbox for Combatting (Digital) Hate Speech
All forms and manifestations of hatred and intolerance are incompatible with the values of respect for human dignity, freedom, democracy, equality, the rule of law, and respect for human rights upon which the EU is founded. Article 2 of the Treaty on European Union (TEU) enshrines these values, proudly proclaiming that they are common to Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail. EU action against hate speech is a reflection of its attachment to its values and rests on an array of instruments that combine distinct regulatory approaches. Some of these instruments specifically address the digital environment; others do not.
I. Combatting Racist and Xenophobic Hate Speech Through Criminal Law
Council Framework Decision 2008/913/JHA on combating certain forms and expressions of racism and xenophobia by means of criminal law is the Union’s criminal law response to racism and xenophobia.Footnote 8 As the title of the framework decision suggests, this is not an instrument focused on hate speech. Its aim is to approximate Member States’ laws regarding certain offences involving racism and xenophobia. However, by prohibiting certain forms of expression and acts as “racist and xenophobic offences,”Footnote 9 the framework decision also determines the Union’s approach to racist and xenophobic hate speech in addition to hate crime more broadly.Footnote 10
Adopted on the basis of the pre-Lisbon TEU provisions on police and judicial cooperation in criminal matters,Footnote 11 the framework decision defines hatred as “hatred based on race, color, religion, descent or national or ethnic origin.”Footnote 12 This list of grounds is also used to define the offence of racist and xenophobic hate speech. Member States are required to criminalize public “incit[ement] to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, color, religion, descent or national or ethnic origin.”Footnote 13 Intentional conduct is required and the act can be committed by any means,Footnote 14 offline and online. Member States are also required to punish the acts of publicly condoning, denying, or grossly trivializing certain crimes against humanity.Footnote 15 Member States remain free to cover in their national legislation other protected characteristics,Footnote 16 besides race, color, etcetera, and they can also choose from certain optional qualifiers of punishable behavior.Footnote 17 In addition, the framework decision sets forth rules on jurisdiction,Footnote 18 provides that criminal penalties must be effective, proportionate, and dissuasiveFootnote 19 and enables ex officio investigations and prosecutionFootnote 20 —an attempt to address underreporting.
The framework decision does not set a clear benchmark for the definition of the offences at issue and affords Member States a significant degree of flexibility. This has resulted in varied understandings of hate speech as well as divergence and gaps in transposition.Footnote 21 The Commission closely monitors the implementation of the framework decision.Footnote 22 The EU High Level Group (EUHLG) on combating racism, xenophobia, and other forms of intolerance, established in 2016, also assists in implementation, mostly by promoting best practices and cooperation among key stakeholders and by providing practical guidance and training on matters such as effective law enforcement, access to justice, victim support, and the recording of hate speech and hate crime.
The Commission’s European Democracy Action Plan, published in December 2020 to strengthen democratic resilience in the EU, announced further measures against hate speech. Acknowledging that digital hate speech can deter people from expressing their views and participating in online debates, the Commission envisaged an extension of the list of EU crimes under Article 83(1) of the Treaty on the Functioning of the European Union (TFEU) to cover hate speech, including online hate speech, and hate crime.Footnote 23 Article 83(1) TFEU lays down an exhaustive list of areas of “particularly serious crime with a cross-border dimension,” such as terrorism, trafficking in human beings and sexual exploitation of women and children, computer crime, organized crime, etcetera, allowing the European Parliament and the Council to establish, through directives, minimum rules concerning the definition of criminal offences and sanctions applicable in all Member States. Article 83(1) TFEU adds that on the basis of “developments in crime” the Council, after obtaining the consent of the European Parliament, may unanimously adopt a decision identifying additional areas of such particularly serious crime, enabling the adoption of secondary legislation on common standards.
In December 2021, following an external study that mapped Member States’ laws against hate speech and hate crime,Footnote 24 the Commission presented an initiative based on Article 17(1) TEUFootnote 25 for a Council decision extending the areas of crime covered by Article 83(1) TFEU to include hate speech and hate crime.Footnote 26 The initiative, the Commission argued, should be seen as an effective means of comprehensively addressing the challenges posed by hate speech and hate crime, going beyond the protected grounds covered by the framework decision. However, careful attention should be paid to the fundamental rights repercussions of any action taken, with due respect for the principle of proportionality and the essence of free speech.Footnote 27 Given Member States’ divergent and fragmented approaches to the criminalization of hate speech and hate crime thus far, the unanimity requirement in the Council may prove an insurmountable hurdle.
In March 2022, the Commission proposed new legislation on combatting violence against women and domestic violence on the basis of Articles 82(2)Footnote 28 and 83(1) TFEU, which also addresses cyber incitement to violence or hatred.Footnote 29 The Commission observed that “the increase in internet and social media usage has led to a sharp rise in public incitement to violence and hatred, including based on sex or gender.”Footnote 30 It also noted that “the easy, fast and broad sharing of hate speech through the digital world is reinforced by the online disinhibition effect,” given that “the presumed anonymity on the internet and sense of impunity reduce people’s inhibition to engage in such speech.”Footnote 31 The proposal criminalizes cyber incitement to violence or hatred based on sex or gender, namely “the intentional conduct of inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to sex or gender, by disseminating to the public material containing such incitement by means of information and communication technologies.”Footnote 32 Given its legal basis, the instrument put forward is a directive to be adopted in accordance with the ordinary legislative procedure, which is expected to facilitate agreement in the Council and the European Parliament.Footnote 33
II. Banning Hate Speech in Audiovisual Media Services and on Video-Sharing Platforms
As part of the television broadcasting policy of the then European Economic Community, the 1989 Television Without Frontiers Directive (TWFD) required Member States to ensure that broadcasts by operators under their jurisdiction did not contain “any incitement to hatred on grounds of race, sex, religion or nationality.”Footnote 34 The TWFD thus listed sex as a ground for protection against hate speech but did not address color, descent, or ethnic origin, as Framework Decision 2008/913/JHA subsequently would. The rule has been retained in all subsequent amendments of the directive, through to the Audiovisual Media Services Directive (AVMSD),Footnote 35 and rendered applicable to all audiovisual media services coming within the purview of the AVMSD: Traditional broadcasting, in other words linear audiovisual media services, and “television-like” services, in other words on-demand or non-linear audiovisual media services,Footnote 36 such as Netflix or Hulu. In Mesopotamia Broadcast and Roj TV, the CJEU construed “incitement to hatred” as referring to “an action intended to direct specific behavior and […] a feeling of animosity or rejection with regard to a group of persons.”Footnote 37
The revised AVMSD, adopted in 2018, modified the hate speech provision and extended the list of protected grounds.Footnote 38 Article 6(1)(a) of the AVMSD now requires Member States to ensure “by appropriate means” that audiovisual media services provided by media service providers under their jurisdiction do not contain “any incitement to violence or hatred against a group of persons or a member of a group” based on any of the grounds referred to in Article 21 of the Charter of Fundamental Rights of the European Union (CFR), namely sex, race, color, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age, or sexual orientation.Footnote 39 The fight against hate speech in audiovisual media services is firmly embedded within a fundamental rights context: State measures must be necessary and proportionate, and must respect the rights and observe the principles enshrined in the CFR.Footnote 40
With the latest revision of the AVMSD, the scope of the directive has also been extended to cover “video-sharing platforms” (VSPs),Footnote 41 with specific rules introduced in their regard. Particularly as regards hate speech, the AVMSD requires Member States to ensure that VSPs take appropriate measures not only against the dissemination of programs, user-generated videos, and audiovisual commercial communications that infringe Framework Decision 2008/913/JHA, but also against content containing “incitement to violence or hatred against a group of persons or a member of a group” based on any of the grounds referred to in Article 21 CFR.Footnote 42 Relevant measures must be “practicable and proportionate,” taking into account the size and nature of the VSP service concerned.Footnote 43 They may range from safeguards in VSPs’ terms and conditions to transparent and user-friendly content reporting and flagging mechanisms, accountability tools, effective complaint-handling and resolution procedures, and media literacy measures.Footnote 44 National regulatory authorities are entrusted with the task of assessing the measures taken,Footnote 45 and provision has to be made for judicial reviewFootnote 46 and out-of-court redress mechanisms for the settlement of disputes between users and VSPs.Footnote 47 The use of co-regulation is particularly promoted,Footnote 48 with the Commission assuming a key role in helping VSPs share best practice on the basis of co-regulatory codes of conduct.Footnote 49 Self-regulatory codes, broadly accepted by the main stakeholders, are also encouraged at Union level.Footnote 50 The provisions make clear that fighting hate speech on VSPs is a shared responsibility and implicates a broad range of actors, including the VSPs themselves. However, according to a 2021 study on the implementation of the revised AVMSD, VSPs have not been particularly consistent in their approach to controls on hate speech, mostly due to the fact that “definitions and guidance for users vary widely.”Footnote 51
Whilst the directive seeks to keep EU audiovisual media law up to date with technological developments, its principal rule remains freedom of reception: Member States are not allowed to restrict retransmissions of audiovisual media services from other Member States on their territory.Footnote 52 Audiovisual media services are regulated in the country of origin, in other words the Member State from which they emanate, not the country of destination. However, Article 3(2) of the AVMSD allows Member States to derogate provisionally from the country-of-origin principle, subject to substantive and procedural conditions, enabling inter alia restriction of the cross-border transmission of an audiovisual media service where this “manifestly, seriously and gravely” infringes the hate speech requirements set forth in Article 6(1)(a).Footnote 53 The derogation must be notified to the media service provider concerned, the Member State of origin and the Commission. The CJEU has also interpreted the AVMSD in ways that provide Member States with significant room for maneuver to restrict audiovisual media services on grounds of public order,Footnote 54 a concept that can accommodate hate speech concerns.
III. Tackling ‘Illegal’ Content Online
Arrangements similar to those of the AVMSD are found in Directive 2000/31/EC, also known as the e-Commerce Directive,Footnote 55 whose aim is to contribute to the proper functioning of the internal market by ensuring the free movement of information society services between the Member States.Footnote 56 Although Member States may not, for reasons falling within the coordinated field of the directive, restrict the freedom to provide information society services from another Member State,Footnote 57 derogations are allowed under certain conditions in respect of “a given information society service,”Footnote 58 provided that both the Commission and the Member State where the provider of the service in question is established have been notified. These include measures necessary for reasons of “public policy, in particular the prevention, investigation, detection and prosecution of criminal offences, including […] the fight against any incitement to hatred on grounds of race, sex, religion or nationality.”Footnote 59 Interestingly, the e-Commerce Directive retains the hate speech wording of the original TWFD.
Until the enactment of the Digital Services Act,Footnote 60 Directive 2000/31/EC was the principal framework at EU level on issues pertaining to the liability of digital intermediaries and therefore a key point of reference for action taken to remove and disable access to illegal content online, including hate speech. The basic rule set forth in the directive was that digital intermediaries are exempted from liability so long as they transmit or store information in a merely “technical, automatic and passive” manner, which implies that they have “neither knowledge of nor control over the information which is transmitted or stored,” and provided that they take expeditious action on infringements after obtaining knowledge or becoming aware of them.Footnote 61 Article 15(1) of the e-Commerce Directive further precluded Member States from imposing on digital intermediaries a general obligation to monitor the information they transmit or store or to actively seek facts or circumstances that may indicate illegal activity.Footnote 62 Intermediaries were accordingly under no general duty to monitor content or to actively seek out instances of infringement.
The Commission’s 2017 Communication, Tackling Illegal Content Online: Towards an Enhanced Responsibility of Online Platforms, signaled the EU institutions’ wish to revisit long-standing understandings of the obligations of digital intermediaries in relation to illegal content online.Footnote 63 Stressing the “significant societal responsibility” of online platforms, which mirrored arguments revolving around the public functions of online intermediaries as enablers of speech and gatekeepers of information,Footnote 64 the Commission called upon digital intermediaries to “decisively step up their actions” aimed at detecting and removing illegal content quickly and efficiently, including by means of “voluntary, proactive measures.”Footnote 65 The Commission emphasized that proactive measures should not automatically entail losing the benefit of the “safe harbor” provisions of the e-Commerce DirectiveFootnote 66 and underlined the need for bolstering cooperation and investment in, and use of, automated systems,Footnote 67 acknowledging that proactive measures can rest on automation.Footnote 68
Commission Recommendation 2018/334 on measures to effectively tackle illegal content online sought to make progress in the field,Footnote 69 despite its non-binding nature. Tackling illegal content online would nonetheless prove a key component of the unprecedented reforms introduced by the EU by means of the Digital Services Package. In particular, the Digital Services Act (DSA), which supplements the e-Commerce Directive, aspires to revolutionize the provision of online intermediary services and platform oversight in the Union, reflecting the Union’s resolve to engage in genuine platform regulation.Footnote 70 The DSA applies to providers who offer intermediary services in the Union, irrespective of their place of establishment.Footnote 71 It lays down horizontal due diligence rules whose regulatory intensity is graduated, depending on the type of intermediary, its size, and its impact on society. It targets “providers of intermediary services,”Footnote 72 the broadest category of operators falling within the scope of the rules enacted, which also apply to “providers of hosting services,”Footnote 73 “online platforms,”Footnote 74 “very large online platforms” (VLOPs), and “very large online search engines” (VLOSEs).Footnote 75 The DSA retains the e-Commerce Directive’s liability exemptions for intermediaries and its prohibition on imposing general monitoring obligations, and acknowledges that voluntary own-initiative investigations, measures aimed at detecting, identifying, and removing or disabling access to illegal content, and any measures taken to comply with EU law requirements do not entail the loss of the liability protections.Footnote 76
A set of substantive rules is then laid down to tackle illegal content,Footnote 77 which is broadly defined as “any information that in itself or in relation to an activity […] is not in compliance with Union law or the law of any Member State which is in compliance with Union law, irrespective of the precise subject matter or nature of that law.”Footnote 78 This includes illegal hate speech.Footnote 79 The DSA provides rules framing transparency and due diligence obligations concerning operators’ content moderation policies and practice,Footnote 80 highlighting that the relevant obligations should aim in particular to guarantee different public policy objectives such as the safety and trust of users, including users at particular risk of being subject to hate speech.Footnote 81 More stringent transparency duties are imposed on online platforms, VLOPs and VLOSEs.Footnote 82 Providers of hosting services are also required to put in place notice-and-action mechanisms facilitating the submission of “sufficiently precise and adequately substantiated notices”Footnote 83 of the presence of illegal content, and to justify any restrictions imposed, which may range from restricting the visibility of content deemed to be illegal, removing it, disabling access to it, or demoting it, to suspending or terminating the provision of the service or the user’s account.Footnote 84

Obligations become more stringent for online platforms, VLOPs and VLOSEs. Online platforms need to take the necessary technical and organizational measures to prioritize, process, and decide upon notices submitted by trusted flaggers without delay.Footnote 85 They also need to provide for internal complaint-handling systems,Footnote 86 with users also being allowed to resort to certified out-of-court dispute procedures to seek redress,Footnote 87 and to introduce measures to protect against the misuse of their services.Footnote 88 VLOPs and VLOSEs are additionally required, at least once a year, to diligently identify, analyze, and assess any “systemic risks” stemming from the design or functioning of their service, including their algorithmic systems, or from the use made of their services.Footnote 89 Systemic risks may involve the dissemination of illegal content, as well as any negative effects—actual or foreseeable—on inter alia the exercise of fundamental rights, civic discourse, public security, the protection of minors, and individual physical and mental well-being.Footnote 90 The relevant provisions create ample room for hate speech to come under the DSA rubric of “systemic risk.”Footnote 91 Further, VLOPs and VLOSEs are mandated to consider the ways in which their content moderation systems influence systemic risksFootnote 92 and to put in place reasonable, proportionate, and effective mitigation measures tailored to the specific systemic risks identified.Footnote 93 Mitigation measures may involve actions such as adapting and applying terms and conditions as necessary, or adjusting content moderation systems and the relevant decision-making processes and resources, including content moderation staff and their training and expertise, with regard in particular to the speed and quality of the notice processing they carry out.Footnote 94 In this regard, the DSA, which recognizes the adoption of voluntary codes of conduct as a means to implement its provisions,Footnote 95 identifies risk mitigation measures against illegal content as an area that should receive consideration through self- and co-regulatory instruments,Footnote 96 and makes express reference to the Code of Conduct on Countering Illegal Hate Speech Online.Footnote 97
Signed in May 2016 by major digital intermediaries,Footnote 98 the Code of Conduct on Countering Illegal Hate Speech Online is the result of a process facilitated by the Commission in accordance with Article 16 of the e-Commerce Directive, which encourages the drawing up of codes of conduct at Union level as a contribution to the implementation of the directive.Footnote 99 The Code is based on the premise that proper enforcement of Framework Decision 2008/913/JHA must be complemented by action taken by digital intermediaries to ensure that online hate speech is dealt with expeditiously upon receipt of a valid, in other words precise and properly substantiated, notification. The Code requires operators to put in place their own rules and community standards prohibiting hate speech, as well as clear and effective procedures for reviewing notifications on the basis of such rules and community standards and, “where necessary,” Member States’ laws transposing Framework Decision 2008/913/JHA into their legal orders. Parties to the Code commit to reviewing the majority of flagged content in less than 24 hours and to removing or disabling access to it, if required. The Code also contains provisions on sharing information with Member States on notifications received and how they were dealt with; engaging in partnerships and training activities with civil society to establish a network of trusted flaggers of hate speech; providing regular training to staff; sharing best practices; and supporting independent counter-narratives and educational programs which foster positive narratives. Compliance with the Code is to be regularly reviewed, with assessment taking place through a structured process of periodic monitoring involving a host of civil society organizations across the Union, which act as trusted flaggers, along with self-reporting by the Code’s signatories to the Commission. Findings from the 7th monitoring round in 2022Footnote 100 show that while the average share of notifications reviewed within 24 hours fell from 81% in 2021 to 64.4% in 2022, the average removal rate of 63.6% was similar to the 2021 rate of 62.5%, though lower than the 71% recorded in 2020.
The Code has been praised for fostering mutual learning and synergies between digital intermediaries, civil society, and Member States’ authorities,Footnote 101 but in truth concrete standards are set only with regard to the speed with which flagged content should be addressed. There is a clear overemphasis on ensuring compliance with digital intermediaries’ own rules and standards, even though evaluation, as the Commission has indicated, rests on assessing flagged content in the light of domestic laws transposing Framework Decision 2008/913/JHA.Footnote 102 In addition, no detailed definition of what constitutes a precise and properly substantiated notification is given, and there are no procedural safeguards in place guaranteeing the provision of systematic feedback to users. According to data from the 7th monitoring round, operators provided feedback to notifiers in 66.4% of cases,Footnote 103 compared to 60.3% in 2021, but no data has been disclosed concerning feedback to users whose content has been removed or the provision of information on the remedies available.Footnote 104 Notably, the monitoring process does not include an evaluation of the intermediaries’ decision-making on what content is removed and what content is not.
C. Definitional and Automation Hindrances
Variation in regulatory instruments and techniques is a key feature of the EU’s action against hate speech. A hard law approach grounded in criminal law, applied to racist and xenophobic hate speech through Council Framework Decision 2008/913/JHA, has been combined with a special liability regime for digital intermediaries, originally set forth in the e-Commerce Directive, and with regulation specifically targeting audiovisual media service providers and video-sharing platforms through the AVMSD. There are also soft law measures, such as the non-binding Commission Recommendation 2018/334 on measures to effectively tackle illegal content online and the Code of Conduct on Countering Illegal Hate Speech Online, which, given that it was facilitated by the Commission and that the Commission is also involved in monitoring its implementation, verges on transnational co-regulation. With the adoption of the DSA, EU regulatory efforts have intensified substantially, resulting in the imposition of asymmetric due diligence obligations on digital intermediaries. With regard to the measures that VLOPs and VLOSEs must take to mitigate systemic risks, the DSA also underlines the importance of the Code and of the time benchmark it sets for the processing of valid notifications and the removal of hate speech.
Variation in EU regulatory approaches vis-à-vis hate speech goes hand in hand with varied understandings of hate speech. The e-Commerce Directive did not define “illegal information or activity” when laying out its liability protections, and the DSA likewise does not provide a substantive definition of illegal content, but rather cross-refers to EU law and Member States’ laws to capture illegality. At the same time, regulatory instruments such as Council Framework Decision 2008/913/JHA and the AVMSD advance different definitions of hate speech and build flexibilities into EU law that support and intensify differences in national legislation. Indeed, research has confirmed that Member States have diverging rules on matters pertaining to hate speech: Some Member States define hate speech on the basis of a limited number of protected attributes, while others have more extensive lists of protected characteristics, and while the lists of protected grounds may be exhaustive in some Member States, they are open-ended in others.Footnote 105 It should therefore come as no surprise that in 2017, in a motion for a resolution, the European Parliament called on the Commission to explore the feasibility of establishing a common legal definition of hate speech in the EU.Footnote 106
This lack of a common legal definition means that digital intermediaries will often be required to comply with different legal approaches to hate speech deriving from EU law and from national rules. At the same time, they have their own say on what expression is permissible and impermissible on their services. Research suggests the absence of a common approach here, too.Footnote 107 Key players have developed detailed policies on what they consider hate speech, and they have also expanded the list of protected grounds beyond those identified in instruments such as Council Framework Decision 2008/913/JHA. Thus, characteristics such as veteran status, immigration status, socio-economic status, caste, age, or weight stand alongside protected attributes, including race, ethnicity, religion or religious affiliation, national origin, sex or gender, sexual orientation, disability, or disease, which generally feature prominently in operators’ policies. Other operators have so far refrained from identifying specific protected characteristics.
The fact that digital intermediaries may treat a wider range of content as hate speech, either because they define hate speech through more expansive lists of protected characteristics or because they proscribe hate speech in general terms that invite a wide interpretation, is problematic. That content which is not illegal under EU law and/or Member States’ rules can still be outlawed through private enforcement has important implications for freedom of expression and freedom of information. The fact that there may be a thin line between free speech and hate speech caught by the algorithm is also a source of concern. In the context of steps taken to improve the detection of hate speech—in light of the short time windows imposed for content takedowns by instruments such as the Code of Conduct on Countering Illegal Hate Speech Online—digital intermediaries are increasingly resorting to technology and automation.Footnote 108 According to data from the latest monitoring round of the implementation of the Code, between April and June 2022, Facebook took action on 13.5 million items of hate speech, of which 95.6% were detected proactively.Footnote 109 Facebook claims to have “pioneered the use of artificial intelligence technology to remove hateful content proactively, before users report it. . .”Footnote 110 Instagram discloses a similarly high level of proactive detection, at 91.2%.Footnote 111
Automated detection mechanisms may now be common, but they tend to be context-blind.Footnote 112 Because hate speech detection is most often a contextual exercise, automated tools may lead to mislabeling, over-detection, or under-detection. Reliance on a combination of machines and human moderators is essential to assuage concerns over both over-inclusive and under-inclusive approaches to hate speech. The DSA takes some steps towards improving transparency in the use of automation by digital intermediaries.Footnote 113 It requires providers of hosting services to state when automated means are employed in the processing of notices of illegal content on their service and in the related decision-making,Footnote 114 and to include “information on the use made of automated means” in their “statement of reasons” for any restrictive measures taken against illegal content, including information on whether decisions were “taken in respect of content detected or identified using automated means.”Footnote 115 Finally, it precludes online platforms from relying solely on automation when handling complaints.Footnote 116
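To make the point about context-blindness concrete, the following minimal sketch illustrates, in Python, how a purely lexical filter of the kind described above can misfire. It is a hypothetical example with an invented term list and invented posts, not a representation of any platform’s actual detection system: the filter flags counter-speech that merely quotes abusive language (a false positive) while missing coded incitement that avoids listed terms (a false negative), which is why automated triage is best paired with human review.

```python
# Hypothetical, simplified keyword filter: illustrative only, not any
# platform's actual moderation system. Term list and posts are invented.

BLOCKLIST = {"vermin", "parasites"}  # placeholder terms for this sketch


def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring all context."""
    words = {w.strip('.,!?"\'').lower() for w in post.split()}
    return bool(words & BLOCKLIST)


posts = [
    # Counter-speech quoting abusive language gets flagged: a false positive.
    'Calling refugees "vermin" is dehumanizing and must be condemned.',
    # Coded incitement with no blocklisted term is missed: a false negative.
    "You know what should be done about those people. No mercy.",
]

for post in posts:
    print("flag" if naive_flag(post) else "leave up", "->", post)


# A less rights-intrusive workflow routes flagged items to human review
# instead of removing them automatically.
def moderate(post: str) -> str:
    return "send to human review" if naive_flag(post) else "no automated action"
```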
D. Conclusion
Hate speech undermines the very foundations of a democratic and pluralistic society and the common values enshrined in Article 2 TEU. It is not unique to the online world, but digitalization has brought with it unprecedented challenges with regard to its volume, dissemination and reach. In recognition of the societal impact hate speech has in terms of eroding social cohesion, solidarity and trust between members of a society, the EU, as a Union of values,Footnote 117 has strengthened its efforts to combat digital hatred in recent years. The EU regulatory toolbox against hate speech presently includes instruments with varying degrees of regulatory breadth and intensity, while the EU is also a staunch supporter of voluntary industry codes of conduct and co-regulation. However, the mechanisms that ensue lack a common definition of hate speech. Definitions vary at both the EU and national level, and digital intermediaries are advancing their own policies and standards on what is treated as hate speech on their services. This entails a complex multi-level approach to hate speech and the interplay of different regimes in terms of the rules set forth, their nature and scope, and models of regulation. Further complexity stems from the fact that the detection, removal and disabling of hate speech increasingly rely on automated means, which may be a poor fit for a task that tends to be contextual. This lack of contextual understanding can lead to “false positives” and “false negatives.” While the former have a negative bearing on free speech, the latter may impact individual rights, human dignity, equality and non-discrimination.
In such a context, it becomes essential to ensure that fundamental rights protection is not fully outsourced or automated. According to the OSCE 2021 Policy Manual on Artificial Intelligence and Freedom of Expression, strong automation transparency frameworks should be combined with “human rights due diligence” as part of human rights-compliant content governance policies.Footnote 118 The Policy Manual advocates against requiring intermediaries to implement proactive measures based on automated tools. Instead, it recommends introducing an obligation on digital intermediaries to perform robust human rights impact assessments vis-à-vis their algorithmic content moderation, whilst noting the importance of inclusive and participatory processes for designing and implementing automated systems.
Still, the EU may need to mobilize a broader range of policies and instruments to cope with hate speech effectively. No society is immune to hate, but whether such hatred is tamed or, instead, diffused, bolstered, and consolidated also depends on measures taken to address the underlying tensions which provide the fertile ground in which hate speech can flourish. There is much to be gained from EU policy in the spheres of education, culture, cohesion, research, or immigration, including actions aimed at decoding and mitigating the hate narrative, fostering social inclusion and resilience, and supporting the integration and empowerment of vulnerable and marginalized groups. Support measures such as the Citizens, Equality, Rights and Values Programme,Footnote 119 Horizon Europe,Footnote 120 Creative Europe,Footnote 121 the European Social Fund Plus,Footnote 122 the European Regional Development Fund,Footnote 123 and the Asylum, Migration and Integration FundFootnote 124 could make an important contribution in this respect.
Acknowledgements
An earlier version of this article was presented at the e-Conference “EU Values, Diversity and Intercultural Dialogue: Enhancing the Debate”, co-organized by the Jean Monnet Project EU VaDis (https://jmpeuvadis.uom.gr/) with the Hellenic Association for European Law (HAEL), in collaboration with the Center for Research on Democracy and Law, and with the support of the European Parliament Liaison Office in Greece, on 21-23 April 2021. I would like to thank the organizers for their kind invitation and the anonymous reviewers at GLJ for their helpful comments.
Competing Interest
The author declares none.
Funding Statement
No specific funding has been declared for this article.