15.1 Introduction
Our lives are increasingly inhabited by technological tools that help us carry out our work, connect with our families and relatives, and enjoy leisure activities. Credit cards, smartphones, trains, and so on are all tools that we use every day without noticing that each of them works only through its internal ‘code’. These objects embed software programmes, and each piece of software is based on a set of algorithms. Thus we may affirm that most (if not all) of our experiences are filtered by algorithms each time we use such ‘coded objects’.Footnote 1
15.1.1 A Preliminary Distinction: Algorithms and Soft Computing
According to computer science, algorithms are automated decision-making processes to be followed in calculations or other problem-solving operations, especially by a computer.Footnote 2 Thus an algorithm is a detailed and finite series of instructions which can be processed through a combination of software and hardware tools: algorithms start from an initial input and, through a sequence of commands that may involve several activities, such as calculation, data processing, and automated reasoning, reach a prescribed output. The achievement of the solution depends upon the correct execution of the instructions.Footnote 3 However, it is important to note that, contrary to the common perception, algorithms are neither always efficient nor always effective.
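By way of illustration only (the example is mine, not drawn from the chapter's sources), a minimal sketch in Python of what such a finite series of instructions looks like is Euclid's algorithm for the greatest common divisor: an initial input is transformed, step by step, into a prescribed output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite series of instructions that
    transforms an initial input (a, b) into a prescribed output."""
    while b != 0:
        a, b = b, a % b   # each step strictly reduces the problem
    return a

print(gcd(252, 105))  # -> 21
```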
From the efficiency perspective, algorithms must be able to execute their instructions without consuming an excessive amount of time and space. Although technological progress has allowed for the development of increasingly powerful computers, provided with more processors and larger memory, when an algorithm executes instructions whose cost or output exceeds the time and space available, its very ability to solve the problem is called into question.
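A hypothetical illustration of this efficiency concern: the same problem can be solved by an algorithm whose cost explodes with the input size or by one that remains tractable. The sketch below (invented for illustration) contrasts an exponential-time and a linear-time computation of the same value.

```python
import time

def fib_naive(n: int) -> int:
    # Exponential time: the number of recursive calls roughly doubles
    # with each increase of n, quickly exhausting the time available.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n: int) -> int:
    # Linear time, constant space: the same output with far fewer steps.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for func in (fib_iterative, fib_naive):
    start = time.perf_counter()
    func(32)
    print(func.__name__, f"{time.perf_counter() - start:.4f}s")
```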
As a consequence, from the effectiveness perspective, algorithms may not always reach the exact or the best possible solution, as they may include a level of approximation which may range from a second-best solution,Footnote 4 to a very low level of accuracy. In this case, computer scientists use the term ‘soft computing’ (i.e., the use of algorithms that are tolerant of imprecision, uncertainty, partial truth, and approximation), because the problems being addressed may not be solvable at all, or may be solvable only through an excessively time-consuming process.Footnote 5
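As an illustrative sketch of this trade-off (again my example, not the chapter's), a greedy heuristic for the computationally hard set-cover problem returns an answer quickly, but that answer is only guaranteed to be approximate, not optimal.

```python
def greedy_set_cover(universe: set, subsets: list) -> list:
    """Greedy approximation: fast, but the cover it returns may use
    more subsets than the optimal solution would."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the subset covering the most still-uncovered elements.
        best = max(subsets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover

universe = set(range(1, 11))
subsets = [{1, 2, 3, 8}, {4, 5, 6, 7}, {8, 9, 10}, {1, 4, 8, 10}]
print(len(greedy_set_cover(universe, subsets)))  # a valid, possibly non-optimal cover
```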
Accordingly, the use of these types of algorithms makes it possible to provide solutions to hard problems, though these solutions, depending on the type of problem, may not always be the optimal ones. Given the ubiquitous use of algorithms processing our data and consequently affecting our personal decisions, it is important to understand on which occasions we may (or should) not fully trust the algorithm and add a human in the loop.Footnote 6
15.1.2 The Power of Algorithms
According to Neyland,Footnote 7 we may distinguish between two types of power: one exercised by algorithms, and one exercised across algorithms. The first one is the traditional one, based on the ability of algorithms to influence and steer particular effects. The second one is based on the fact that ‘algorithms are caught up within a set of relations through which power is exercised’.Footnote 8 In this sense, it is possible to affirm that the groups of individuals that at different stages play a role in the definition of the algorithm share a portion of power.
In practice, one may distinguish between two levels of analysis. Under the first one, for instance, when we type a query into a search engine, the search algorithm activates and identifies the best results related to the keywords inserted, providing a ranked list of results. These results are based on a set of variables that depend on the context of the keywords, but also on the trustworthiness of the source,Footnote 9 on the individual's previous search history, and so forth. The list of results available will then steer the decisions of the individual and affect his/her interpretation of the information searched for. Such power should not be underestimated, because the algorithm has the power to restrict the options available (i.e., omitting some content because it is evaluated as untruthful or irrelevant) or to make it more likely that a specific option is selected. If this can be qualified as the added value of algorithms, able to correct the flaws of human reasoning, which include myopia, framing, loss aversion, and overconfidence,Footnote 10 it also shows the power of the algorithm over individual decision-making.Footnote 11
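A highly simplified, hypothetical sketch of such ranking logic may help (real ranking functions are proprietary and far more complex; the weights and signals below are invented): each candidate result is scored as a weighted combination of keyword relevance, source trust, and the user's search history, and the resulting ordering is what steers the user's choice.

```python
def rank_results(results, query_terms, user_history, weights=(0.6, 0.3, 0.1)):
    """Toy ranking: combine keyword overlap, source trust, and personal
    history into a single score, then sort. The weights are arbitrary."""
    w_rel, w_trust, w_hist = weights

    def score(r):
        relevance = len(query_terms & set(r["text"].lower().split())) / len(query_terms)
        history_boost = 1.0 if r["domain"] in user_history else 0.0
        return w_rel * relevance + w_trust * r["trust"] + w_hist * history_boost

    return sorted(results, key=score, reverse=True)

results = [
    {"text": "climate change facts", "domain": "encyclopedia.example", "trust": 0.9},
    {"text": "climate change hoax exposed", "domain": "blog.example", "trust": 0.2},
]
ranked = rank_results(results, {"climate", "change"}, {"encyclopedia.example"})
print([r["domain"] for r in ranked])  # the more trusted, familiar source comes first
```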
Under the second level of analysis, one may widen the view by taking into account the criteria that are used to identify the search results, the online information that is indexed, the computer scientists that set those variables, the company that distributes the algorithm, the public or private company that uses the algorithm, and the individuals that may steer the selection of content. All these elements have intertwining relationships that reveal a more distributed allocation of power – and, as a consequence, a subsequent quest for shared accountability and liability systems.
15.1.3 The Use of Algorithms in Content Moderation
In this chapter, the analysis will focus on those algorithms that are used for content detection and control over user-generated platforms, the so-called content moderation. Big Internet companies have always used filtering algorithms to detect and classify the enormous quantity of data uploaded daily. Automated content filtering is not a new concept on the Internet. Since the first years of Internet development, many tools have been deployed to analyse and filter content, among which the most common and well known are those adopted for spam detection or hash matching. For instance, spam detection tools identify content received in one’s email address, distinguishing between clean emails and unwanted content on the basis of certain sharply defined criteria derived from previously observed keywords, patterns, or metadata.Footnote 12
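A minimal sketch of this kind of rule-based filtering may be useful (the keyword list and threshold below are invented for illustration and do not reflect any real filter):

```python
SPAM_KEYWORDS = {"free", "winner", "prize", "click", "urgent"}  # illustrative list

def is_spam(email_text: str, threshold: int = 2) -> bool:
    """Flag an email as spam when it contains at least `threshold`
    known spam keywords -- the sharply defined criteria mentioned above."""
    words = set(email_text.lower().split())
    return len(words & SPAM_KEYWORDS) >= threshold

print(is_spam("Congratulations, click here for your free prize"))   # True
print(is_spam("Minutes of yesterday's project meeting attached"))   # False
```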
Nowadays, algorithms used for content moderation are widely diffused, having the advantage of scalability. Such systems promise to make the process much easier, quicker, and cheaper than would be the case when using human labour.Footnote 13
For instance, LinkedIn published an update of the algorithms used to select the best matches between employers and potential employees.Footnote 14 The first steps of the content moderation process are worth describing: first, the algorithms check and verify the compliance of the published content with the platform rules (leading to a potential downgrade of visibility or a complete ban in case of non-compliance). Then, the algorithms evaluate the interactions triggered by the content posted (such as sharing, commenting, or reporting by other users). Finally, the algorithms weigh such interactions, deciding whether the post will be demoted for low quality (low interaction level) or disseminated further for its high quality.Footnote 15
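These three steps can be sketched, in heavily simplified form, as the following hypothetical pipeline (function names, thresholds, and signals are invented; this is not LinkedIn's actual implementation):

```python
BANNED_TERMS = ["scam"]   # placeholder for the platform's policy rules

def moderate_post(post: dict) -> str:
    """Toy three-step pipeline: (1) policy check, (2) collect interaction
    signals, (3) weigh them to demote or amplify the post."""
    # Step 1: compliance with platform rules.
    if any(term in post["text"].lower() for term in BANNED_TERMS):
        return "removed or downgraded"

    # Step 2: interactions triggered by the post.
    engagement = post["shares"] + post["comments"] + post["likes"]

    # Step 3: weigh interactions to decide distribution.
    quality = engagement - 5 * post["reports"]   # reports weigh heavily (arbitrary factor)
    if quality < 0:
        return "demoted (low quality)"
    return "disseminated further" if quality > 50 else "kept"

post = {"text": "We are hiring data engineers!",
        "shares": 12, "comments": 30, "likes": 40, "reports": 1}
print(moderate_post(post))   # -> disseminated further
```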
As the example of the LinkedIn algorithm clearly shows, the effectiveness of the algorithm depends on its ability to accurately analyse and classify content in its context and potential interactions. The capability to parse the meaning of a text is highly relevant for making important distinctions in ambiguous cases (e.g., when differentiating between contemptuous speech and irony).
For this task, the industry has increasingly turned to machine learning to train its programmes to become more context sensitive. Although there are high expectations regarding the abilities of content moderation tools, one should not underestimate the risks of overbroad censorship,Footnote 16 violation of the freedom of speech principle, as well as biased decision-making against minorities and non-English speakers.Footnote 17 The risks are even more problematic in the case of hate speech, an area where the recent interventions of European institutions are pushing IT companies toward greater human and technological investments, as detailed in the next section.
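A minimal sketch of such a machine-learning approach, assuming the scikit-learn library and a toy labelled dataset (real systems are trained on vastly larger, carefully curated corpora; the examples and labels below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = harmful, 0 = acceptable (invented examples).
texts = [
    "I will hurt you and your family",
    "people like you should disappear",
    "what a lovely day at the park",
    "great match last night, well played",
]
labels = [1, 1, 0, 0]

# The vectoriser turns text into word-frequency features; the classifier
# learns which features correlate with the 'harmful' label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["you should disappear", "well played yesterday"]))
```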
15.2 The Fight against Hate Speech Online
Hate speech is not a new phenomenon. Digital communication may be qualified only as a new arena for its dissemination. The features of social media pave the way to a wider reach of harmful content. ‘Sharing’ and ‘liking’ lead to a snowball effect, which allows the content to have a ‘quick and global spread at no extra cost for the source’.Footnote 18 Moreover, users see in the pseudonymity allowed by social media an opportunity to share harmful content without bearing any consequence.Footnote 19 In recent years, there has been a significant increase in the availability of hate speech in the form of xenophobic, nationalist, Islamophobic, racist, and anti-Semitic content in online communication.Footnote 20 Thus the dissemination of hate speech online is perceived as a social emergency that may lead to individual, political, and social consequences.Footnote 21
15.2.1 A Definition of Hate Speech
Hate speech is generally defined as speech ‘designed to promote hatred on the basis of race, religion, ethnicity, national origin’ or other specific group characteristics.Footnote 22 Although several international treaties and agreements do include hate speech regulation,Footnote 23 at the European level, such an agreed-upon framework is still lacking. The point of reference available until now is the Council Framework Decision 2008/913/JHA on Combatting Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law.Footnote 24 As emerges from the title, the focus of the decision is the approximation of Member States’ laws regarding certain offences involving xenophobia and racism, whereas it does not include any references to other types of motivation, such as gender or sexual orientation.
The Framework Decision 2008/913/JHA should have been implemented by Member States by November 2010. However, the implementation was less effective than expected: not all the Member States have adapted their legal framework to the European provisions.Footnote 25 Moreover, in the countries where the implementation occurred, the legislative intervention followed differing national approaches to hate speech, either through the inclusion of the offence within the criminal code or through the adoption of special legislation on the issue. The choice is not without effects, as the procedural provisions applicable to special legislation may differ from those applicable to offences included in the criminal code.
Given the limited effect of the hard law approach, the EU institutions moved to a soft law approach regarding hate speech (and, more generally, also illegal content).Footnote 26 Namely, EU institutions moved toward the use of forms of co-regulation where the Commission negotiates a set of rules with the private companies, under the assumption that the latter will have more incentives to comply with agreed-upon rules.Footnote 27
As a matter of fact, on 31 May 2016, the Commission adopted a Code of Conduct on countering illegal hate speech online, signed by the biggest players in the online market: Facebook, Google, Microsoft, and Twitter.Footnote 28 The Code of Conduct requires that the IT company signatories to the code adapt their internal procedures to guarantee that ‘they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary’.Footnote 29 Moreover, according to the Code of Conduct, the IT companies should provide for a removal notification system which allows them to review the removal requests ‘against their rules and community guidelines and, where necessary, national laws transposing the Framework Decision 2008/913/JHA’.
As is evident, the approach taken by the European Commission is more focused on the timely removal of allegedly hateful content than on the procedural guarantees that such a private enforcement mechanism should adopt in order not to unreasonably limit the freedom of speech of users. The most recent evaluation of the effects of the Code of Conduct on hate speech shows an increased number of notifications that have been evaluated and eventually led to the removal of hate speech content within an ever-shorter time frame.Footnote 30
In order to achieve such results, the signatory companies adopted a set of technological tools for assessing and evaluating the content uploaded on their platforms. In particular, they fine-tuned their algorithms in order to detect potentially harmful content.Footnote 31 According to the figures provided by the IT companies regarding flagged content, human labour alone could not achieve such a task.Footnote 32 However, such algorithms may only flag content based on certain keywords, which are continuously updated but always lag behind the evolution of the language. And, most importantly, they may still misinterpret context-dependent wording.Footnote 33 Hate speech is a type of language that is highly context sensitive, as the same word may radically change its meaning when used in different places over time. Moreover, algorithms may be improved and trained in one language, but not in other languages that are less prominent in online communication. As a result, an algorithm that works only through the classification of certain keywords cannot attain the level of complexity of human language and runs the risk of producing unexpected false positives and false negatives in the absence of context.Footnote 34
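The risk of false positives and false negatives can be made concrete with a hedged toy evaluation (the words, examples, and labels are invented for illustration): a pure keyword matcher flags a journalistic quotation of a slur (false positive) and misses a coded insult that avoids the listed keywords (false negative).

```python
KEYWORDS = {"vermin", "subhuman"}  # illustrative blocklist

def keyword_flag(text: str) -> bool:
    return any(k in text.lower() for k in KEYWORDS)

# (text, is_actually_hate_speech) -- hand-labelled toy examples
samples = [
    ("they are vermin and must leave", True),                          # caught: true positive
    ("the politician was accused of calling them 'vermin'", False),    # flagged quotation: false positive
    ("we all know what 'those people' deserve", True),                 # coded language: false negative
    ("lovely weather today", False),                                   # ignored: true negative
]

tp = sum(1 for t, y in samples if keyword_flag(t) and y)
fp = sum(1 for t, y in samples if keyword_flag(t) and not y)
fn = sum(1 for t, y in samples if not keyword_flag(t) and y)

print("precision:", tp / (tp + fp))  # 0.5 -- half the removals hit lawful speech
print("recall:", tp / (tp + fn))     # 0.5 -- half the hateful posts slip through
```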
15.2.2 The Human Intervention in Hate Speech Detection and Removal
One of the strategies able to reduce the risk of structural over-blocking is the inclusion of some human involvement in the identification and analysis of potential hate speech content.Footnote 35 Such human involvement can take different forms, either internal content checking or external content checking.Footnote 36
In the first case, IT companies allocate to teams of employees the task of verifying sensitive cases, where the algorithm was not able to determine whether the content is contrary to community standards.Footnote 37 Given the high number of doubtful cases, the employees are subject to a stressful situation.Footnote 38 They are asked to evaluate the potentially harmful content in a very short time frame, in order to reach a decision on whether to take the content down. This decision then provides additional feedback from which the algorithm will learn. In this framework, the algorithms automatically identify pieces of potentially harmful content, and the people tasked with confirming this barely have time to make a meaningful decision.Footnote 39
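The workflow can be sketched as follows (a hypothetical illustration: the confidence thresholds, queue, and feedback step are invented and do not describe any company's actual system):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewItem:
    text: str
    model_score: float               # the algorithm's confidence that the content is harmful
    human_label: Optional[bool] = None

review_queue = []
training_feedback = []

def triage(item: ReviewItem, auto_remove_at: float = 0.95, auto_keep_at: float = 0.05) -> str:
    """Clear-cut cases are handled automatically; doubtful ones go to human reviewers."""
    if item.model_score >= auto_remove_at:
        return "removed automatically"
    if item.model_score <= auto_keep_at:
        return "kept automatically"
    review_queue.append(item)
    return "queued for human review"

def human_decision(item: ReviewItem, is_harmful: bool) -> str:
    """The moderator's verdict is applied and fed back as training data."""
    item.human_label = is_harmful
    training_feedback.append((item.text, is_harmful))
    return "removed" if is_harmful else "kept"

item = ReviewItem("post quoting a slur in a news report", model_score=0.62)
print(triage(item))                  # queued for human review
print(human_decision(item, False))   # kept; the verdict is recorded for retraining
```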
External content checking instead involves ‘trusted flaggers’ – that is, individuals or entities considered to have particular expertise and responsibilities for the purposes of tackling hate speech. Examples of such notifiers range from individuals and organised networks of private organisations, civil society organisations, and semi-public bodies, to public authorities.Footnote 40
For instance, YouTube defines trusted flaggers as individual users, government agencies, and NGOs that have identified expertise, (already) flag content frequently with a high rate of accuracy, and are able to establish a direct connection with the platform. It is interesting to note that YouTube does not fully delegate the content detection to trusted notifiers but rather affirms that ‘content flagged by Trusted Flaggers is not automatically removed or subject to any differential policy treatment – the same standards apply for flags received from other users. However, because of their high degree of accuracy, flags from Trusted Flaggers are prioritized for review by our teams’.Footnote 41
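A hedged sketch of what such prioritisation might look like in code (the ordering rule is invented; YouTube has not published its queuing logic): every flag is judged against the same guidelines, but trusted flaggers' reports are reviewed first.

```python
import heapq

def review_order(flags):
    """Order flagged items for human review: reports from trusted flaggers
    (historically more accurate) are reviewed first, but every flag is
    judged against the same community guidelines."""
    queue = []
    for i, flag in enumerate(flags):
        # Lower number = higher priority; the index breaks ties in FIFO order.
        priority = 0 if flag["trusted_flagger"] else 1
        heapq.heappush(queue, (priority, i, flag))
    while queue:
        _, _, flag = heapq.heappop(queue)
        yield flag["video_id"]

flags = [
    {"video_id": "abc", "trusted_flagger": False},
    {"video_id": "def", "trusted_flagger": True},
    {"video_id": "ghi", "trusted_flagger": False},
]
print(list(review_order(flags)))  # -> ['def', 'abc', 'ghi']
```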
15.3 The Open Questions in the Collaboration between Algorithms and Humans
The added value of the human intervention in the detection and removal of hate speech is evident; nonetheless, concerns may still emerge as regards such an involvement.
15.3.1 Legal Rules versus Community Standards
As hinted previously, both the algorithms and the humans involved in the detection and removal of hate speech evaluate content vis-à-vis the community standards adopted by each platform. This distinction is clearly stated in the YouTube trusted flaggers programme, which specifies that ‘the Trusted Flagger program exists exclusively for the reporting of possible Community Guideline violations. It is not a flow for reporting content that may violate local law. Requests based on local law can be filed through our content removal form’.
These standards, however, do not fully overlap with the legal definition provided by EU law, pursuant to the Framework Decision 2008/913/JHA.
Table 15.1 shows that the definitions provided by the IT companies widen the scope of the prohibition on hate speech to sex, gender, sexual orientation, disability or disease, age, veteran status, and so forth. This may be interpreted as the achievement of a higher level of protection. However, the breadth of the definition is not always coupled with a correspondingly detailed definition of the selected grounds. For instance, the YouTube community standards list the previously mentioned set of attributes, providing some examples of hateful content. But the standard only sets out two clusters of cases: the encouragement of violence against individuals or groups based on the attributes, such as threats, and the dehumanisation of individuals or groups (for instance, calling them subhuman, comparing them to animals, insects, pests, disease, or any other non-human entity).Footnote 45 The Facebook Community policy provides a better example, as it includes a more detailed description of the increasing levels of severity attached to three tiers of hate speech content.Footnote 46 In each tier, keywords are provided to show the type of content that will be identified (by the algorithms) as potentially harmful.
| Facebook definitionFootnote 42 | YouTube definitionFootnote 43 | Twitter definitionFootnote 44 | Framework Decision 2008/913/JHA |
|---|---|---|---|
| | Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as: | Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories. | |
As a result, the inclusion of such wide hate speech definitions within the Community Guidelines or Standards makes them de facto rules of behaviour for users of such services.Footnote 47 The IT companies are thereby allowed to evaluate a wide range of potentially harmful content published on their platforms, even though this content may not be illegal according to the Framework Decision 2008/913/JHA.
This has two consequences. First, there is an extended privatisation of enforcement as regards conduct that is not covered by legal provisions, with the risk of excessive interference with users' right to freedom of expression.Footnote 48 Algorithms deployed by IT companies will then have the power to draw the often-thin line between the legitimate exercise of the right to free speech and hate speech.Footnote 49
Second, the extended notion of harmful content provided by community rules imposes a wide obligation on platforms regarding the flow of communication. This may conflict with the liability regime adopted pursuant to relevant EU law, namely the e-Commerce Directive, which establishes a three-tier distinction of intermediary liability and, most importantly, prohibits any general monitoring obligation on ISPs pursuant to art. 15.Footnote 50 As will be addressed later, in the section on liability, striking the balance between sufficient incentives to block harmful content and over-blocking effects is crucial to safeguard the freedom of expression of users.
15.3.2 Due Process Guarantees
As a consequence of the previous analysis, the issue of users' procedural guarantees emerges.Footnote 51 A first question relates to the availability of internal mechanisms that allow users to be notified about potentially harmful content, to be heard, and to review or appeal against the decisions of IT companies. Although the strongest position safeguarding freedom of expression and the fair trial principle would suggest that any restriction (i.e., any removal of potentially harmful content) should be subject to judicial intervention,Footnote 52 the number of decisions adopted on a daily basis by IT companies does not allow for the intervention of potential victims and offenders, or of the judicial system. It should be noted that the Code of Conduct does not provide for any specific requirement in terms of judicial procedures or alternative dispute resolution mechanisms; thus it is left to the IT companies to introduce an appeal mechanism.
Safeguards to limit the risk of removal of legal content are provided instead in the Commission Recommendation on Tackling Illegal Content Online,Footnote 53 which includes hate speech within the wider definition of illegal content.Footnote 54 The Recommendation points to automated content detection and removal and underlines the need for counter-notice in case of removal of legal content. The procedure involves an exchange between the user and the platform, which should provide a reply: where the user provides evidence that the content may not be qualified as illegal, the platform should restore the removed content without undue delay or allow the user to re-upload it; in case of a negative decision, the platform should give reasons for that decision.
Among the solutions proposed by the signatories to the Code of Conduct, Google provides a review mechanism allowing users to appeal against the decision to take down uploaded content.Footnote 55 The evaluation of the justifications provided by the user is then processed internally, and the final decision is sent to the user afterward, with limited or no explanation.
A different approach is adopted by Facebook. In September 2019, the social network announced the creation of an ‘Oversight Board’.Footnote 56 The Board has the task of deciding appeals in selected cases that address potentially harmful content. Although the detailed regulation concerning the activities of the Board is still to be drafted, it is clear that it will not be able to review all the content under appeal.Footnote 57 Although this approach has been praised by scholars, several questions remain open and, at the moment, unanswered: the transparency in the selection of the people entrusted with the role of adjudication, the type of explanation for the decisions taken, the risk of capture (in particular for the Oversight Board), and so on.
15.3.3 Selection of Trusted Flaggers
As mentioned previously in Section 15.2.2, the intervention of trusted flaggers in content detection and removal became a crucial element in improving the results of that process. The selection process used to identify and recruit trusted flaggers, however, is not always clear.
According to the Commission Recommendation, the platforms should ‘publish clear and objective conditions’ for determining which individuals or entities they consider as trusted flaggers. These conditions include expertise and trustworthiness, and also ‘respect for the values on which the Union is founded as set out in Article 2 of the Treaty on European Union’.Footnote 58
Such a level of transparency is not matched in practice: although the Commission monitoring exercise provides data regarding at least four IT companies, with a percentage of notifications received from users vis-à-vis trusted flaggers as regards hate speech,Footnote 59 apart from the previously noted YouTube programme, none of the other companies provides a procedure for becoming a trusted flagger. Nor is any guidance provided on whether the selection of trusted notifiers is a one-time accreditation process or rather an iterative process in which the privilege is monitored and can be withdrawn.Footnote 60
This issue should not be underestimated, as the risk of rubberstamping the decisions of trusted flaggers may lead to over-compliance and excessive content takedown.Footnote 61
15.3.4 Liability Regime
When IT companies deploy algorithms and recruit trusted flaggers in order to proactively detect and remove potentially harmful content, they may run the risk of losing their exemption from liability under the e-Commerce Directive.Footnote 62 According to art. 14 of the Directive, hosting providers are exempted from liability when they meet the following conditions:
– Service providers provide only for the storage of information at the request of third parties;
– Service providers do not play an active role of such a kind as to give it knowledge of, or control over, that information.
In its decision in L’Oréal v. eBay,Footnote 63 the Court of Justice clarified that the fact that an online platform stores content (in that case, offers for sale), sets the terms of its service, and receives revenues from that service does not deprive the hosting provider of the exemptions from liability. By contrast, this may happen when the hosting provider ‘has provided assistance which entails, in particular, optimising the presentation of the offers for sale in question or promoting those offers’.
This indicates that the active role of the hosting provider is to be found only where it intervenes directly in user-generated content.Footnote 64 If the hosting provider adopts technical measures to detect and remove hate speech, does it forfeit its neutral position vis-à-vis the content?
The liability exemption may still apply only if two other conditions set by art. 14 of the e-Commerce Directive are met. Namely,
– hosting providers do not have actual knowledge of the illegal activity or information and, as regards claims for damages, are not aware of facts or circumstances from which the illegal activity or information is apparent; or
– upon obtaining such knowledge or awareness, they act expeditiously to remove or to disable access to the information.
It follows that proactive measures taken by the hosting provider may result in that platform obtaining knowledge or awareness of illegal activities or illegal information, which could thus lead to the loss of the liability exemption. However, if the hosting provider acts expeditiously to remove or to disable access to content upon obtaining such knowledge or awareness, it will continue to benefit from the liability exemption.
From a different perspective, it is possible that the development of technological tools may lead to a reverse effect as regards the monitoring obligations applied to IT companies. According to art. 15 of the e-Commerce Directive, no general monitoring obligation may be imposed on hosting providers as regards illegal content. In practice, however, algorithms may already perform such tasks. Would this indirectly legitimise monitoring obligations imposed by national authorities?
This is the question posed by an Austrian court to the CJEU as regards hate speech content published on the social platform Facebook.Footnote 65 The preliminary reference addressed the following case: in 2016, the former leader of the Austrian Green Party, Eva Glawischnig-Piesczek, was the subject of a set of posts published on Facebook by a fake account. The posts included rude comments, in German, about the politician, along with her image.Footnote 66
Although Facebook complied with the injunction of the first instance court, blocking access within Austria to the original image and comments, the social platform appealed against the decision. After the appeal decision, the case reached the Oberste Gerichtshof (Austrian Supreme Court). Upon analysing the case, the Austrian Supreme Court affirmed that Facebook can be considered an abettor to the unlawful comments; thus it may be required to take steps to prevent the repeated publication of identical or similar wording. However, in this case, an injunction imposing such a proactive role on Facebook could indirectly impose a monitoring role, in conflict not only with art. 15 of the e-Commerce Directive but also with the previous jurisprudence of the CJEU. Therefore, the Supreme Court decided to stay the proceedings and present a preliminary reference to the CJEU. The Court asked, in particular, whether art. 15(1) of the e-Commerce Directive precludes a national court from ordering a hosting provider, who has failed to expeditiously remove illegal information, not only to remove the specific information but also other information that is identical in wording.Footnote 67
The CJEU decided the case in October 2019. The decision argued that, as Facebook was aware of the existence of illegal content on its platform, it could not benefit from the exemption from liability applicable pursuant to art. 14 of the e-Commerce Directive. In this sense, the Court affirmed that, according to recital 45 of the e-Commerce Directive, national courts cannot be prevented from requiring a hosting provider to stop or prevent an infringement. The Court then followed the interpretation of the AG in the case,Footnote 68 affirming that no violation of the prohibition of monitoring obligations provided in art. 15(1) of the e-Commerce Directive occurs if a national court orders a platform to stop and prevent illegal activity where there is a genuine risk that the information deemed to be illegal can be easily reproduced. In these circumstances, it was legitimate for a court to prevent the publication of ‘information with an equivalent meaning’; otherwise the injunction would simply be circumvented.Footnote 69
Regarding the scope of the monitoring activity allocated to the hosting provider, the CJEU acknowledged that the injunction cannot impose excessive obligations on an intermediary and cannot require an intermediary to carry out an independent assessment of equivalent content deemed illegal, so that automated technologies could be used to automatically detect, select, and take down equivalent content.
The CJEU decision seeks as much as possible to strike a balance between freedom of expression and the freedom to conduct a business, but the wide interpretation of art. 15 of the e-Commerce Directive can have indirect negative effects, in particular when looking at the opportunity for social networks to monitor, through technological tools, the upload of identical or equivalent information.Footnote 70 This approach safeguards the incentives for hosting providers to verify the availability of harmful content without incurring additional levels of liability. However, the use of technical tools may pave the way to additional false positives, as they may remove or block content that is lawfully used, such as journalistic reporting on a defamatory post – thus opening up again the problem of over-blocking.
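To illustrate the technical distinction at stake, a hedged sketch (the blocked post and threshold are invented): identical re-uploads can be caught with an exact hash, whereas ‘equivalent’ wording requires fuzzy matching, and the behaviour of that fuzzy step depends entirely on an arbitrary similarity threshold – a low threshold would also sweep up a journalistic report quoting the unlawful post.

```python
import hashlib
from difflib import SequenceMatcher

BLOCKED_POST = "Politician X is a corrupt traitor"   # content ruled unlawful (invented example)
BLOCKED_HASH = hashlib.sha256(BLOCKED_POST.lower().encode()).hexdigest()

def is_identical(text: str) -> bool:
    """Exact (case-insensitive) re-uploads match the stored hash."""
    return hashlib.sha256(text.lower().encode()).hexdigest() == BLOCKED_HASH

def is_equivalent(text: str, threshold: float = 0.8) -> bool:
    """Fuzzy matching catches rewordings -- and, depending on the threshold,
    some lawful reuse such as quotation."""
    return SequenceMatcher(None, text.lower(), BLOCKED_POST.lower()).ratio() >= threshold

reupload = "politician x is a corrupt traitor"
news_report = "A court found the post calling Politician X 'a corrupt traitor' unlawful"

print(is_identical(reupload), is_equivalent(reupload))        # True True
print(is_identical(news_report), is_equivalent(news_report))  # False; the fuzzy result depends on the threshold chosen
```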
15.4 Concluding Remarks
Presently, we are witnessing an intense debate about technological advancements in algorithms and their deployment in various domains and contexts. In this context, content moderation and communication governance on digital platforms have emerged as a prominent but increasingly contested field of application for automated decision-making systems. Major IT companies are shaping the communication ecosystem in large parts of the world, allowing people to connect in various ways across the globe, but also offering opportunities to upload harmful content. The rapid growth of hate speech content has triggered the intervention of national and supranational institutions in order to restrict such unlawful speech online. In order to overcome the differences emerging at the national level and enhance the opportunity to engage international IT companies, the EU Commission adopted a co-regulatory approach, inviting regulators and regulatees to the same table so as to define shared rules.
This approach has the advantage of providing incentives for IT companies to comply with shared rules, while non-compliance with voluntary commitments does not lead to any liability or sanction. Thus the risk of over-blocking may be avoided or at least reduced. Nonetheless, considerable incentives to delete not only illegal but also legal content exist. The community guidelines and standards presented herein show that the definition of hate speech and harmful content is not uniform, and each platform may set the boundaries of such concepts differently. When algorithms apply criteria defined on the basis of such differing concepts, they may unduly limit the freedom of speech of users, as they will lead to the removal of legal statements.
The Commission approach explicitly demands proactive monitoring: ‘Online platforms should, in light of their central role and capabilities and their associated responsibilities, adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive’. But this imposes de facto monitoring obligations which may be carried out through technical tools, which are far from being without flaws and bias.
From the technical point of view, the introduction of the human in the loop, as in the cases of trusted flaggers or the Facebook Oversight Board, does not resolve the questions of effectiveness, accessibility, and transparency of the mechanisms adopted. Both strategies, however, show that some space for stronger accountability mechanisms can be found, though the path to be pursued is still long.