
Continuities and disruptions in the National Council of Justice’s strategy of AI implementation in Brazil: the data-seafarers of Justice 4.0

Published online by Cambridge University Press:  04 December 2024

Luisa Hedler*
Affiliation:
Department of Business Humanities and Law, Copenhagen Business School, Frederiksberg, Denmark

Abstract

The introduction of algorithms in courts is currently the object of much scrutiny, as attempts are made to strike a balance between increasing the efficiency of the legal system and avoiding the associated risks. This paper explores how the judiciary organisations responsible for implementing algorithms describe the temporal demands of continuity and change, through a case-study of the Brazilian National Council of Justice (CNJ). Through its centralised, collaborative and open-access platform, the CNJ presents algorithms as a necessary step to deal with an excessive case-load and as a rationality-instituting mechanism for a currently dysfunctional situation. It attempts to deal with risks of disruption by placing itself in a supervisory role over all algorithms developed by courts across the country, ensuring the quality of the data and excluding the moment of judicial decision-making from automation, but this does not exclude the representation of the new technology as an instrument for implementing specific doctrinal positions.

Type: Article
Copyright: © The Author(s), 2024. Published by Cambridge University Press

To sail is necessary: to live is not.

Plutarch

1 Introduction

‘I would like to remind you of [… the] Portuguese poet, Fernando Pessoa, who said “To sail is necessary” […] And I think we have to sail, not in the seas of the era of discovery, but in the sea of discovery of this network, this new methodology, this new world, so that we can innovate. To innovate is risky, but we have to take the risk, lest we stagnate and impede our way forward.’ (canaltjrn 2021)

It was with these nautical metaphors that auxiliary judge Fábio Porto conceptualised the mixture of excitement, risk and inevitability of change during the launch ceremony of the Justice 4.0 Programme, an initiative of the Brazilian National Council of Justice (CNJ)Footnote 1 to incorporate many digital technologies – including the development of AI applications – into the Brazilian judiciary. In a mixture of management imperatives and metaphors, technological futures and complicated colonial pasts, he relates the future to the past as he justifies the need for technological change.

Developments in natural language processing have allowed incredible amounts of data to become machine readable, serving as an extensive basis for training machine learning algorithms that can identify correlations on a scale hitherto unimagined (Hildebrandt 2016, 56–57). In this sense, Brazil is not alone in making use of algorithms (as well as other digitalisation technologies) within the judiciary. Aside from more prosaic ‘IT solutions’ and case management tools used for the administration of courts (Realing 2020), judiciaries in Estonia (Council of Europe 2022) and China (Papagianneas 2022) have invested heavily in digitalising the whole procedure as much as possible and instituting ‘online courts’. There is also the heavily debated use of algorithms for risk assessment of recidivism in US criminal law (Ugwudike 2020). There has been no shortage of both institutional and academic attention paid to this matter, with a proliferation of books and research papers,Footnote 2 guidelines, chartersFootnote 3 and regulationsFootnote 4 regarding the advantages and pitfalls of introducing these new technologies into a legal system.

Returning to the Brazilian judiciary and its metaphors, these discussions point to the precarious balance between the promise of revolutionary changes in the digitalisation and automation of many court functions and the maintenance of the core values of law – that is, allowing the legal system to innovate and change, but not at the cost of its own de-characterisation. The path chosen by the CNJ – enthusiastically described and extensively defended at the launch event of Justice 4.0 – has been to embark on an open-source and collaborative approach to the use of AI applications, supervised and ultimately controlled by the CNJ itself. This includes centralising the models for the machine readability of individual cases and concentrating all applications, training models and individual projects already conducted by courts into a single platform hosted in the cloud, where they become available for all participating tribunals and courts to use and modify to their needs (Conselho Nacional de Justiça 2021b).

But within these articulations of their strategies for algorithmic regulation in the judiciary, as the shape of the ‘disruptive innovations’ is outlined, the issue of temporality emerges repeatedly. Discussing organisational change leads to discussions of the problems of the past and expectations for a better future, but there are also larger considerations about how the slowly changing legal system follows the rest of society, the consequences of the speed at which social conflicts get resolved and how the use of new technology affects these time-related issues.

The objective of this paper is to explore how judiciary organisations describe their own implementation of algorithms into the legal system, especially with regard to how they handle its temporal demands. This will be done through an analysis of the CNJ’s Justice 4.0 Programme, based on document analysis of legislation, regulations, news articles, management reports and informative materials produced by the CNJ, supplemented by a content analysis of the launch event of the programme, which occurred in a webinar format in February 2021. I have found that the CNJ strategy emerges from a perceived need for complexity reduction as the Brazilian judiciary faces a ‘numeric crisis’, both in terms of an enormous court case-load and as a response to a perception of changes in the society that is its environment. The CNJ attempts to mitigate the risks of algorithms by insulating the moment of judicial decision-making and conceptualising the changes as something constrained to the administrative sphere – a claim which is partially undermined by explicit references to the specific normative legal theories that the implementation of algorithms makes possible.

2 The temporality of law in the face of (algorithmic) change

While poetic quotes about the sea and the colonial history of the Portuguese navigations might seem strange in the context of technological change in the judiciary, they appear in the CNJ’s discourse as an attempt to connect the accumulated experiences of the past to the projection of a certain attitude towards technological change in the future. By evoking the historical experience of the Portuguese explorers, it reinforces a disposition towards a future of radical change, in which there is both a dimension of great risk and the possibility of abundant rewards – and this is not the only moment when time gets brought up in the context of the implementation of new technologies.

From the simple comparison between the speed at which machines and their human counterparts work, to the fact that machines do not need to stop to rest, questions of speed and compatibility with human rhythms and thinking have been dealt with since the Industrial Revolution. When it comes to the legal system, discussing the implementation of new technologies happens against the backdrop of existing tensions between the speed and quality of legal decisions, where speed would lead to faster conflict-solving, but properly analysing the complexity of a given situation takes time (Dworkin 1986).

The legal system itself is often perceived as having temporal problems in relation to the rest of society, being described as resolving conflicts at too slow a pace (Cobbe 2020). Time – or rather, timeliness – is considered an essential element of the legal principle of access to justice (CEPEJ 2020), and is used as one of the main arguments for furthering the digitalisation and automation of courts (Shi et al. 2021). Considering the various ways in which time emerges – both as a problem and a solution – in the discussion of algorithms and the legal systemFootnote 5 requires a theoretical approach that is sensitive to time as a conceptual category beyond the simple measurement of intervals. This calls for a notion of temporality, that is, of the human interpretation and meaning-making that is added to the inexorable passage of time (Pickering 2004, 273). Applying a temporality framework to the CNJ’s descriptions of algorithms allows for interrogations of how previous experience (their distant and recent past, institutional accounts, notions about how the legal system has worked so far) is articulated and represented in the present, and how this, in turn, informs the different expectations that they have with regard to the new technologies that are being implemented.

The fact that these technological changes are being applied to courts, which are institutions that deal with jurisdictional activities, demands that special attention be paid to the interplay between temporality and law. François Ost, in Le Temps du Droit (2005), dedicates himself to exploring the foundational role of law in instituting a negentropic (that is, entropy-denying, sense-making) temporality in society, and how strategic uses of the past and the future are essential to a rationalising project in society, one in which the rule of law is made possible. Most importantly, Ost also identifies four destructive tendencies that threaten this rationalisation of time, which he calls ‘risk of de-temporalization’ (Ost 2005, 25). He conceptualises his categories as ‘legal-philosophical archetypes’ (Ost 2005, 19), inasmuch as they relate to the normativity of law in its own terms: rather than corresponding completely to reality, they refer to competing or concurrent tendencies. The first type of de-temporalisation, called ‘nostalgia for eternity’, is characterised by an overemphasis on the past, which is used to reject any type of change that is not geared towards returning to the ‘golden age of the past’ (Ost 2005, 25). The second one, ‘entropy vertigo’, is characterised by the absence of connection to the past which, almost ironically, has as a consequence the inability to connect with the future (Ost 2005, 29). Here, the lack of connection to the past – historicity and relatability – leaves the present moment disconnected from experience, and thereby incapable of projecting anything into the future, paralysing it in an eternally uncertain present. The third, characterised by an excess of determination of the future, is aptly named the ‘temptation of determinism’. It prevents the potential for change by over-determining the future, leaving no space for the unexpected or for deviations (Ost 2005, 31). Finally, the fourth risk of de-temporalisation, called dischrony, does not refer to an excess or lack of future or past, but to an imbalance or insufficient linkage between two different social times – such as the velocity of algorithmic calculations and the time it takes a human being to thoroughly reflect on an issue.

In summary, looking at the temporalities evoked by the descriptions and justifications of the presence of algorithms in the Brazilian legal system – especially when this relates to possible tendencies towards irrational, de-temporalising states – allows for an interesting outlook on how both problems and solutions are articulated, and the role these new technologies would have in solving them. It would also allow for the identification of potential de-temporalising tendencies within those same solutions.

3 The risks of algorithms in the legal system

When talking about introducing new technologies into a system, it is difficult not to engage with time, since questions of continuity, change and even disruption have to do with the effects of time in a given organisation (Hernes 2022, 240). Critically engaging with the introduction of new technology to courts already includes notions of projecting a certain future. There is an exponentially growing literature on law and technology engaging in such exercises, whether in an encouraging way, projecting into the future the many possible advantages of bringing the rule of law to hitherto unthinkable fields,Footnote 6 or projecting negative consequences of certain uses of technology to warn against them.Footnote 7

While there are discussions around the possibility of a singularity of law, where judges and lawyers could be completely replaced by a sufficiently complex algorithm that could analyse and judge new cases (Alarie 2016; Cobbe 2020), the particular circumstances of the CNJ (and of most judiciaries which are using algorithms in their systems) mean that new technologies are relied upon to assist and augment, rather than replace, human cognition. Even so, whenever the human being is dislocated as the central or sole agent in a given task, be it within courts or in other parts of society, there have been concerns within legal scholarship about the ability of legal categories and procedures to fully grasp these arrangements (Bennett Moses 2016; Hildebrandt 2016) and about whether they would render law, as a form of regulation, incapable of capturing and regulating their actions – that is, deeply disrupting the role of the legal system (Brownsword 2019).

In more directly time-related terms, the speed at which technology develops, as well as a general feeling of accelerating time in contemporary society (Virilio 2012), brings with it the fear that law will become more and more incapable of ‘catching up’ with or matching the speed of society (here disruption would come in the form of dischrony, as described by Ost) – a situation in which the incorporation of technology into law could be a solution rather than a problem. This very possibility of speeding the legal system up, however, is also the origin of many concerns regarding the quality of decision-making, such as the possibility of permanently enshrining historic discriminations and biases into the functioning of the legal system (Pasquale and Cashwell 2018) or creating gaps of responsibility in the case of mistakes (Delacroix 2018).

Aside from the debates in the literature on law and technology, there are many ongoing discussions about the risks of the use of algorithms in different sectors of society – of which law is no exception. Discussions in the European Parliament about the Artificial Intelligence Act, controversies about the use of risk-assessment algorithms in US courts and the presence of racial bias, and ‘domestic’ debates about the effect of social media algorithms in spreading misinformation and even harming democracy all contribute to an environment where algorithms are seen as an object of scrutiny, capable of projecting many different types of future – some of them more beneficial than others.

When it comes to the situation of the Brazilian judiciary during the launch of the Justice 4.0 Program – and the CNJ’s pivotal role in co-ordinating the implementation of what it calls artificial intelligence applications – the potential for future harms emerges as a central discussion point, but, contrary to many of the discussions identifying algorithms themselves as the larger risk, the identification of undesirable outcomes is much more varied and diverse.

4 The National Council of Justice and the Justice 4.0 Program

The National Council of Justice (CNJ) was created in 2004 by Constitutional Amendment n. 45 of 2004Footnote 8 and officially implemented a year later. It is regulated by Art. 103-B of the Federal Constitution of Brazil, and is composed of fifteen Council members, divided between nine judges from different levels and jurisdictions and six non-judges: public prosecutors, lawyers selected by the national council of the Bar association and two individuals of ‘notable legal knowledge’ selected by the Legislative.Footnote 9 The CNJ’s function is to ‘control the administrative and financial operation of the judicial branch and the proper discharge of official duties by judges’ (Brasil 1988). This ranges from disciplining any member of the judiciary to ensuring the autonomy of the judiciary in the country, issuing regulatory acts and recommendations, and keeping and publishing statistics. Moreover, as enunciated in Art. 103-B paragraph 4, II, its activities include assuring the compliance of the judiciary with Art. 37 of the Constitution, which lists all the general principles and duties of public administration – including transparency and efficiency, the latter being among the main justifications not only for the implementation of algorithms into court management but also for the digitalisation of justice in broader terms.

But this increase in efficiency is not only a matter of optimisation; it is also a response to a perceived ‘numeric crisis’ in terms of the sheer volume of cases that await judgment, failing the constitutional imperative of a reasonable duration of proceedings (Roque 2021). The Federal Republic of Brazil is a country of continental dimensions, with a judiciary currently overloaded by a gargantuan case-load. According to the CNJ ‘Justice in Numbers’ report of 2021, there were 75 million pending cases in the Brazilian judiciary in December of that year – already a 2.1 million decrease compared to 2019, despite all the delays and adaptations caused by the pandemic (Conselho Nacional de Justiça 2021a).

In response to this situation, many courts have been employing their resources to develop their own administrative solutions, which include, in some cases, the development of algorithms. According to a report of the Getúlio Vargas Foundation, since 2018 all superior courts, many of the regional federal courts and a considerable number of state courts have developed or are in the process of developing their own AI projects – sixty-three different ones so far. While the majority were developed internally by the courts’ own IT divisions (forty-seven projects), thirteen are being developed in collaboration with private companies, three in collaboration with universities and one with ‘other entities’ (Salomão 2022, 65–69).

It is, therefore, in this mixture between the perception of a necessity for change and the proliferation of independent projects that the CNJ aims to bring a unified governance strategy through the Justice 4.0 Program.

4.1 Justice 4.0

The institutional website of the CNJ describes the Justice 4.0 Programme as ‘innovation and effectivity in the realization of Justice for all’, and its main purpose as ‘promoting access to Justice through actions and projects developed for collaborative use, and products that make use of new technologies and artificial intelligence’ (Conselho Nacional de Justiça 2021). The programme is co-funded by the UN Development Programme, a technical partnership which aims to ‘develop new methodologies, studies and tools for the promotion of innovation, with focus on the effectivity of Justice for all’ (UNDP 2020, 1).

Beyond the unification, monitoring and sharing of algorithms, the programme involves a plethora of digitalising initiatives with varying degrees of technical complexity. First, there is the implementation of ‘100 per cent digital judgment’,Footnote 10 which aims to extend the already existing initiatives of electronic procedure and progressively discontinue paper-based processes. This is complemented by the second initiative, the ‘Virtual Counter’, which replaces the in-person assistance services provided by a court with a video-conference format. A completely digitalised legal procedure would then be made available in a uniform way through the third initiative, the ‘Digital Platform of the Judiciary (DataJud)’, which in turn makes it possible for the combined data of all Brazilian courts to become machine readable. This goal is further developed by the fourth initiative, the implementation of the ‘Codex Platform’, which automatically transforms decisions and petitions into machine-readable text. The fifth, complementary, initiative consists of assisting all tribunals in making these changes to their data management, especially when it comes to publicising their case-law in the judiciary database. And finally, the sixth initiative is the development and dissemination of the ‘Sinapses’ platform, a common depository for all AI models developed by individual courts, both making them available for sharing between different courts and submitting them to CNJ control (Conselho Nacional de Justiça 2021).

Aside from the second initiative, which implements the possibility of contacting a court clerk remotely for information requests regarding cases, the remaining initiatives are quite relevant for the possibility (and quality) of developing, training and implementing algorithms in courts. While a 100 per cent digital procedure, from the first complaint all the way to the hearing and even the execution of the sentence, does not need algorithms to function, it eliminates the necessity of scanning physical documents, making them more machine readable from the outset. Preparing judicial documents to become machine-readable data is precisely the function of the Codex system, which in turn feeds the DataJud database in a unified manner. Currently, aside from developing and implementing these initiatives in all jurisdictions across Brazil, there is a concerted effort to teach both judges and non-legal workers in the courts about the new systems and how best to operate them (Conselho Nacional de Justiça 2024, 18). By the end of 2024, the CNJ expects that the level of digital integration will be so advanced that the entirety of Brazilian justice will be accessible from a unified web portal.
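For readers who want to picture the division of labour described above, the sketch below is a purely illustrative Python rendering of that pipeline – extracting machine-readable text from a court document and normalising it into a uniform record that is added to a shared pool. The field names, functions and example case number are hypothetical and bear no relation to the actual implementation of Codex or DataJud.

```python
# Purely illustrative sketch: NOT the actual Codex/DataJud code.
# It only mirrors the division of labour described above: extract text,
# normalise it into a uniform record, and add it to a shared pool on which
# models could later be trained.

from dataclasses import dataclass

@dataclass
class CaseRecord:
    case_id: str   # hypothetical unified case number
    court: str     # originating court
    doc_type: str  # e.g. 'decision' or 'petition'
    text: str      # machine-readable full text

def extract_text(raw_document: bytes) -> str:
    """Stand-in for the text-extraction step (Codex's role in the description)."""
    return raw_document.decode("utf-8", errors="ignore")

def normalise(case_id: str, court: str, doc_type: str, raw_document: bytes) -> CaseRecord:
    """Turn a court document into a uniform record (the 'basic sanitation' step)."""
    return CaseRecord(case_id, court, doc_type, extract_text(raw_document))

# A shared, centralised pool of machine-readable records (DataJud's role in the description).
shared_database: list[CaseRecord] = []

shared_database.append(
    normalise("0001234-56.2021.0.00.0001", "Hypothetical Court", "decision", b"Sentenca: ...")
)
print(len(shared_database), "record(s) in the shared pool")
```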

4.2 The Sinapses platform

Initially developed in 2017 by the Court of Justice of Rondônia, the Sinapses platform was adapted for countrywide availability in a partnership with the CNJ a year later. It is responsible for managing the supervised training, versioning and availability of AI models for the entire Brazilian judiciary (Conselho Nacional de Justiça 2020a). The platform is regulated by CNJ resolution n. 332/2020, which governs ‘ethics, transparency and governance in the production and use of Artificial Intelligence in the Judiciary’. The resolution entered into force before any legislation regulating the use of AI in courts (or in the government in general): while law n. 14129/2021 establishes principles and rules for the ‘digital government’, a law regulating artificial intelligence is still being analysed by the Brazilian Senate.

Article 3 of resolution n. 332/2020 defines the Sinapses platform as a ‘computational solution, maintained by the National Council of Justice, with the objective of storing, testing, training, distributing and auditing Artificial Intelligence models’. Every AI model made for the judiciary must be shared on the Sinapses platform, not only for auditing purposes but also so that other courts can potentially make use of it. In the name of the efficient use of the budget, the resolution forbids the development of new models when an already existing project has reached the same results and objectives.

5 Methods and data

This qualitative analysis was conducted within a case-study framework, as it allows for the collection of data from a multiplicity of sources in order to explore social and contextual dimensions and examine correlations between different sources of data (Roller 2019). The legal analysis of the CNJ’s regulation of algorithms is complemented by a qualitative content analysis of the launch event of the programme, which occurred in a webinar format in February 2021, as well as by document analysis of institutional reports and internal news produced regarding the Justice 4.0 Program since then.

The launch event of the Justice 4.0 Program was originally planned to be hosted in person by the Tribunal of Justice of the State of Rio Grande do Norte, but was instead held in a webinar format and hosted on the Tribunal’s YouTube channel. The event, which was open to the public but predominantly geared towards judges and other employees of the judiciary, took place over three days, in livestreams that lasted about three hours each. The webinar presented local initiatives and already functioning algorithms that would be – or already had been – made available through the Sinapses platform, but the main topic of discussion was the potential of this new programme to (positively) impact the Brazilian judiciary. It was subdivided into a number of panels that highlighted different aspects of the Justice 4.0 Program: beginning with a more general opening ceremony on the first day, with speakers such as Luiz Fux, president of the CNJ at the time, followed by a technical panel presenting the ‘technological solutions of Justice 4.0’. The second day focused on the range of different initiatives – both at a local and a regional level – which would be affected by or benefit from the centralisation of the programme. The panels, which had the more generic titles of ‘Innovation, the Pandemic and the Public Sector’ and ‘Innovative Structures and Practices in the Judiciary’, spoke of both technological and non-technological innovations, from public-facing bots that helped by answering questions regarding emergency benefits during the pandemic to a working group reflecting on the role of the judiciary in combating climate change and contributing to other sustainable development goals. The third day focused on litigation prevention, precedent management and integration strategies, featuring the intelligence centres within the judiciary and highlighting the role of technology in fulfilling their goals.

The majority of the speakers were judges, mostly working in the CNJ – out of thirty-two different speakers, fourteen worked for the CNJ in some capacity, and twenty-seven of them had job descriptions that explicitly included ‘judge’. Among the non-judge participants were administrative leaders such as the IT Director, who spoke on the first day during the more technical panel. Auxiliary judges of the CNJFootnote 11 were in the majority, which makes sense considering that many were especially allocated by the president of the CNJ to oversee the Justice 4.0 Programme alongside other technical initiatives. Among the remaining judges, there were representatives of federal, state and labour spheres, with a heavier representation of higher court judges during the opening, and a preponderance of first instance judges who were project leaders of innovation working groups within their jurisdictions on the second and third days. All three judge labour unions – the sectorial organisations of labour, federal and state judges – sent a representative who talked about their respective organisation’s work within the Justice 4.0 framework. The remaining speakers who were neither judges nor CNJ workers were private lawyers who were involved in relevant research and work groups. The only person who was neither a worker at the CNJ nor legally educated was a self-described ‘digital evangelist’ with a background in communication, who opened the second day of the webinar with a motivational speech on more general patterns of innovation and the digitalisation of society.

Out of the thirty-one speakers, only twelve were women, three of whom were panel moderators. This does not stray far from the gender make-up shown in national statistics on the Brazilian judiciary, where only 38 per cent of magistrates are women.Footnote 12 The ethnic make-up of the launch event was also consistent with current statistics on the Brazilian judiciary, with a preponderance of white speakers and very few visibly non-white panel members.Footnote 13

While a legal analysis of the current regulation allows for some insights on how the normative parameters relate to the potential temporal issues caused by the introduction of algorithms, moments like these, in which highly committed members of the Brazilian judiciary and the CNJ concomitantly celebrate their achievements and attempt to persuade the audience of the importance of their endeavour, provide examples of accounts (Orbuch 1997) – that is, attempts at explaining and justifying their behaviour, which allow for the articulation of how they temporally conceptualise the effect of algorithms in the legal system.

The transcripts of the event, as well as the other reports, were analysed through qualitative content analysis – that is, by separating and comparing relevant passages in the data through coding (Schreier 2012), with a theory-driven coding frame initially focused on three main categories (explanations of the function of the algorithms, continuity and disruption), complemented by data-driven coding of sub-categories. Curiously, after the President of the Council, Justice Luiz Fux, compared the current technological developments to the age of discoveries, where humanity navigates the web instead of the seas, many other speeches contained water- and navigation-related metaphors, which, if anything, facilitated the weaving of metaphors throughout the speeches and, subsequently, through this very text.
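As a purely illustrative aid to the method described above, the sketch below shows how a theory-driven coding frame with data-driven sub-categories can be represented and tallied. The sub-category labels, speaker identifiers and coded passages are hypothetical placeholders, not data from the study.

```python
# Minimal, hypothetical sketch of a coding frame and tally; the sub-categories
# and example passages are invented placeholders, not material from the study.
from collections import Counter

coding_frame = {
    # theory-driven main categories (from the text) with hypothetical data-driven sub-codes
    "function of algorithms": ["case triage", "text extraction"],
    "continuity": ["access to justice", "respect for precedent"],
    "disruption": ["speed of society", "risk of substitution"],
}

coded_passages = [
    # (speaker, main category, sub-category) – invented examples
    ("speaker_01", "function of algorithms", "case triage"),
    ("speaker_01", "disruption", "speed of society"),
    ("speaker_02", "continuity", "access to justice"),
]

# Count how often each main category appears across the coded material.
category_counts = Counter(category for _, category, _ in coded_passages)
print(category_counts)
```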

6 The data-seafarers of Justice 4.0 – temporality in the CNJ’s handling of algorithms

As previously discussed, it is almost impossible to talk about the implementation of a new technology without bringing up issues of time – if only to compare the ‘before’ and ‘after’ situations. When considering the diversity of sources, however, it might be useful to be aware of the constraints and possibilities of each one when it comes to expressions of temporality.

While laws and regulations can refer explicitly to the past, mostly through their preambles, the objective of an abstract enunciation of rules tends to be more concerned with how situations in the future will be handled (especially considering the temporal effects of legal norms themselves), which means that it is often easier to infer which possible futures they are trying to prevent or bring about than to trace where the motivation for each article came from. For example, resolution 332/2020, in its preamble, refers directly to the European Charter of Ethics on the use of Artificial Intelligence in Courts, explicitly singling out this document as inspiration, but otherwise does not contain many more references to the motivations behind its crafting.Footnote 14

Management reports and internal statistics, on the other hand, are themselves acts of organisational memory being communicated, as they represent the selectiveness of which aspects of the courts’ functioning are deemed worthy of note and communication – the same being true for institutional news websites. It is in live events – or written interviews – that recountings of a shared past and projections of the future come to the forefront, as they have a higher degree of informality and freedom of format than laws and regulations. So while regulations do reflect some future expectations in their wording and in the type of situation they expect to deal with, public speeches and interviews are a privileged site for collecting projections of the future, be they in the form of risks of future harm or promises of future benefit.

6.1 Resolution 332/2020 – regulating uses of AI in the judiciary

The concerns raised by legal scholars have been reflected in contemporary attempts to regulate the use of AI by courts. Some of the potential risks of disruption – such as an overreliance on past data that could lead to the stagnation of jurisprudence, or the epistemological incompatibility of languages between algorithm and legal system – are harder to address directly in regulatory form, but at a glance the CNJ resolution 332/2020, which regulates the ethics of introducing AI into courts, attempts to address these concerns by setting strict parameters for centralised control of the models. Aside from conditioning the use of AI on full compliance with the fundamental rights of the people subject to jurisdiction and prohibiting discriminatory outcomes, the regulation has an arsenal of concrete measures that give more substance to the promises of the larger principles. Art. 7, which regulates non-discrimination, institutes the requirement of homologation by the CNJ to check for potential discriminatory bias before an algorithm is implemented; if discrimination is identified, the project must be either corrected or discontinued. There is no further detailing of how exactly this auditing will happen, or of what the specific threshold of evidence for discrimination is – but this article does allow for some forms of legal contestation. In this context, the Sinapses platform, which is predominantly described as a tool for co-operation between courts and a ‘marketplace’ for AI solutions, acts as a centralising instance of control, especially considering that all algorithms must be uploaded to the platform, according to Art. 9, III.
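One schematic way to read this gatekeeping logic is as a pre-deployment check: a model is evaluated for discriminatory bias and either approved or sent back for correction or discontinuation. The sketch below is only an illustration of that reading – the resolution specifies neither a metric nor a threshold, so the disparity ratio and the 0.8 cut-off used here are assumptions, not the CNJ's actual auditing procedure.

```python
# Schematic reading of the resolution's gatekeeping logic; the metric and the
# threshold are invented for illustration and are not defined in the resolution.

def disparity_ratio(outcomes_group_a: list[int], outcomes_group_b: list[int]) -> float:
    """Ratio of favourable-outcome rates between two groups (a simple, common bias proxy)."""
    rate_a = sum(outcomes_group_a) / len(outcomes_group_a)
    rate_b = sum(outcomes_group_b) / len(outcomes_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def homologation_decision(ratio: float, threshold: float = 0.8) -> str:
    """Art. 7's logic, schematically: approve, or send back for correction/discontinuation."""
    return "approve for use" if ratio >= threshold else "correct or discontinue"

# Hypothetical evaluation outputs (1 = favourable outcome, 0 = unfavourable).
ratio = disparity_ratio([1, 1, 0, 1], [1, 0, 0, 0])
print(homologation_decision(ratio))  # prints 'correct or discontinue' for this invented data
```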

On a comparative note, it is worth mentioning that the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, published by the European Commission for the Efficiency of Justice (CEPEJ), was explicitly mentioned in the preamble of resolution n. 332, and was quite possibly a major source of inspiration for the drafting of the resolution. The situation in Europe is described, in Annex 1 of the Charter, as one where ‘public decision-makers are beginning to be increasingly solicited by a private sector wishing to sell these tools – which are sometimes “beta” versions, that is, they will evolve over time – integrated into public policies’ (CEPEJ 2018, 14). In Brazil, on the other hand, the CNJ’s resolution requires that even private ‘solutions’ developed for Brazilian courts be made available internally on the Sinapses platform, which means that, even if they are exempted from replication, they can still be audited.

This particular aspect was not widely advertised during the opening ceremony – while there were several instances of speakers affirming that the new AI development was ‘ours’, this referred to the judiciary in general, and the private sector was only mentioned in a negative light during the precedent management panels, in reference to malicious litigation operated by a few private law firms. It was only Judge Ângelo Vettorazzi, representative of the labour union of state judges (AMB), who pointed out the data integrity risks involved in using third-party platforms for video-conferencing, which could compromise the data safety of citizens.

The non-integrated state of digitalisation of the judiciary, on the other hand, was emphatically described by more than one auxiliary judge of the CNJ as a clear problem – or, more metaphorically put, as ‘a chaotic archipelago, full of little islands that barely communicated’. In this sense, while the sense of belonging and control over the algorithms is something that is constructed as a measure of certainty in the future of Brazilian AI in courts, the lack of integration is described as another ‘undesirable present’ that can be modified with the Justice 4.0 Program, especially by the speakers who observed the issue from a centralised standpoint.

Since the launch of the Justice 4.0 Programme, the subsequent reports up to 2024 have eloquently stressed the benefits of integrating the different systems, and plan the release, by the end of 2024, of a ‘Unified Portal of Judicial Services’ to extend this integration to citizens (Conselho Nacional de Justiça 2024). Every report since the launch has also stressed the collaborative aspects of the technological development, highlighting both the economic efficiency and the possibility for even smaller, less well-funded courts across the country to access models developed elsewhere (Conselho Nacional de Justiça 2022).

6.2 Data management as ‘basic sanitation’

Present in the regulation and constantly reiterated by the speakers at the launch event is the importance of preparing the data to be machine readable. The first requirements regarding the preparation of the data appear in Art. 6, under the rubric of ‘respect of fundamental rights’, and read: ‘when the development and training of intelligence models demands the usage of data, the samples have to be representative and observe the necessary cautions regarding sensitive personal data and secrecy of justice’ (Conselho Nacional de Justiça 2020b). There is also a chapter dedicated to data safety, which prioritises the use of data from ‘trustworthy sources’, especially the government.

On the judges’ side, there was a considerable effort from the CNJ to highlight the need to supply this unified database with raw data in order to form a reliable and uniform basis upon which the models can be trained and later applied. Auxiliary Judge Rafael Leite, in the panel dedicated to describing AI applications in the judiciary, first compared this uniformisation work of database building to the ‘basic sanitation’ project that serves as the basis for the more than thirty models already available on the platform. The Codex project is, after all, a natural language processing program of its own, designed to automate this task – but it requires the mobilisation of the individual courts to put it into practice in their own archives and case-loads. This sentiment was echoed later by the representatives of the ‘intelligence centres’ of the judiciary, who require this ‘basic sanitation’ in order to have access to a greater wealth of machine-readable data.

While the external temporal implications of this emphasis are not apparent at first glance, the zealousness regarding the quality of data production shows some level of sophistication in the discussion of algorithms, since data quality strikes at the heart of many epistemological issues that the literature has raised in relation to the temporal effects of the use of AI: how a seamless interface in the final product can, as Susser (2019) posits, obscure the incompleteness and possibly dubious quality of the data used to train a model; or how bias and discrimination are introduced by methodological choices and the availability of training data, among others. This connection between rights and data quality, however, only becomes apparent in the legislation, while the launch event set the focus on data collection in order to advance the strategic goals.

The centrality of the development and use of the Codex platform in subsequent reports shows how critical this goal is – especially because it is the gathering and uniformity of the data that can guarantee both the training of algorithms and the development of new strategies. Since March 2022, all electronic judicial data has run exclusively through Codex, and 91 per cent of national courts are already using it (Conselho Nacional de Justiça 2024).

6.3 Algorithms as a response to crisis

When it comes to representations of the past (or of the present, in cases where the promise of the new technologies lies in the future), the answer is surprisingly straightforward and uniform throughout the different sources: the motivation for implementing any changes to the judiciary, from the perspective of the CNJ, is firmly based on the temporal distortions created by the ‘numerical crisis’ of the judiciary, which produce an unsustainable situation in which law is already de-characterised and disrupted by the sheer numbers. The plight of the common citizen, and the enormous amount of time it takes for their social conflicts to be resolved, was brought up several times during the event, and is then measured and counted in the ‘Justice in Numbers’ reports. The fact that the launch seminar took place in the middle of the pandemic, in 2021, brought a wealth of examples of legal conflicts regarding emergency benefits that were handled through the digitalisation (and at times, automation) of legal services provided by the courts. This was especially highlighted in the panel ‘Public Service, Innovation and the Pandemic’, which presented local digitalisation initiatives that enabled the judiciary to transition quickly into the digital realm. The reference to regular citizens and their problems was often made in conjunction with the constitutional principle of access to justice. This was then tightly coupled to the motivation for introducing new technologies, a line of argument which featured most heavily during the opening ceremony.

Aside from the ‘numerical problem’, the panels on ‘strategic intelligence units’ and ‘precedent management’ in the judiciary were the ones that most acutely reminded the audience of the dire current situation that has the potential to be solved by AI applications – how repetitive and even malicious mass litigation can be detected and grouped by case management algorithms. In this sense, the problem is not only framed as a question that impacts citizens with legal problems, but also as an internal issue that affects the functioning of the judiciary. The repetitive appeal is a special procedure in which a huge number of cases with the same legal controversy are selected and judged collectively through a small number of paradigm cases, whose judgment ‘cascades down’ to apply to all cases sharing the same relevant controversy. While such cases might be easy enough to track without technological intermediation in a single court district, knowledge of similar issues appearing across different lower jurisdictions has the potential to give appeal courts great gains in celerity by identifying them beforehand – and this capability is represented as an instance where algorithms are already an essential part of solving the numerical problem in the Brazilian judiciary. In this sense, these accounts stress aspects of Ost’s entropy vertigo – where malicious litigation and disrespect for jurisprudence produce a situation in which the legal past is inaccessible as a resource to orient future expectations, and algorithms would have to be employed with the justification of, at the very least, restoring some sense of integrity to a legal system that is already believed to be lost.
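A minimal sketch of the grouping logic behind this claim is given below, with invented case data and a deliberately naive matching rule (an identical controversy tag) standing in for whatever text analysis and legal metadata the actual case management tools rely on.

```python
# Naive illustration of grouping cases that share the same legal controversy,
# as in the repetitive-appeal procedure described above. The matching rule
# (identical controversy tag) is a deliberate simplification; real tools would
# need far richer text analysis and legal metadata.
from collections import defaultdict

cases = [  # hypothetical (case number, controversy tag) pairs
    ("0001", "bank overdraft fees"),
    ("0002", "bank overdraft fees"),
    ("0003", "airline delay compensation"),
    ("0004", "bank overdraft fees"),
]

groups: dict[str, list[str]] = defaultdict(list)
for case_id, controversy in cases:
    groups[controversy].append(case_id)

# Controversies with many pending cases are candidates for a paradigm case
# whose judgment 'cascades down' to the whole group.
for controversy, ids in groups.items():
    if len(ids) >= 3:
        print(f"{controversy}: {len(ids)} cases -> select paradigm case, stay the rest")
```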

But while internal factors in the judiciary, such as the numerical crisis or even the lack of respect for precedent (which will be addressed in more detail later), were accounted for as compelling reasons for the introduction of algorithms, another, more general narrative was commonplace in many speeches across all three days of the launch event: the notion that there is something about society in general that has sped up, demanding that the legal system change to accommodate it. As Judge Vivaldo Pinheiro, president of the Regional Tribunal of Rio Grande do Norte, observed during the opening ceremony:

‘The technological accelerations impact everything, from people’s behaviour down to the most traditional sectors [of society]. We are in the era where the speed of transition takes us to an unprecedented pattern of global change. We are living through a real rupture with the past. We stand in front of a digital society, of the expectations of citizens who are connected and well-informed, of citizens who want more access to information and more agility in solving their conflicts. Even though there is a consensus that the evolution of society is quicker than the evolution of law, it is unquestionable that law needs to walk faster, and reduce this difference.’

Thus Judge Pinheiro vividly describes a perception of dischrony between society and the law, aggravated by technological developments on the outside, which would accelerate society away from law’s relevance as an instrument for solving conflicts.

While the temporal disturbances in the already existing situation make for a compelling argument for the need for change, they do not necessarily point towards the use of algorithms as the solution – one could think of procedural changes, an increase in the hiring of judges and administrative staff, or even legal reforms. There are, however, a few hints as to why algorithms appear as the only possible solution in the continuation of Judge Pinheiro’s speech:

‘[…] after the Constitution of 1988, and 15 years after the reform of the Judiciary with the resources and structure in the Judiciary [the efficiency-driven reforms of 2004], these changes do not represent a substantial alteration in the productivity of the Judiciary, especially since the courts have been through budget restrictions.’

As has been observed in other instances of the Brazilian judiciary (Hedler 2022), economic incentives (or, more precisely, the lack thereof) are one of the biggest driving factors in the turn towards algorithms as a solution. Between budget constraints and political instability, and considering that a large-scale, efficiency-based legal reform of the judiciary already took place in 2004 and did not solve the problem, there seem to be few other alternatives. This might partly explain the remarkable enthusiasm expressed at the launch event: far from a new technology that comes to disrupt and disturb a well-oiled jurisprudential machine, the current situation in Brazil fits neatly into the urgency and need expressed in Pessoa’s quote – they see the necessity of setting sail towards these new technologies, despite their many risks.

6.4 Technological development and the (absence of) risks

While it is understandable that the launch event of a platform is not the forum in which to expose the difficulties and flaws of the project, the perspective of risk makes itself present through the reassurances that are pre-emptively offered by both speeches and regulations:Footnote 15 that is, the negative future outcomes that one wishes to avoid are exactly what becomes the object of legal protection or, in the case of speeches, of reassurances and counter-arguments.

Resolution n. 332/2020 attempts to mitigate the threats of discrimination, opacity, alienation of the decision-maker and data protection issues, mostly by referring to centralised control by the CNJ as the organisation with the mandate to exercise oversight, to host the cloud to which all applications have to be added, and to decide whether a certain algorithm should be discontinued for failing to meet any of the standards proposed in the regulation. There is, then, undoubtedly awareness of, and care about, the risks of algorithms most represented in the literature, including extra protections for the use of algorithms in criminal law and consideration of the many different steps of the development and testing of algorithms. The future of AI in the Brazilian judiciary, in other words, is conceptualised as being under the supervision and control of the CNJ – which, from the point of view of other sectors of society, shifts the possibilities of future harm stemming from algorithms towards the risks that might stem from how the CNJ is going to exercise its powers.

During the launch event webinar, however, the projections of possible harm into the future could be grouped into three different categories: the prospect of automating away the role of the human being (especially the judge); the threat to democracy; and the more vague notion that any change, especially change that is expected to be radical or ‘disruptive’, brings some measure of risk.

First, the one most directly mentioned (and vehemently rejected) as a risk is that of substitution – it would be expected that technological developments controlled and funded by an organisation made up of judges would not automate away their own jobs, and this will be mentioned in more detail later.

The second appearance of the possibility of negative outcomes is the now well-unfurled metaphor of auxiliary judge Fábio Porto, with its vague notion of a dangerous – but very necessary – voyage that is tempered with the promise of great gain. While this was not the only journey-related metaphor – there were mentions of the judiciary needing to take the ‘high-speed train of technological advancement, lest it be left behind’ – it was the one that included within itself this perspective of general risk. The focus on the second part of Pessoa’s poem – the shadow of harm and death that stalks the necessary change – makes evident an awareness that some aspects of harm might be unknown or even unaccounted for until the changes have already happened. While it is in no way stronger than the imperative for change, this general notion of the possibility of future harm at the very least prompts some vigilance rather than unfettered optimism.

The notion of the judiciary itself making mistakes while attempting to do something new was a concern raised a few times in the specific discussions about innovation labs on the second day – but in a context where innovation labs would have the freedom to test out solutions on a smaller scale, thereby minimising damage. Here it is institutional and social change, rather than technological change, that is put at the centre, with technology figuring as a means of assisting experimentation by way of data availability.

However, one of the risks that was only very marginally accounted for was raised, once again, by representatives of the labour unions. Judge Noemia Porto, in her capacity as representative of ANAMATRA, the labour judges’ union, and herself a labour judge, highlighted the possible risks to democracy deriving from the development of the Justice 4.0 Programme. While there were, during the event in 2021, very few mentions of democracy outside the reaffirmation of the constitutional principle of access to justice, she stressed the need for assurance that all systems developed would be compatible with democratic control, lest they become autocratic instruments. Considering that all the legal provisions made to control algorithms were the product of the internal work of the CNJ, and have the very same institution as their main guarantor, there is indeed a lack of external oversight (not to mention democratic control) over the decisions of the CNJ. While the ultimate instance of the Brazilian judiciary, the Supreme Federal Court, could potentially overturn decisions made by the CNJ, the fact that some Supreme Court Justices are also councillors of the CNJ makes for a rather weak set of checks and balances vis-à-vis the CNJ’s power.

While Judge Porto’s risk assessment was isolated at the launch of the project in 2021, there has been a shift in accounts of concerns regarding the maintenance of a democratic society, especially after 8 January 2023, when an anti-democratic mob invaded and vandalised the main seats of all three branches of the Brazilian state, including the Supreme Court. These risks to democracy, however, are not accounted for as resulting from an overreach of the judiciary – as mentioned by the representative of the UNDP in the foreword to the latest Justice 4.0 report:

‘Justice 4.0. is aligned to the objectives of the UNDP in Brazil and the world, as the guarantee of access to Justice, democratic governance and reduction of poverty are the faces of the same coin: democracy is threatened where there is no access to justice.’ (CNJ 2024, 8)

In this sense, the democratic risk of lack of access to justice is accounted for as a more pressing concern in the present than the negative consequences of the lack of democratic control of the judiciary.

6.5 Futures to protect: the centrality of judges and the respect for legal precedent

As mentioned previously, one of the most important negative outcomes against which the CNJ seems to want to protect the development of AI is the substitution and constraining of human judges. This is, however, tempered by two considerations: the innovative potential to change the channels of conflict resolution in society, and the need to protect the primacy of precedent within the Brazilian legal system.

As a constant, however, many of the speakers felt the need to reiterate the central value of human intelligence as an integral part of all algorithmically mediated considerations. These utterances were clearly present in the general reflections of authorities regarding the phenomenon as a whole during the opening on the first day, but also in many concluding remarks from different panels and perspectives, reaffirming the need for human competence and the centrality of ‘human work’. This was especially critical in reaffirming that the moment of decision-making, by a judge, would not be fundamentally changed. Resolution n. 332/2020 explicitly reflects this concern by dedicating three articles to the matter, under the rubric of ‘user control’, where the autonomy of ‘internal users’ – that is, decision-makers – should not be constrained, and no result of an algorithmic operation should be binding (Art. 17). The two subsequent articles include some measures of explainability for both internal and external users, but the accompanying paragraph is very clear that any computer system should allow for the direct supervision of the competent judge. It is perhaps not surprising that a normative and technical framework with such a high degree of participation by judges would have the profession’s centrality and control as a priority in the development of technological tools for the judiciary.

One of the most ‘disruptively innovative’ [sic] groups to speak during the launch event of Justice 4.0 was the ‘intelligence centres’, which took over most of the third day. There, the different speakers greatly emphasised the conflict-prevention potential of this new and improved self-knowledge gained through data processing, and how it will fundamentally change the face of justice. In what was described as a positive experience, an intelligence centre was able to identify a large number of incoming cases in different courts scattered throughout one Brazilian state, and took the initiative to negotiate with other branches of government in order to correct what it saw as a mistake in administrative provisions pertaining to fishing licences. While this type of action does work towards the general goal of solving social conflicts in a timelier manner (and fulfils the imperative of acceleration that has been expressed), it does call into question whether it is within the judiciary’s purview to interfere before being provoked – that is, before being formally petitioned – and outside the structure of individual cases in the first instance or of jurisprudence-settling in the higher courts. Federal Judge Marco Bruno Miranda Clementino dedicated his entire speech to articulating arguments in favour of the judiciary’s role in preventing conflicts, but there are many unanswered questions regarding this preventive activity. How would the activity of these intelligence centres be legally described? Would their decisions be subject to appeal, and how? There are, so far, more questions than laws regulating the activities of court intelligence divisions – but that seems to be no impediment to developing both the theoretical construction and the practical means for a judiciary that acts in such a way.

But when it comes to the centrality of the person of the judge, there was one instance where the notion of limiting a fellow judge’s autonomy was considered a possibility, and even a positive one. All panels on the third day, on precedent management, litigation prevention and the integration strategies of the intelligence centres, decried the fact that many judges contribute to the heavy case-load of the Brazilian justice system by not conforming to the doctrine of precedent. Even though Brazil is ostensibly a Civil Law country, and the judges on the panels admitted to some level of doctrinal controversy over the matter, one of the potential advances that algorithms could bring in the matter of judicial strategy was the possibility of ‘transitioning the Brazilian system from a Civil Law to a Common Law system’.

This startling change is not the result of any legal reform, nor is it explicitly articulated in the Brazilian Constitution; it is, rather, the common understanding of both federal and state judges involved in precedent management, who see dissent as undesirable and as a problem to be solved technologically. While the latest changes to procedural law in Brazil have gone in the direction of imposing more barriers to the broad right to appeal, including giving more weight to the precedent of higher courts, there is no specific legal provision instituting a ‘common law regime’ in Brazil. This particular version of the future would rely on progressively automated decision-making to filter out any cases not in accordance with settled precedent. How such a system would handle other common law concepts, such as distinguishing a case from precedent, remains unclear. As it stands, it is a horizon of expectations in which technical capabilities, rather than democratic procedures, would determine the extent of the right to appeal in Brazil, through a procedure designed and controlled from the perspective of the higher courts.

7 Conclusion – the ‘great discoveries’

When talking about how the legal system will change, the self-descriptions and representations on the part of the CNJ are careful to leave the protagonism – or, more precisely, the moment of legal decision-making – out of the scope of AI. They constantly reassure the audience, and themselves, of the importance of human expertise and of the menial, bureaucratic or repetitive nature of the tasks being subjected to automation, relying mostly on a language of management to refer to the tasks that can be automated. That did not exclude, however, the explicit linkage of the use of algorithms to specific views on the function of the legal system – most notably, the necessity of adapting the legal system to the demands of extra-legal spheres, but also the consolidation of the doctrinal ‘theory of the precedent’, whose requirements concerning the justification of appeals can thereby be more easily enforced. In this sense, just as the end of the voyage is tastefully left out of the narrative of the ‘great discoveries’ of the Portuguese colonisation of the New World, so too is the vulnerability of the system to the discretion of the CNJ itself, which, in the absence of external regulation of AI, concentrates an enormous amount of control over the development of algorithms in the Brazilian judiciary.

As protective as the CNJ is of human decision-making, the introduction of algorithms was framed as part of a specific strategy, a specific view of the ends of justice, and even a specific side in a doctrinal legal debate. It is undeniable, however, that there are also structural forms of regulating the implementation of algorithms that enable a sociotechnical configuration affording a large measure of judicial control. The treatment of judicial machine-readable data, the terms of public–private partnerships that leave no space for proprietary, unauditable opacity, the possibility of refusing engagement with the technology at different phases, as well as the explicit exclusion of predictive capabilities from the criminal law sphere, seem to circumvent many of the potential pitfalls.

The Brazilian context of a numerical crisis in the judiciary, coupled with jurisprudential insubordination in the first instances, creates the notion of a highly volatile present, in which the entropy of vertigo already describes an imbalanced, almost irrational situation that prevents the law – already seen as ‘late’ compared to society – from realising its full potential. In this sense, algorithms are regarded as a temporally rationalising force which, before being ‘revolutionary’ or ‘disruptive’, promises, in some ways, to restore the legal system to some semblance of order.

Further research would be needed, however, to observe how these promises and potentialities manifest in the day-to-day work of a court applying such systems, a dimension that could not be accessed through the analysis of an event featuring the most enthusiastic and knowledgeable judges. Moreover, a comparative study with another court that used a different model – based on contracting the services of a legal tech company, for example – could be fruitful in articulating how different notions of the future are made possible, or plausible, depending on the level of control that judges are perceived to have in a given judiciary.

Finally, the imaginings of the future return with a question about the use of metaphors of the past: what, then, is the point of evoking the Portuguese colonisers when talking about a necessary exposure to unknown risks? Are the Portuguese merely incidental in the evocation of sailors of ages past, innocently following the watery, nautical metaphors from monster-filled oceans to orderly data pipes?

While the intentions behind Judge Porto’s choice of words were not the subject of analysis, there is a specific function that the choice of the Portuguese fulfils which can bring some richness, and perhaps even closure, to this short analysis: the fact that, through their own lens, the Portuguese took a risk and were successful. In a narrative that forms part of the national romance into which the poet Fernando Pessoa is inscribed – and which is consequently reproduced, in a more or less problematised way, in the former colonies – the Great Navigations are seen as a product of Portuguese ingenuity, of technological advancement and of the courage to explore the seas. Similarly, the Data Navigators draw metaphorically on this continuity with the explorers of the past, hoping that the risks they themselves identify, with more or less uncertainty, will pay off with similar success.

While this narrative of the success of the ‘Great Navigations’ can be questioned in terms of its cost – the human suffering caused by colonisation and the slave trade, for example – it seems quite clear that the judges of the CNJ were not metaphorically placing their own citizens in the position of the colonised, nor of the prize to be conquered at the end of the perilous journey. Evoking a glorious past does mean, however, that alternative perspectives tend to be left out.

Competing interests

The author declares no conflict of interest.

Footnotes

1 The Justice 4.0 Programme, funded by the UN Development Programme, has been running since 2020 and has subsidised the research and implementation of studies, methodologies and tools to promote innovation in the Brazilian judiciary, with a focus on the development of ‘disruptive’ technologies (UNDP 2020).

2 See, for example, and non-exhaustively, Hildebrandt 2016; Gowder 2018; Realing 2020; Veale et al. 2023.

3 See, for example, CEPEJ (2018).

4 In the European sphere, the ‘AI Act’, proposed by the European Commission in 2021, classifies AI systems intended to assist in the interpretation and application of the law as high risk. It was approved by the European Parliament in March 2024 (European Parliament 2024).

5 For a more thorough discussion of the centrality of temporality in the discussion of algorithms, see Hedler (2023).

6 See, for example, Gowder (2018) for a mixture of future potential and warnings about negative consequences, and Susser (2019) for a future projection that is more positive overall.

7 See, for example, Pasquale and Cashwell (2018), Hildebrandt (2019), Pasquale (2019) and Ugwudike (2020).

8 The amendment, commonly known as the ‘judiciary reform amendment’, also added the principle of efficiency and celerity in the duration of legal procedures to the list of fundamental rights, gave human rights treaties approved under a qualified legislative procedure the status of constitutional amendments and officially recognised the jurisdiction of the International Criminal Court, among other administrative measures. It is described by Cunha and Franco (2013) as part of a movement of centralisation and rationalisation in the administrative structure of the judiciary.

9 As dictated by Art. 103-B of the Brazilian Constitution.

10 Law n. 11419/2006 regulates the possibility of an electronic procedure, introducing it to the Brazilian legal system. Art. 14 of the law prioritises the use of open-access technology and establishes the goal of facilitating the identification of procedural issues.

11 Auxiliary judges, regulated by Art. XXVIII of the Internal Regiment of the CNJ, are career judges who are requested by the President of the Council to serve for a period of up to two years in a function to be determined by the President. There are currently twenty-one auxiliary judges at the CNJ (Conselho Nacional de Justiça 2021c).

13 According to the CNJ’s latest statistics, 83.8 per cent of Brazilian judges identify as White and only 1.7 per cent as Black (Conselho Nacional de Justiça 2024).

14 While laws made by Brazilian legislative bodies have their preparatory works published, internal resolutions of the CNJ follow a less transparent path and have no public record of their preparatory works.

15 For a more detailed analysis of notions of risk in resolution 330/2020, see Hedler (2024).

References

Alarie, B (2016) The path of the law: Towards legal singularity. University of Toronto Law Journal 66(4), 443–455. https://doi.org/10.3138/utlj.4008.
Bennett Moses, L (2016) Regulating in the face of sociotechnical change. In Brownsword, R, Scotford, E and Yeung, K (eds.), The Oxford Handbook of Law, Regulation and Technology. Oxford University Press, 574–596. https://doi.org/10.1093/oxfordhb/9780199680832.013.49.
Brasil (1988) Constitution of the Federative Republic of Brazil. Documentation and Information Center of the Chamber of Deputies, Brazil.
Brownsword, R (2019) Law disrupted, law re-imagined, law re-invented. Technology and Regulation 2019, 10–30. https://doi.org/10.26116/techreg.2019.002.
canaltjrn (2021) Webinário de Lançamento do Programa Justiça 4.0 (24-02-2021) [video]. Available at https://www.youtube.com/watch?v=aFV2YgeqNhA&t=6752s&ab_channel=canaltjrn (accessed 21 November 2021).
Candido da Silva Franco, I and Cunha, LG (2013) O CNJ e os discursos do Direito e Desenvolvimento. Revista Direito GV 9(2), 515–534. https://doi.org/10.1590/S1808-24322013000200006.
CEPEJ (2018) European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment. Strasbourg: Council of Europe.
CEPEJ (2020) The time parameter within Article 5 ECHR – Towards reasonable timeframes for judicial proceedings. Working Group on Judicial Time Management (CEPEJ-SATURN). Available at https://rm.coe.int/cepej-saturn-2020-2-en-article-5-echr-time-parameter-within-article-5-/16809fea7f (accessed 26 November 2024).
Cobbe, J (2020) Legal singularity and the reflexivity of law. In Deakin, S and Markou, C (eds.), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence. Hart Publishing, 286–290. https://doi.org/10.5040/9781509937097.ch-005.
Conselho Nacional de Justiça (2020a) Plataforma Sinapses – Histórico. Available at https://www.cnj.jus.br/sistemas/plataforma-sinapses/historico/ (accessed 10 November 2021).
Conselho Nacional de Justiça (2020b) Resolução n. 332 of 21/08/2020, DJe/CNJ n. 274. Available at https://atos.cnj.jus.br/atos/detalhar/3429 (accessed 22 November 2021).
Conselho Nacional de Justiça (2021) Justiça 4.0 – Portal CNJ. Available at https://www.cnj.jus.br/tecnologia-da-informacao-e-comunicacao/justica-4-0/ (accessed 22 November 2021).
Conselho Nacional de Justiça (2021a) Justiça em Números 2021 – Sumário Executivo. Brasília. Available at https://www.cnj.jus.br/wp-content/uploads/2021/09/justica-em-numeros-sumario-executivo.pdf (accessed 21 November 2021).
Conselho Nacional de Justiça (2021b) Plataforma Sinapses/Inteligência Artificial – Portal CNJ. Available at https://www.cnj.jus.br/sistemas/plataforma-sinapses/ (accessed 21 November 2021).
Conselho Nacional de Justiça (2021c) Presidência – Portal CNJ. Available at https://www.cnj.jus.br/sobre-o-cnj/presidencia/ (accessed 22 November 2021).
Conselho Nacional de Justiça (2022) Relatório Final Gestão Luiz Fux – Programa Justiça 4.0. Brasília-DF. Available at http://repositorio.unan.edu.ni/2986/1/5624.pdf (accessed 22 November 2021).
Conselho Nacional de Justiça (2024) Cartilha Justiça 4.0. Brasília-DF. Available at https://www.cnj.jus.br/wp-content/uploads/2024/04/cartilha-justica-4-0-2024.pdf (accessed 5 August 2024).
Council of Europe (2022) European Judicial Systems: CEPEJ Evaluation Report Part 1 – Tables, Graphs and Analyses. Strasbourg: Council of Europe.
Delacroix, S (2018) Computer systems fit for the legal profession? Legal Ethics 21(2), 119–135. https://doi.org/10.1080/1460728x.2018.1551702.
Dworkin, R (1986) Law’s Empire. Cambridge, MA: Harvard University Press.
European Parliament (2024) Artificial Intelligence Act: MEPs Adopt Landmark Law. Available at https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law (accessed 26 November 2024).
Gowder, P (2018) Transformative legal technology and the rule of law. University of Toronto Law Journal 68, 82–105. https://doi.org/10.3138/utlj.2017-0047.
Hedler, L (2022) Algorithms, efficiency and the two faces of courts – A case study of the Brazilian Superior Court of Justice (STJ). Soziale Systeme 26(1–2), 370–395. Available at https://www.degruyter.com/document/doi/10.1515/sosys-2021-0014/html.
Hedler, L (2023) Time, Law, and Tech: The Introduction of Algorithms to Courts of Law. PhD thesis, Copenhagen Business School. https://doi.org/10.22439/PHD.17.2023.
Hedler, L (2024) Risk and danger in the introduction of algorithms to courts: A comparative framework between EU and Brazil. Oñati Socio-Legal Series. https://doi.org/10.35295/osls.iisl.1859.
Hernes, T (2022) Organization and Time. Oxford University Press. https://doi.org/10.1093/oso/9780192894380.003.0001.
Hildebrandt, M (2016) Smart Technologies and the End(s) of Law. Edward Elgar.
Hildebrandt, M (2019) Privacy as protection of the incomputable self: From agnostic to agonistic machine learning. Theoretical Inquiries in Law 20(1), 83–121. https://doi.org/10.1515/til-2019-0004.
Orbuch, TL (1997) People’s accounts count: The sociology of accounts. Annual Review of Sociology 23, 455–478. https://doi.org/10.1146/annurev.soc.23.1.455.
Ost, F (2005) O Tempo do Direito, 1st edn. Edusc.
Papagianneas, S (2022) Towards smarter and fairer justice? A review of the Chinese scholarship on building smart courts and automating justice. Journal of Current Chinese Affairs 51(2), 327–347. https://doi.org/10.1177/18681026211021412.
Pasquale, F (2019) A rule of persons, not machines: The limits of legal automation. George Washington Law Review 87(1), 1–55.
Pasquale, F and Cashwell, G (2018) Prediction, persuasion, and the jurisprudence of behaviourism. University of Toronto Law Journal 68, 63–81. https://doi.org/10.3138/utlj.2017-0056.
Pickering, M (2004) Experience as horizon: Koselleck, expectation and historical time. Cultural Studies 18(2–3), 271–289. https://doi.org/10.1080/0950238042000201518.
Plutarch (1917) Lives, Volume V: Agesilaus and Pompey. Pelopidas and Marcellus. Translated by B Perrin. Loeb Classical Library 87. Cambridge, MA: Harvard University Press.
Realing, AD (2020) Courts and artificial intelligence. International Journal for Court Administration 11(2), 1–10. https://doi.org/10.36745/IJCA.343.
Roller, MR (2019) A quality approach to qualitative content analysis: Similarities and differences compared to other qualitative methods. Forum Qualitative Sozialforschung 20(3). https://doi.org/10.17169/fqs-20.3.3385.
Roque, AV (2021) Inteligência Artificial na tomada de decisões judiciais: três premissas básicas. Revista Eletrônica de Direito Processual 22(1), 58–78.
Salomão, LF (2022) Artificial intelligence technology applied to conflict management within the Brazilian judiciary. Available at https://repositorio.fgv.br/items/89149bfb-04df-4260-8a6c-6d5729cd622a (accessed 26 November 2024).
Schreier, M (2012) Qualitative Content Analysis in Practice. London: SAGE Publications. https://doi.org/10.4135/9781529682571.
Shi, C, Sourdin, T and Li, B (2021) The smart court – A new pathway to justice in China? International Journal for Court Administration 12(1), 1–19. https://doi.org/10.36745/ijca.367.
Susser, D (2019) Invisible influence: Artificial intelligence and the ethics of adaptive choice architectures. In AIES 2019 – Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 403–408. https://doi.org/10.1145/3306618.3314286.
Ugwudike, P (2020) Digital prediction technologies in the justice system: The implications of a “race-neutral” agenda. Theoretical Criminology 24(3), 1–20. https://doi.org/10.1177/1362480619896006.
UNDP (2020) BRA/20/015 – Justiça 4.0 – Justiça para todos. Brasília-DF. Available at https://info.undp.org/docs/pdc/Documents/BRA/BRA20015%20Revis%C3%A3o%20Geral%201%20assinada.pdf (accessed 26 November 2024).
Veale, M, Matus, K and Gorwa, R (2023) AI and global governance: Modalities, rationales, tensions. Annual Review of Law and Social Science 19, 1–30. https://doi.org/10.1146/annurev-lawsocsci-020223-040749.
Virilio, P (2012) The Great Accelerator. Cambridge: Polity.