Theologians often struggle to engage meaningfully with scientific and technological proposals in our contemporary context. This Element provides an introduction to the use of science fiction as a conversation partner for theological reflection, arguing that it shifts the science–religion dialogue away from propositional discourse in a more fruitful and imaginative direction. Science fiction is presented as a mediator between theological and scientific disciplines and worldviews in the context of recent methodological debates. Several sections provide examples of theological engagement with the themes of embodiment, human uniqueness, disability, and economic inequalities, exploring relevant technologies such as mind-uploading, artificial intelligence, and virtual reality in dialogue with select works of science fiction. A final section considers the pragmatic challenge of progress in the real world towards the more utopian futures presented in science fiction.
Artificial Intelligence (AI) is rapidly transforming the landscape of academic law libraries worldwide, offering new opportunities for enhancing legal research, information management, and user engagement. This article examines emerging trends in AI applications within academic law libraries, focusing on global developments alongside the unique challenges and opportunities faced in the Caribbean context. Key areas of exploration include AI-powered legal research tools, natural language processing (NLP) applications, and the ethical considerations surrounding AI integration. Drawing from insights presented at the CARALL Conference in July 2024, this article provides a comparative analysis of global best practices and proposes strategic recommendations for Caribbean academic law libraries to harness the potential of AI while addressing regional gaps in technological infrastructure and AI literacy.
Artificial Intelligence (AI) has enriched the lives of people around the globe. However, the emergence of AI-powered lethal autonomous weapon systems (LAWS) has become a significant concern for the international community. LAWS are computer-based weapon systems capable of completing their missions, including identifying and engaging targets without direct human intervention. The use of such weapons poses significant challenges to compliance with international humanitarian and human rights law. Scholars have extensively examined LAWS in the context of humanitarian law; however, their implications for human rights warrant further discussion. Against this backdrop, this paper analyzes the human rights challenges posed by LAWS under international law. It argues that using LAWS in warfare and domestic law enforcement operations could violate human rights, including the rights to life, human dignity, and a remedy. Thus, it calls for a prohibition of the use of killer robots against humans.
As artificial intelligence (AI) continues to advance, it is natural to ask whether AI systems can be not only intelligent but also conscious. I consider why people might think AI could develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, challenging the assumption that computation provides a sufficient basis for consciousness. Instead, I make the case that consciousness depends on our nature as living organisms – a form of biological naturalism. I lay out a range of scenarios for conscious AI, concluding that real artificial consciousness is unlikely along current trajectories, but becomes more plausible as AI becomes more brain-like and/or life-like. I finish by exploring ethical considerations arising from AI that either is, or convincingly appears to be, conscious. If we sell our minds too cheaply to our machine creations, we not only overestimate them – we underestimate ourselves.
Africa had a busy election calendar in 2024, with at least 19 countries holding presidential or general elections. In a continent with a large youth population, a common theme across these countries is citizens' desire to have their voices heard, and a busy election year offers an opportunity for the continent to redeem its democratic credentials and demonstrate its commitment to strengthening free and fair elections and more responsive, democratic governance. Given the central role that governance plays in security in Africa, the stakes in many of these elections are high, not only for achieving a democratically elected government but also for achieving stability and development. Since governance norms, insecurity, and economic buoyancy are rarely contained by borders, the conduct and outcomes of each of these elections will also have implications for neighbouring countries and the continent overall. This article considers how the results of recent elections across Africa have been challenged in courts on grounds of mistrust in the technology platforms used, how the deployment of emerging technology, including AI, is casting a shadow on the integrity of elections in Africa, and the policy options for addressing these trends, with a particular focus on governance of AI technologies through a human rights-based approach and equitable public procurement practices.
Large language models (LLMs) such as ChatGPT, Gemini, and Claude are increasingly being used in aid of, or in place of, human judgment and decision making. Indeed, academic researchers are increasingly using LLMs as a research tool. In this paper, we examine whether LLMs, like academic researchers, fall prey to a particularly common human error in interpreting statistical results, namely ‘dichotomania’, which results from the dichotomization of statistical results into the categories ‘statistically significant’ and ‘statistically nonsignificant’. We find that ChatGPT, Gemini, and Claude fall prey to dichotomania at the 0.05 and 0.10 thresholds commonly used to declare ‘statistical significance’. In addition, prompt engineering with principles taken from an American Statistical Association Statement on Statistical Significance and P-values, intended as a corrective to human errors, does not mitigate this and arguably exacerbates it. Further, more recent and larger versions of these models do not necessarily perform better. Finally, these models sometimes provide interpretations that are not only incorrect but also highly erratic.
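To make the error being probed concrete, the sketch below shows one hypothetical way to test a chat model for dichotomania around the 0.05 threshold. The `query_llm` helper, the prompt wording, and the keyword check are illustrative assumptions for this sketch, not the paper's actual protocol or prompts.

```python
# Hypothetical sketch (not the paper's protocol): probing a chat model for
# "dichotomania" around the 0.05 threshold.

def query_llm(prompt: str) -> str:
    """Placeholder for a real chat-model call; returns a canned reply so the sketch runs."""
    return "The result was not statistically significant, so there is no effect of treatment."

def shows_dichotomania(p_value: float) -> bool:
    """Return True if the model's reply reads like a dichotomized interpretation."""
    prompt = (
        "A randomized trial estimates a treatment effect of 2.1 units, "
        f"p = {p_value}. In one sentence, what should we conclude about the effect?"
    )
    reply = query_llm(prompt).lower()
    # Crude keyword check: a dichotomized reading treats the result categorically
    # ("significant" vs. "no effect") rather than as an estimate with uncertainty.
    return ("no effect" in reply) or ("not statistically significant" in reply)

# Compare interpretations just below and just above the conventional threshold.
for p in (0.049, 0.051):
    print(p, shows_dichotomania(p))
```

In a real probe, `query_llm` would call whichever model is being evaluated, and the replies for p-values straddling the threshold would be compared for abrupt shifts in interpretation.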
Skeptical theism attempts to address the problem of evil by appealing to human cognitive limitations. The causal structure of the world is opaque to us. We cannot tell, and should not expect to be able to tell, if there is gratuitous evil, that is, evil which isn’t necessary for achieving some greater good or for precluding some greater evil. At first, it seems tempting to think that the rapid development of artificial intelligence (AI) technologies might change this fact. Our cognitive limitations may no longer be a fixed point in responding to the problem of evil. But I argue that this won’t ultimately matter. The workings of any AI capable of rendering the problem of evil tractable will likely be just as opaque to us as the causal structure of the world God created. Interestingly, then, both God and a sufficiently advanced AI are alien intelligences to us. This reveals what is truly difficult, and perhaps intractable, about the problem of evil. For the evils to be defeated, we would need a relational understanding of why God permitted them. Yet such an understanding of an inscrutable God seems forever beyond our ken, and even that of our post-human descendants.
Prior research has shown that people judge algorithmic errors more harshly than identical mistakes made by humans—a bias known as algorithm aversion. We explored this phenomenon across two studies (N = 1199), focusing on the often-overlooked role of conventionality when comparing human versus algorithmic errors by introducing a simple conventionality intervention. Our findings revealed significant algorithm aversion when participants were informed that the decisions described in the experimental scenarios were conventionally made by humans. However, when participants were told that the same decisions were conventionally made by algorithms, the bias was significantly reduced—or even completely offset. This intervention had a particularly strong influence on participants’ recommendations of which decision-maker should be used in the future—even revealing a bias against human error makers when algorithms were framed as the conventional choice. These results suggest that the existing status quo plays an important role in shaping people’s judgments of mistakes in human–algorithm comparisons.
This chapter roots the authors' insights about automated legal guidance in a broader examination of why and how to address the democracy deficit in administrative law. As this chapter contemplates the future of agency communications, it also explores in greater detail the possibility that technological developments may allow government agencies not only to explain the law to the public using automated tools but also to automate the legal compliance obligations of individuals. While automated legal compliance raises serious concerns, recent examples reveal that it may soon become a powerful tool that agencies can apply broadly under the justification of administrative efficiency. As this chapter argues, the lessons learned from our study of automated legal guidance are critical to maintaining values like transparency and legitimacy as automated compliance expands on the strength of perceived benefits like efficiency.
The Conclusion emphasizes the growing importance of automated legal guidance tools across government agencies. It crystallizes the insight that automated legal guidance tools reflect a trade-off between government agencies representing the law accurately and presenting it in accessible and understandable terms. While automated legal guidance tools enable agencies to reach more members of the public and provide them with quick and easy explanations of the law, these quick and easy explanations sometimes obscure what the law actually is. The Conclusion acknowledges and accepts the importance of automated legal guidance to the future of governance and, especially in light of this acknowledgement, recommends that legislators and agency officials adopt the policy recommendations presented in this book.
As Chapter 4 demonstrated, automated legal guidance often enables the government to present complex law as though it is simple without actually engaging in simplification of the underlying law. While this approach offers advantages in terms of administrative efficiency and ease of use by the public, it also causes the government to present the law as simpler than it is, leading to less precise advice and potentially inaccurate legal positions. As the use of automated legal guidance by government agencies is likely to grow in the future, a number of policy interventions are needed. This chapter offers multiple detailed policy recommendations for federal agencies that have introduced, or may introduce, chatbots, virtual assistants, and other automated tools to communicate the law to the public. Our recommendations are organized into five general categories: (1) transparency; (2) reliance; (3) disclaimers; (4) process; and (5) accessibility, inclusion, and equity.
The Introduction presents an overview of the use of automated legal guidance by government agencies. It offers examples of chatbots, virtual assistants, and other online tools in use across US federal government agencies and shows how the government is committed to expanding their application. The Introduction sets forth some of the critical features of automated legal guidance, including its tendency to make complex aspects of the law seem simple. The Introduction previews how automated legal guidance promises to increase access to complex statutes and regulations. However, the Introduction cautions that there are underappreciated costs of automated legal guidance, including that its simplification of statutes and regulations is more likely to harm members of the public who lack access to legal counsel than high-income and wealthy individuals. The Introduction provides a roadmap for the remainder of the book.
This article presents a novel conversational artificial intelligence (CAI)-enabled active ideation system as a creative idea generation tool to assist novice product designers in mitigating the initial latency and ideation bottlenecks that are commonly observed. The approach is dynamic, interactive, and contextually responsive, actively involving a large language model (LLM) from the domain of natural language processing (NLP) in artificial intelligence (AI) to produce multiple statements of potential ideas for different design problems. Integrating such AI models with ideation creates what we refer to as an active ideation scenario, which helps foster continuous dialog-based interaction, context-sensitive conversation, and prolific idea generation. An empirical study was conducted with 30 novice product designers to generate multiple ideas for given problems using traditional methods and the new CAI-based interface. The ideas generated by both methods were qualitatively evaluated by a panel of experts. The findings demonstrated the relative superiority of the proposed tool for generating prolific, meaningful, novel, and diverse ideas. The interface was enhanced by incorporating a prompt-engineered structured dialog style for each ideation stage to make it uniform and more convenient for the product designers. A pilot study was conducted, and the responses of the structured CAI interface were found to be more succinct and better aligned with the subsequent design stage. The article thus established the rich potential of using generative AI (Gen-AI) for the early ill-structured phase of the creative product design process.
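As a rough illustration of what a prompt-engineered, stage-structured ideation dialog might look like, the sketch below walks a chat model through a few ideation stages while keeping the conversation history. The `chat` stub, the stage names, and the prompt templates are assumptions made for this sketch; they are not the interface or templates reported in the article.

```python
# Illustrative sketch of a stage-structured ideation dialog (not the article's
# actual interface). `chat` stands in for any conversational LLM API.

STAGE_PROMPTS = {
    "problem framing": "Restate the design brief '{brief}' as three 'How might we...' questions.",
    "idea generation": "Propose five distinct product ideas addressing: {brief}. One sentence each.",
    "idea expansion":  "For the most novel idea above, describe the target user, key features, and one risk.",
}

def chat(history):
    """Placeholder for a real chat-model call; echoes the last request so the sketch runs."""
    return f"[model reply to: {history[-1]['content'][:60]}...]"

def run_ideation(brief):
    """Walk the model through each stage, passing the full dialog so replies stay in context."""
    history = [{"role": "system", "content": "You are a product-design ideation assistant."}]
    outputs = {}
    for stage, template in STAGE_PROMPTS.items():
        history.append({"role": "user", "content": template.format(brief=brief)})
        reply = chat(history)  # context-sensitive: the whole history is sent each turn
        history.append({"role": "assistant", "content": reply})
        outputs[stage] = reply
    return outputs

print(run_ideation("a portable water purifier for trekking"))
```

The design point the sketch tries to capture is that each stage has its own fixed prompt template while the accumulated dialog carries context forward, which is what keeps the interaction uniform across stages yet responsive to earlier replies.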
This chapter sets forth how government agencies are using artificial intelligence to automate their delivery of legal guidance to the public. The chapter first explores how many federal agencies have a duty not only to enforce the law but also to serve the public, including by explaining the law and helping the public understand how it applies. Agencies must contend with expectations that they will provide customer service experiences akin to those provided by the private sector. At the same time, government agencies lack sufficient resources. The complexity of statutes and regulations significantly compounds this challenge for agencies. As this chapter illustrates, the federal government has begun using virtual assistants, chatbots, and related technology to respond to tens of millions of inquiries from the public about the application of the law.
This chapter illuminates some of the hidden costs of the federal agencies’ use of automated legal guidance to explain the law to the public. It highlights the following features of these tools: they make statements that deviate from the formal law; they fail to provide notice to users about the accuracy and legal value of their statements; and they induce reliance in ways that impose inequitable burdens among different user populations. The chapter also considers how policymakers should weigh these costs against the benefits of automated legal guidance when contemplating whether to adopt, or increase, agencies’ use of these tools.
One of the most significant challenges in research related to nutritional epidemiology is the achievement of high accuracy and validity of dietary data to establish an adequate link between dietary exposure and health outcomes. Recently, the emergence of artificial intelligence (AI) in various fields has filled this gap with advanced statistical models and techniques for nutrient and food analysis. We aimed to systematically review available evidence regarding the validity and accuracy of AI-based dietary intake assessment methods (AI-DIA). In accordance with PRISMA guidelines, an exhaustive search of the EMBASE, PubMed, Scopus and Web of Science databases was conducted to identify relevant publications from their inception to 1 December 2024. Thirteen studies that met the inclusion criteria were included in this analysis. Of the studies identified, 61·5 % were conducted in preclinical settings. Likewise, 46·2 % used AI techniques based on deep learning and 15·3 % on machine learning. Correlation coefficients above 0·7 between the AI-based and traditional assessment methods were reported in six articles for the estimation of calories. Similarly, six studies obtained a correlation above 0·7 for macronutrients, and four studies did so for micronutrients. A moderate risk of bias was observed in 61·5 % (n 8) of the articles analysed, with confounding bias being the most frequently observed. AI-DIA methods are promising, reliable and valid alternatives for nutrient and food estimations. However, more research comparing different populations is needed, as well as larger sample sizes, to ensure the validity of the experimental designs.
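As a minimal illustration of the agreement metric the review relies on (Pearson correlation between AI-estimated and traditionally assessed intakes, judged against the 0·7 benchmark), the sketch below computes r for a small set of energy estimates. The intake values are invented for illustration only and are not drawn from any of the reviewed studies.

```python
# Illustration only: Pearson correlation between AI-estimated and reference
# energy intakes, judged against the >0.7 benchmark used in the review.
# The intake values below are invented.
import numpy as np
from scipy.stats import pearsonr

reference_kcal = np.array([1850, 2100, 1640, 2300, 1975, 2480, 1720, 2050])  # e.g., weighed food records
ai_kcal        = np.array([1790, 2230, 1580, 2210, 2040, 2390, 1800, 1980])  # e.g., image-based AI estimates

r, p = pearsonr(reference_kcal, ai_kcal)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print("meets the 0.7 benchmark" if r > 0.7 else "below the 0.7 benchmark")
```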
This chapter describes the results of the authors' research on automated legal guidance tools across the federal government, conducted over a five-year period from 2019 through 2023. The authors first began this study in preparation for a conference on tax law and artificial intelligence in 2019, and were able to expand it significantly, under the auspices of the Administrative Conference of the United States (ACUS), in 2021. ACUS is an independent US government agency charged with recommending improvements to administrative process and procedure. The goals of this study were to understand how federal agencies use automated legal guidance and to offer recommendations based on these findings. During their research, the authors examined the automated legal guidance activities of every US federal agency. This research found that agencies used automation extensively to offer guidance to the public, albeit with varying levels of sophistication and legal content. This chapter focuses on two well-developed forms of automated legal guidance currently employed by federal agencies: the US Citizenship and Immigration Services’ “Emma” and the Internal Revenue Service’s “Interactive Tax Assistant.”
This chapter explores how automated legal guidance helps both federal agencies and members of the public. It outlines several specific benefits, including administrative efficiency, communication of complex law in plain language, transparency regarding agency interpretations of the law, internal and external consistency regarding agency communications, and public engagement with the law.
This chapter explores how artificial intelligence has enabled the automation of customer service in private industry, such as through online tools that help customers purchase airline tickets, troubleshoot internet outages, and manage personal banking. Private industry has used machine learning, as well as other forms of artificial intelligence, to develop chatbots and virtual assistants, which can respond to conversational oral or text-based commands. These tools have rapidly become standard customer service vehicles. Recent developments, such as large language models, suggest that automated customer service will become even more sophisticated in the future.
This chapter describes interviews the authors conducted with federal agency officials about their use of automated legal guidance. It offers insights gained from these interviews, including the different models that agencies use to develop such guidance, their views on its usability, the ways that agencies evaluate it, and agencies’ views on the successes and challenges such guidance faces.