Book contents
- Frontmatter
- Contents
- List of Illustrations
- Introduction: Rediscovering Garfinkel's “Experiments,” Renewing Ethnomethodological Inquiry
- Part I Exegesis
- Part II ‘Experiments’
- Part III Implications
- Postface: “Experiments”—What are we Talking About? A Plea for Conceptual Investigations
- Notes on Contributors
- Index of Names
- Index of Subjects
Chapter Nine - Dealing With Daemons: Trust in Autonomous Systems
Published online by Cambridge University Press: 28 February 2024
Summary
Introduction: Trust and artificial intelligence (AI)
Daemons, or demons, may signify diverse entities. In multitasking computer systems, daemons are programs that run as background processes without direct user supervision. In the entirely different context of thought experiments, philosophers and scientists occasionally imagine demons as agents whose actions pose intellectual challenges or highlight apparent paradoxes. In this chapter, both senses apply to the operations of a set of technologies with automated features. By extension, the argument may also bear on the contemporary discussion of AI and algorithmic systems.
For the past two decades, the field of AI has undergone significant developments. What was once a marginal strand of research in computer science has now been implemented in a wide range of practices, from highly technical expert systems to common and mundane applications. The neurosurgeon segmenting her scan of a brain tumor and the teenager applying a beauty filter to his Snapchat Story can both draw on the power of convolutional neural networks. As another example, although the data differs, uniquely tailored recommendations for medical treatments and suggestions for music may build on similar clustering techniques. The content-agnostic nature of machine learning methods allows for their application across the board.
The profusion of algorithmic systems has been accompanied by concerns about their trustworthiness. Systems that sort, score, recommend, or otherwise inform or make decisions affecting human experience are understood to carry many risks (European Commission 2020). Trustworthy AI has become a research field of its own, with dedicated venues: the ACM FAccT conference, for example, brings together academics and practitioners interested in fairness, accountability and transparency in socio-technical systems. One goal of this research is to make AI trustworthy and explainable (i.e., understandable and predictable by humans). As topics for computer science, these may have a certain novelty; the notions of trust and accountability, however, have a long history in ethnomethodology.
Garfinkel developed his ideas about trust most notably in “A Conception of, and Experiments with, ‘Trust’ as a Condition of Stable Concerted Actions” (Garfinkel 1963), in which he argues that trust is a necessary condition for understanding the events of daily life. According to Watson, though, this study of trust belongs to Garfinkel's early work, and not everyone regards it as “fully fledged EM analysis” (2009, 489).
- Type: Chapter
- Information: The Anthem Companion to Harold Garfinkel, pp. 163–180
- Publisher: Anthem Press
- Print publication year: 2023