Book contents
- Frontmatter
- Dedication
- Contents
- About the Author
- Acknowledgements
- One Introduction: Trust Issues
- Two Trustification: Extracting Legitimacy
- Three State: Measuring Authority
- Four Corporate: Managing Risk
- Five Research: Setting Terms
- Six Media: Telling Stories
- Seven Case Study: COVID-19 Tracing Apps
- Eight Case Study: Tech for Good
- Nine Case Study: Trusting Faces
- Ten Conclusion: False Trade-Offs
- References
- Index
Nine - Case Study: Trusting Faces
Published online by Cambridge University Press: 24 January 2024
Summary
The face, as a site of interpersonal interaction often linked to identity, has become a prime target for quantification, datafication and automated assessment by algorithmic systems such as AI. Facial recognition has seen a massive rise in use across high-stakes contexts like policing and education. These systems are particularly used to assess trust and its various proxies. Such technologies update historical practices like physiognomy, which assigned values for personal qualities based on the shape and measurements of people's faces (or brains, or other body parts).
Parallels between AI and physiognomy are ‘both disturbing and instructive’ (Stark and Hutson, 2022: 954). Both attempt to read into the body characteristics that are deemed innate, and therefore inescapable. Measuring for these attributes performs them as roles and identities for specific individuals and groups. They include characteristics like ‘employability, educability, attentiveness, criminality, emotional stability, sexuality, compatibility, and trustworthiness’ (2022: 954). Trustworthiness was a key target for physiognomy, which attempted to embed social biases within the narratives of a scientific discipline, performing legitimacy for discrimination. The parallels also extend to the view of physiognomy as progressive, which aligns with innovation narratives that hype up facial recognition ‘solutions’.
However, despite the sheer number of tools and academic papers being produced, serious questions remain about significant flaws in their methodologies and assumptions. These include a frequent failure to define what concepts like trustworthiness actually are, part of a chain of proxies conflating terms and ideas (Spanton and Guest, 2022). This critique highlights the ethical risks inherent even in ‘demonstration’-type academic papers, aligning with what Abeba Birhane describes as ‘cheap AI’ rooted in ‘prejudiced assumptions masquerading as objective enquiry’ (2021b: 45). Faulty demonstrations of the possibilities of machine learning technology are easily picked up and misinterpreted by the media, feeding the myths of AI. But problematic papers also enable the same biased and decontextualized judgements to be easily transferred to policing, insurance, education and other areas where quantifying trust is directly, materially harmful.
Saving faces
The way facial recognition technologies are developed and deployed perpetuates specific injustices. When they work, they purposefully entrench power. When they do not work, which is often, ‘the harm inflicted by these products is a direct consequence of shoddy craftsmanship, unacknowledged technical limits, and poor design’ (Raji, 2021: 57).
- Type: Chapter
- Information: Mistrust Issues: How Technology Discourses Quantify, Extract and Legitimize Inequalities, pp. 130-140
- Publisher: Bristol University Press
- Print publication year: 2023