Book contents
- Frontmatter
- Dedication
- Contents
- List of figures
- List of web links
- Notes on contributors
- Acknowledgements
- Introduction: Human Perception and Digital Information Technologies
- Part I Animation and Consciousness
- Part II Affective Experience and Expression
- Part III Data Visualization: Space and Time
- Part IV Image Formation and Embodiment
- Index
9 - Deepfake Face-Swap Animations and Affect
Published online by Cambridge University Press: 18 December 2024
Summary
‘Deepfakes’ is the popular term for faked videos (synthetic audiovisual media productions) produced using deep learning techniques (artificial neural networks). According to Deeptrace, an Amsterdam-based company that provides tools to detect deepfakes, in 2019 there were 14,608 deepfake videos online, which drew more than 134 million views (Ajder et al, 2019: 7). There is no doubt that the phenomenon generates fear and resistance. In November 2018 the Guardian published an article with the headline, ‘You thought fake news was bad? Deep fakes are where truth goes to die’ (Schwartz, 2018). The American government is working to ‘combat the spread of misinformation through restrictions on deep-fake video alteration technology’ (Clarke, 2019).
Recent, rapidly accelerating developments in the field of artificial neural networks have made it possible for ordinary users to play with face-swaps easily. The Zao app, currently available only in China, enables its users to digitally swap faces with movie actors. The seamless integration and growing accessibility of face-swap technologies generate fear of their abuse: for example, in connection with fake news, advertisements, or hate crime strategies. The fear is that epistemic claims (‘He said this’, ‘She did that’) made in a medium historically associated with authenticity may affect people differently from claims made in other media (writing, audio, still images).
Deepfakes are not primarily used for fake news, however, but to create non-consensual pornography. According to Deeptrace, 96 per cent of the deepfake videos online have sexual content (Ajder et al, 2019: 7). American researcher Danielle Citron, an expert on internet hate crimes, states: ‘Deepfake technology is being weaponized against women by inserting their faces into porn. It is terrifying, embarrassing, demeaning, and silencing. Deepfake sex videos say to individuals that their bodies are not their own and can make it difficult to stay online, get or keep a job, and feel safe’ (Ajder et al, 2019: 6).
Just as it is illegal to publish written falsehoods about people, the United States Congress holds that every person should be protected from being falsely represented visually (Clarke, 2019). Scientists, lawmakers, and journalists seem to concur in their criticism of deepfakes, but as Danielle Citron stated during a congressional hearing, ‘A ban on deep fake technology would not be desirable. Digital manipulation is not inherently problematic. There are pro-social uses of the technology.’
- Type: Chapter
- Information: Human Perception and Digital Information Technologies: Animation, the Body, and Affect, pp. 195–212
- Publisher: Bristol University Press
- Print publication year: 2024