Binding paradox in artificial social realities
Published online by Cambridge University Press: 05 April 2023
Abstract
The relation between communication partners is crucial for the success of their interaction. This is also true for artificial social agents. However, the more we engage in artificial relationships, the more we are forced to regulate and control them. I refer to this as binding paradox. This deserves attention during technological developments and requires professional supervision during ongoing interactions.
- Type: Open Peer Commentary
- Copyright © The Author(s), 2023. Published by Cambridge University Press
References
Bente, G., Rüggenberg, S., Krämer, N. C., & Eschenburg, F. (2008). Avatar-mediated networking: Increasing social presence and interpersonal trust in net-based collaborations. Human Communication Research, 34(2), 287–318.
Bernieri, F. J., Gillis, J. S., Davis, J. M., & Grahe, J. E. (1996). Dyad rapport and the accuracy of its judgment across situations: A lens model analysis. Journal of Personality and Social Psychology, 71(1), 110–129.
Freud, S. (1982). Bemerkungen zur Übertragungsliebe [Observations on transference-love; original 1914]. Studienausgabe Bd. I (pp. 217–230). Fischer.
Kasap, Z., & Magnenat-Thalmann, N. (2007). Intelligent virtual humans with autonomy and personality: State-of-the-art. Intelligent Decision Technologies, 1, 3–15.
Lem, S. (2014). Summa technologiae [original 1964]. University of Minnesota Press (section 6).
Lemley, M. A., & Volokh, E. (2018). Law, virtual reality, and augmented reality. University of Pennsylvania Law Review, 166, 1051–1138.
Madary, M., & Metzinger, T. K. (2016). Real virtuality: A code of ethical conduct. Recommendations for good scientific practice and the consumers of VR-technology. Frontiers in Robotics and AI, 3, 1–23.
Marloth, M., Chandler, J., & Vogeley, K. (2020). Psychiatric interventions in virtual reality: Why we need an ethical framework. Cambridge Quarterly of Healthcare Ethics, 29(4), 574–584.
Mead, G. H. (1963). Mind, self, and society [original 1934]. University of Chicago Press.
Pan, X., & Hamilton, A. F. (2018). Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape. British Journal of Psychology, 109, 395–417.
Pfeiffer, U., Schilbach, L., Timmermans, B., Kuzmanovic, B., Georgescu, A., Bente, G., & Vogeley, K. (2014). Why we interact: On the functional role of the striatum in the subjective experience of social interaction. NeuroImage, 101C, 124–137.
Ramirez, E. J., & LaBarge, S. (2018). Real moral problems in the use of virtual reality. Ethics and Information Technology, 20, 249–263.
Swartout, W., Gratch, J., Hill, R., Hovy, E., Marsella, S., Rickel, J., & Traum, D. (2006). Toward virtual humans. AI Magazine, 27, 96–108.
Tickle-Degnen, L., & Rosenthal, R. (1990). The nature of rapport and its nonverbal correlates. Psychological Inquiry, 1(4), 285–293.
Vogel, D. H. V., Jording, M., Esser, C., Weiss, P. H., & Vogeley, K. (2021). Temporal binding is enhanced in social contexts. Psychonomic Bulletin & Review, 28, 1545–1555.
Vogeley, K., & Bente, G. (2010). "Artificial humans": Psychology and neuroscience perspectives on embodiment and nonverbal communication. Neural Networks, 23, 1077–1090.
Watzlawick, P., Beavin, J. H., & Jackson, D. D. (1967). Pragmatics of human communication: A study of interactional patterns, pathologies and paradoxes. Norton.
Complementary to the technological development of artificial social agents, we must at the same time answer the question of how to understand and conceptualize such agents in order to communicate with them successfully. This is the well-chosen focus of the target article by Clark and Fischer (C&F). They provide many examples of the different realizations of such agents (target article, sect. 3.2). That the relationship between two communication partners is crucial has been emphasized since the beginnings of modern social psychology (Watzlawick, Beavin, & Jackson, 1967).
In communication, we exchange information by conveying meaningful messages. According to symbolic interactionism, we interact on the basis of interpretable meanings that develop during the interaction between persons and can change over time (Blumer, 1969; Carey, 2009; Mead, 1963). However, content can only be transmitted if the communication partner is experienced as reliable and trustworthy. The "connectedness" or "attunement" between both partners is also referred to as rapport, based on mutual attentiveness, the reciprocal exchange of positivity cues, and the coordination of nonverbal behaviors (Bernieri, Gillis, Davis, & Grahe, 1996; Tickle-Degnen & Rosenthal, 1990). The relationship is the primary aspect of communication, while the content is secondary. For this reason, we tend to constantly interpret even unintended signals as meaningful: "we cannot not communicate" (Watzlawick et al., 1967). These processes of communication do not always and necessarily occur unconsciously and automatically, and their full understanding requires thoughtful consideration (C&F, target article, sect. 7).
To have a similar experience with artificial social agents, we are forced to treat them as if they were human or "as if they were actual agents" (C&F, target article, long abstract). We can then "respond socially and naturally" and refer to the "media equation" (C&F, target article, sect. 2.1). It is one of the earliest insights in the study of fiction that we temporarily accept fiction as reality. This "willing suspension of disbelief" was already proposed by the English critic and poet Samuel Taylor Coleridge (1772–1834) (Coleridge, 1817/1907). This early concept already contains the key components of "willingness" and "changes of perspective" that allow us to treat an artificial social actor as human at one time and as an artifact at another (C&F, target article, sect. 2.4, para. 2). This temporary suspension of disbelief depends on different dimensions (C&F, target article, sect. 3.2). It can be suggested that the more we are confronted with artificial social agents who appear and behave as "persons," the more pronounced the suspension is (Kasap & Magnenat-Thalmann, 2007; Swartout et al., 2006; Vogeley & Bente, 2010). Even the mere instruction to interact with another person, combined with plausible gaze behavior of a virtual character, leads persons to believe that they are interacting with real humans (Pfeiffer et al., 2014; Vogel et al., 2021).
These socially enriched realities create experiences of "presence" or "social presence" (Bente, Rüggenberg, Krämer, & Eschenburg, 2008); the other can become a "social hallucination" (Madary & Metzinger, 2016). This implies that this powerful technology is capable of blurring the boundaries between reality and virtuality, much like the classical thought experiments of "brains in a vat" (Putnam, 1981), the "experience machine" (Nozick, 1974), or the invention of "phantomology" and "phantomatics" (Lem, 1964/2014). In a completely transformed virtual life world, we would no longer be able to distinguish between simulation and reality (Lem, 1964/2014).
It is the tension between real and artificial social agents that creates the "social artifact puzzle" that frames the target article: We communicate and interact with putative social agents even though we know they are artifacts (C&F, sects. 1 and 10). This raises ethical concerns (Marloth, Chandler, & Vogeley, 2020). Blurred boundaries have the potential to be stressful (Pan & Hamilton, 2018) or even to become traumatic (Ramirez & LaBarge, 2018). Legally, too, the foreseeable infliction of harm or even trauma can raise challenging questions regarding responsibility (Lemley & Volokh, 2018), which are addressed by conceptualizing "authorities" and asking for "principals" behind the agents (C&F, target article, sect. 7.3). The more realistic artificial social agents become, and the more seductive it is to interact with them, the more we need to be reminded of their artificial nature and the more we need to control and regulate the depth of such a relation.
Probably the most reflective domain dealing with a very similar conflict is the practice of psychotherapy. Effective psychotherapy requires that the psychotherapist and the patient enter into a relationship, yet the psychotherapist must maintain a professional distance and cannot simultaneously become a close friend or even a lover of the patient. Sigmund Freud already commented on the case of a patient falling in love with the therapist as "transference love" ("Übertragungsliebe"; Freud, 1914/1982). When it occurs, it requires very careful interaction in which the established relationship must be controlled to avoid going "too deep."
In conclusion, the relationship between humans and artificial social agents requires careful thought and reflection about its nature, as outlined in many important aspects of C&F's target article. Some level of rapport must be established in order to interact effectively with an artificial human, but the human partner must be protected from confusion about the quality and depth of the initiated relationship and is thereby forced to control it. This is what I call the "binding paradox." It is related to the "social artifact puzzle" (C&F, target article, sect. 1), but extends it by conceptualizing this tension in the relationship between communication partners as more universal, including also human–human relations, and by opening an ethical debate. There is only a small corridor within which we can establish a functionally relevant relationship without being affected by an illusory relationship that can become potentially harmful. This must be considered in any kind of empirical research on, or technological development of, artificial social realities. During ongoing communication, it requires careful monitoring of people communicating with artificial agents, much like psychotherapy, which requires supervision.
Financial support
This work was supported by the European Commission (FET Proactive project consortium "VIRTUALTIMES," project ID 824128), the German Research Foundation (Collaborative Research Centre CRC 1252 "Prominence in Language," project ID 281511265), and the German Ministry of Education and Research (SIMSUB: Simulating (inter)subjectivity, project ID 01GP2215).
Competing interest
None.