Complementary to the technological development of artificial social agents, the question of how we can understand and conceptualize them in order to successfully communicate must be answered at the same time. This is the well-chosen focus of the target article by Clark and Fischer (C&F). They provide many examples for the different realizations of such agents (target article, sect. 3.2). That the relationship between two communication partners is crucial has been emphasized since the beginnings of modern social psychology (Watzlawick, Beavin, & Jackson, Reference Watzlawick, Beavin and Jackson1967).
In communication, we exchange information by conveying meaningful messages. According to symbolic interactionism, we interact on the basis of interpretable meanings that develop during the interaction between persons and can change over time (Blumer, Reference Blumer1969; Carey, Reference Carey2009; Mead, Reference Mead1963). However, content can only be transmitted if the communication partner is experienced as reliable and trustworthy. The “connectedness” or “attunement” between both partners is also referred to as rapport, based on mutual attentiveness, reciprocal exchange of positivity cues, and coordination of nonverbal behaviors (Bernieri et al., Reference Bernieri, Gillis, Davis and Grahe1996; Tickle-Degnen & Rosenthal, Reference Tickle-Degnen and Rosenthal1990). The relationship is the primary aspect of communication, while the content is secondary. For this reason, we tend to interpret even unintended signals as meaningful: “we cannot not communicate” (Watzlawick et al., Reference Watzlawick, Beavin and Jackson1967). These processes of communication do not always and necessarily occur unconsciously and automatically; their full understanding requires thoughtful consideration (C&F, target article, sect. 7).
To have a similar experience with artificial social agents, we are forced to treat them as if they were human or “as if they were actual agents” (C&F, target article, long abstract). We can then “respond socially and naturally” and refer to the “media equation” (C&F, target article, sect. 2.1). It is one of the earliest insights in the study of fiction that we temporarily accept fiction as reality. This “willing suspension of disbelief” was already proposed by Samuel Taylor Coleridge (1772–1834), the English critic and poet (Coleridge, 1817/Reference Coleridge1907). This early concept already contains the key components of “willingness” and “changes of perspective” that allow us to treat an artificial social actor as human at one time and as an artifact at another (C&F, target article, sect. 2.4, para. 2). This temporary suspension of disbelief depends on different dimensions (C&F, target article, sect. 3.2). It can be suggested that the more we are confronted with artificial social agents who appear and behave as “persons,” the more pronounced the suspension is (Kasap & Magnenat-Thalmann, Reference Kasap and Magnenat-Thalmann2007; Swartout et al., Reference Swartout, Gratch, Hill, Hovy, Marsella, Rickel and Traum2006; Vogeley & Bente, Reference Vogeley and Bente2010). Even the mere instruction that one is interacting with another person, together with plausible gaze behavior of a virtual character, leads persons to believe that they are interacting with real humans (Pfeiffer et al., Reference Pfeiffer, Schilbach, Timmermans, Kuzmanovic, Georgescu, Bente and Vogeley2014; Vogel et al., Reference Vogel, Jording, Esser, Weiss and Vogeley2021).
These socially enriched realities create experiences of “presence” or “social presence” (Bente et al., Reference Bente, Rüggenberg, Krämer and Eschenburg2008); the other can become a “social hallucination” (Madary & Metzinger, Reference Madary and Metzinger2016). This implies that this powerful technology is capable of blurring the boundaries between reality and virtuality, much like the classical thought experiments of “brains in a vat” (Putnam, Reference Putnam1981), the “experience machine” (Nozick, Reference Nozick1974), or the invention of “phantomology” and “phantomatics” (Lem, 1964/Reference Lem2014). In a completely transformed virtual life world, we would no longer be able to distinguish between simulation and reality (Lem, 1964/Reference Lem2014).
It is the tension between real and artificial social agents that creates the “social artifact puzzle” that frames the target article: We communicate and interact with putative social agents even though we know they are artifacts (C&F, sects. 1 and 10). This raises ethical concerns (Marloth et al., Reference Marloth, Chandler and Vogeley2020). Blurred boundaries bear the potential to be stressful (Pan & Hamilton, Reference Pan and Hamilton2018) or even traumatic (Ramirez & LaBarge, Reference Ramirez and LaBarge2018). Legally, too, the foreseeable infliction of harm or even trauma can raise challenging questions regarding responsibility (Lemley & Volokh, Reference Lemley and Volokh2018), which are addressed by conceptualizing “authorities” and asking for “principals” behind the agents (C&F, target article, sect. 7.3). The more realistic artificial social agents become and the more seductive it is to interact with them, the more we need to be reminded of their artificial nature and the more we need to control and regulate the depth of such a relation.
Probably the most reflective area dealing with a very similar conflict is the practice of psychotherapy. Effective psychotherapy requires that the psychotherapist and the patient enter into a relationship, yet the psychotherapist must maintain a professional distance and cannot simultaneously become a close friend or even a lover of the patient. Even Sigmund Freud commented on the case of a patient falling in love with the therapist as “transference love” (“Übertragungsliebe”; Freud, 1914/Reference Freud1982). When it occurs, it requires a very careful interaction in which the relationship established must be controlled to avoid going “too deep.”
In conclusion, the relationship between humans and artificial social agents requires careful thought and reflection about its nature, as outlined in many important aspects of C&F's target article. Some level of rapport must be established in order to interact effectively with an artificial human, yet the human partner must be protected from confusion about the quality and depth of the initiated relationship while remaining obliged to keep it under control. This is what I call the “binding paradox.” It is related to the “social artifact puzzle” (C&F, target article, sect. 1), but extends it by conceptualizing this tension in the relationship between communication partners as more universal, including also human–human relations, and by opening an ethical debate. There is only a small corridor within which we can establish a functionally relevant relationship without being affected by an illusionary relationship that can become potentially harmful. This must be considered in any kind of empirical research on, or technological development of, artificial social realities. During ongoing communication, it requires careful monitoring of people communicating with artificial agents, much like psychotherapy, which requires supervision.
Financial support
This work was supported by the European Commission (FET Proactive project consortium “VIRTUALTIMES,” project ID 824128), the German Research Foundation (Collaborative Research Centre CRC 1252 “Prominence in Language,” project ID 281511265), and the German Ministry of Research and Education (SIMSUB: Simulating (inter)subjectivity, project ID 01GP2215).
Competing interest
None.