
People treat social robots as real social agents

Published online by Cambridge University Press:  05 April 2023

Alexander Eng
Affiliation:
Department of Management & Organization, National University of Singapore, Singapore 119245, Singapore. [email protected]; https://bizfaculty.nus.edu.sg/faculty-details/?profId=452
Yam Kai Chi
Affiliation:
Department of Management & Organization, National University of Singapore, Singapore 119245, Singapore. [email protected]; https://bizfaculty.nus.edu.sg/faculty-details/?profId=452
Kurt Gray
Affiliation:
Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-8100, USA [email protected]; https://www.kurtjgray.com

Abstract

When people interact with social robots, they treat them as real social agents. The idea of robots as "depictions" is fun to consider, but when people are confronted with embodied entities that move and talk – whether humans or robots – they interact with them as authentic social agents with minds, not as mere representations.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Haunted houses employ actors who pretend to be werewolves and zombies. Visitors wander through the darkness, listening for creatures lying in wait, and then scream as the actors reach out to touch them. If you asked visitors before and after touring the attraction whether they thought the werewolves were real, they would laugh and say no. They understand that the actors are just "depicting social agents." But how much does the concept of "depictions of social agents" matter when a werewolf is chasing them through dark corridors? Very little. Research on construal level theory suggests that people can differentiate depictions from reality when psychological distance is high, but when people are directly experiencing a situation, depictions feel real – and are treated as real (Liberman, Trope, & Stephan, 2007; Trope & Liberman, 2010).

Research finds that – in real life – people likewise treat robots as actual social agents, not as mere depictions of social agents. This suggests that the idea of "depictions of social agents" may not be useful for understanding people's actual interactions with social robots. Thinking about "depictions" in the pages of journal articles is an interesting exercise, but the empirical evidence tells a different story when people are immersed in interactions with robots.

To illustrate the idea of depictions, Clark and Fischer (C&F) use the example of movies, distinguishing between agents (actors) and depictions (the characters they play), like Leonardo DiCaprio and Kate Winslet playing Jack Dawson and Rose DeWitt Bukater in Titanic. But movies are a poor analogy for robots, because robots are embodied social agents, not characters on the other side of a screen. Embodiment – having a physical presence – fundamentally changes how we interact with agents. Like the werewolf in the haunted house, embodiment makes them real agents.

Importantly, even with movie characters, people often fail to distinguish between actors and the characters they play. In an analysis of the hit TV series Breaking Bad, researchers found that the fictional character "Skyler is often merged with Anna Gunn, the actor playing her…[people] do not always make a clear distinction between Gunn and the fictional character of Skyler, who become a single entity" (Hermes & Stoete, 2019, p. 412).

If people distinguished "robots as social depictions" from "robots as social agents" in real life, then they would have no trouble turning robots off, even when the robots pleaded for their lives. But people do have trouble. In Bartneck and Hu's (2008) adaptation of Milgram's obedience study, experimenters instructed participants to switch off an anthropomorphized robotic cat with which they had been interacting, informing them that this would wipe its memory and personality. The robotic cat pleaded with participants, saying "You are not really going to switch me off, are you?" In contrast to C&F's theorizing, people bargained with the robot, saying things like, "No! I really have to do it now, I'm sorry!" or "But it has to be done!" People treated the robot as a true social agent, not as a mere painting of one.

More evidence that people treat robots as real social agents comes from Qin et al.'s (2022) replication of the classic Asch conformity experiment, which used a social robot confederate. As with human confederates, people bowed to the social pressure of a robot that misread the length of a line.

The distinction between depictions and social agents becomes even more insubstantial in practice as robots become more realistic: The more lifelike robots become, the more we treat them as social agents in their own right, not as mere depictions. For example, Zhao and Malle (2022) found that people respond to novel stimuli (human-like robots) in the same way they respond to familiar stimuli (humans) when the two closely resemble one another (Guttman & Kalish, 1956; Shepard, 1987). Likewise, Yam et al. (2022) found that people were more likely to act spitefully toward robot supervisors who delivered negative feedback when those robots were more human-like. There is no reason to retaliate against a mere social depiction.

People – in real life, with real-life robots – treat robots as real agents, not as social depictions. C&F are correct, however, that people see differences between robots and humans. But the difference is not about depiction; it is about mind. Mind perception theory (Gray, Gray, & Wegner, 2007) suggests that we perceive the minds of social agents along two distinct dimensions: agency (thinking and doing) and experience (feeling and sensing). We perceive humans as having high agency and high experience, animals as having low agency but high experience, and social robots as having moderate agency and low-to-moderate experience (Gray & Wegner, 2010).

These perceptions of mind are important – especially in real life. Perceptions of mind underlie whether people treat robots as legitimate moral decision makers (Bigman, Waytz, Alterovitz, & Gray, 2019) – a machine with the capacity for agency and experience is seen as more qualified to make life-and-death medical and military decisions (Bigman & Gray, 2018).

Changing perceptions of mind also changes how people interact with social robots. Reducing a social robot's perceived capacity for experiencing feelings decreases the uncanniness of human-like robots (Yam, Bigman, & Gray, 2021b). On the flip side, a study at the world's only all-robot-staffed hotel found that increasing a service robot's perceived capacity for experiencing feelings makes people like service robots more – and forgive them more readily after service failures (Yam et al., 2021a).

Robots are not human beings, but neither are they mere depictions of social agents. Instead, they are seen as real social agents, especially when people interact with them directly. The reality of in-person "depictions" is something that designers of both robots and haunted houses understand; scholars need to understand it too.

Competing interest

None.

References

Bartneck, C., & Hu, J. (2008). Exploring the abuse of robots. Interaction Studies, 9(3), 415–433. https://doi.org/10.1075/is.9.3.04bar
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding robots responsible: The elements of machine morality. Trends in Cognitive Sciences, 23(5), 365–368. https://doi.org/10.1016/j.tics.2019.02.008
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
Gray, K., & Wegner, D. M. (2010). Blaming God for our pain: Human suffering and the divine mind. Personality and Social Psychology Review, 14(1), 7–16. https://doi.org/10.1177/1088868309350299
Guttman, N., & Kalish, H. I. (1956). Discriminability and stimulus generalization. Journal of Experimental Psychology, 51(1), 79–88.
Hermes, J., & Stoete, L. (2019). Hating Skyler White: Audience engagement, gender politics and celebrity culture. Celebrity Studies, 10(3), 411–426. https://doi.org/10.1080/19392397.2019.1630155
Liberman, N., Trope, Y., & Stephan, E. (2007). Psychological distance. In A. W. Kruglanski & E. T. Higgins (Eds.), Social psychology: Handbook of basic principles (pp. 353–381). Guilford Press.
Qin, X., Chen, C., Yam, K. C., Cao, L., Li, W., Guan, J., … Lin, Y. (2022). Adults still can't resist: A social robot can induce normative conformity. Computers in Human Behavior, 127, 107041.
Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237(4820), 1317–1323.
Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440–463. https://doi.org/10.1037/a0018963
Yam, K. C., Bigman, Y., & Gray, K. (2021b). Reducing the uncanny valley by dehumanizing humanoid robots. Computers in Human Behavior, 125, 106945. https://doi.org/10.1016/j.chb.2021.106945
Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021a). Robots at work: People prefer – and forgive – service robots with perceived feelings. Journal of Applied Psychology, 106(10), 1557–1572. https://doi.org/10.1037/apl0000834
Yam, K. C., Goh, E., Fehr, R., Lee, R., Soh, H., & Gray, K. (2022). When your boss is a robot: Workers are more spiteful to robot supervisors that seem more human. Journal of Experimental Social Psychology, 102, 104360. https://doi.org/10.1016/j.jesp.2022.104360
Zhao, X., & Malle, B. F. (2022). Spontaneous perspective taking toward robots: The unique impact of humanlike appearance. Cognition, 224, 105076. https://doi.org/10.1016/j.cognition.2022.105076