
Trait attribution explains human–robot interactions

Published online by Cambridge University Press:  05 April 2023

Yochanan E. Bigman
Affiliation:
The Hebrew University Business School, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel [email protected] https://ybigman.wixsite.com/ybigman
Nicholas Surdel
Affiliation:
Department of Psychology, Yale University, New Haven, CT 06520-8205, USA [email protected] https://www.linkedin.com/in/nsurdel www.fergusonlab.com
Melissa J. Ferguson
Affiliation:
Department of Psychology, Yale University, New Haven, CT 06520-8205, USA [email protected] www.fergusonlab.com

Abstract

Clark and Fischer (C&F) claim that trait attribution has major limitations in explaining human–robot interactions. We argue that the trait attribution approach can explain the three issues posited by C&F. We also argue that the trait attribution approach is parsimonious, as it assumes that the same mechanisms of social cognition that guide human–human interaction also apply to human–robot interaction.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

C&F propose that humans understand social robots as depictions rather than actual agents. This perspective focuses on the psychologically under-explained duality with which people can react to entities as simultaneously (or alternately) agentic and non-agentic. We disagree that the trait attribution approach cannot handle the three questions that C&F identify. We argue that the trait attribution approach, grounded in decades of research on social cognition, can explain these issues, and variance in human–robot interaction more broadly. Moreover, this approach is parsimonious in that it assumes that the same psychological processes that guide human–human interaction also guide human–robot interaction.

Addressing the criticisms of the trait attribution approach

The first limitation according to C&F is that the trait attribution approach cannot explain individual differences in willingness to interact with social robots. However, this can be explained by research demonstrating individual differences in attributing human-like characteristics to other humans (Haslam & Loughnan, 2014) and to nonhumans (Waytz, Cacioppo, & Epley, 2010). The same processes that affect trait attributions to other humans can explain the willingness to interact with robots. For example, people vary in how much humanness (e.g., agency, experience) they attribute to outgroup members (e.g., Krumhuber, Swiderska, Tsankova, Kamble, & Kappas, 2015), pets (e.g., McConnell, Lloyd, & Buchanan, 2017), and fictional characters (e.g., Banks & Bowman, 2016). This variability emerges across individuals, across situations (e.g., Smith et al., 2022), and within interactions (e.g., Haslam, 2006). According to the trait attribution approach, similar individual and situational factors can predict when people respond to a robot in a human-like way.

The second issue that C&F identify is a change in the way people interact with social robots within an interaction. But considerable research and theory in psychology suggest that the way an interaction unfolds is dynamically affected by many factors, such as perceived traits, goals, and abilities (e.g., see Freeman, Stolier, & Brooks, 2020). For example, the accessibility of stereotypes and goals can change over a relatively short amount of time (e.g., Ferguson & Bargh, 2004; Kunda, Davies, Adams, & Spencer, 2002; Melnikoff & Bailey, 2018). The inherently dynamic context of an interaction, with constantly varying types of information being introduced verbally and nonverbally, predicts changing attributions of one's interaction partner, whether human or robot. The same trait attribution principles that guide human interactions can be used to explain the change in perspective in human–robot interactions.

The third unresolved question raised by C&F is selectivity – people notice some of the robots' capabilities but not others. The trait attribution approach aligns with work in social cognition suggesting that people are more sensitive to some kinds of information than others, depending on individual differences and situational factors (e.g., Brewer, 1988; Fiske & Neuberg, 1990). For example, people positively evaluate competence in others, unless the other is immoral (Landy, Piazza, & Goodwin, 2016). Although much is not yet known about precisely which aspects of an interaction or agent are considered relevant and when, we argue that these basic principles of psychology can explain the selectivity of trait inferences about social robots. Note that the social depictions approach likewise cannot explain exactly which aspects will be influential when, and for whom.

Advantages and limitations of trait attribution

In addressing these three points, we suggest that the trait attribution approach can explain phenomena that C&F argue are inconsistent with it. In showing how the same principles that explain human–human interaction also explain human–robot interaction, we argue that trait attribution offers a parsimonious account of human–robot interactions.

The trait attribution approach is broad; different research lines focus on different types of attributions to explain human–robot interactions. Anthropomorphism, for example, affects how much people trust robots such as self-driving cars (Waytz, Heafner, & Epley, 2014). People's reliance on algorithms for a given task hinges on the perception that algorithms have human-like emotions (Castelo, Bos, & Lehmann, 2019). People's aversion to algorithms making moral decisions depends on the mind they attribute to them (Bigman & Gray, 2018), and their resistance to medical algorithms stems from attributing to them an inability to appreciate human uniqueness (Longoni, Bonezzi, & Morewedge, 2019). Similarly, people's diminished outrage at discrimination by algorithms results from perceiving algorithms as less prejudiced than humans (Bigman, Gray, Waytz, Arnestad, & Wilson, 2022). The attribution approach to studying human–robot interactions thus extends beyond the attribution of traits alone.

One possible limitation of the trait attribution approach is that it cannot explain the apparently intentional duality of some human–robot interactions. That is, people sometimes seem to knowingly suspend their disbelief and cycle between treating a robot as agentic versus non-agentic. Although the trait approach can potentially explain going back and forth in attributions of agency, it does not address the role of intentionality in (and awareness of) doing so, and more research on the importance of this characteristic would be helpful. Moreover, it is an open question when people will interact with robots as actual social agents rather than as depictions of social agents. We agree that people might sometimes interact with social robots as depictions, but that does not mean they always do so. One untested possibility is that the more mind a robot is perceived as having, the less likely people are to treat it as a depiction. As robots become increasingly agentic, the trait attribution approach parsimoniously explains interactions in which people treat social robots as "real" agents rather than depictions: loving an artificial agent (Oliver, 2020), thinking an AI is sentient when it displays sophisticated language and conversation (Allyn, 2022), and feeling bad about punishing robots (Bartneck, Verbunt, Mubin, & Al Mahmud, 2007).

C&F assume that the social cognition of humans is unique and cannot be applied to nonhuman entities. We argue that social cognition is broader: humans as targets of social cognition share that space with other entities, even if they occupy a special place in it. By our account, the difference between the social cognition of humans and the social cognition of robots is mostly quantitative, not qualitative.

Financial support

This work was supported by Office of Naval Research grant N00014-19-1-2299, "Modeling and planning with human impressions of robots."

Competing interest

None.

References

Allyn, B. (2022). The Google engineer who sees company's AI as "sentient" thinks a chatbot has a soul. NPR. https://www.npr.org/2022/06/16/1105552435/google-ai-sentient
Banks, J., & Bowman, N. D. (2016). Emotion, anthropomorphism, realism, control: Validation of a merged metric for player–avatar interaction (PAX). Computers in Human Behavior, 54, 215–223. https://doi.org/10.1016/j.chb.2015.07.030
Bartneck, C., Verbunt, M., Mubin, O., & Al Mahmud, A. (2007). To kill a mockingbird robot. In Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, Washington, DC (pp. 81–87). https://doi.org/10.1145/1228716.1228728
Bigman, Y., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
Bigman, Y., Gray, K., Waytz, A., Arnestad, M., & Wilson, D. (2022). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General. Advance online publication. http://dx.doi.org/10.1037/xge0001250
Brewer, M. B. (1988). A dual process model of impression formation. In T. K. Srull & R. S. Wyer (Eds.), Advances in social cognition (Vol. 1, pp. 1–36). Erlbaum.
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
Ferguson, M. J., & Bargh, J. A. (2004). Liking is for doing: The effects of goal pursuit on automatic evaluation. Journal of Personality and Social Psychology, 87(5), 557–572. https://doi.org/10.1037/0022-3514.87.5.557
Fiske, S. T., & Neuberg, S. L. (1990). A continuum of impression formation, from category-based to individuating processes: Influences of information and motivation on attention and interpretation. In Advances in experimental social psychology (Vol. 23, pp. 1–74). Elsevier. https://doi.org/10.1016/S0065-2601(08)60317-2
Freeman, J. B., Stolier, R. M., & Brooks, J. A. (2020). Dynamic interactive theory as a domain-general account of social perception. In Advances in experimental social psychology (Vol. 61, pp. 237–287). Elsevier. https://doi.org/10.1016/bs.aesp.2019.09.005
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252–264. https://doi.org/10.1207/s15327957pspr1003_4
Haslam, N., & Loughnan, S. (2014). Dehumanization and infrahumanization. Annual Review of Psychology, 65(1), 399–423. https://doi.org/10.1146/annurev-psych-010213-115045
Krumhuber, E. G., Swiderska, A., Tsankova, E., Kamble, S. V., & Kappas, A. (2015). Real or artificial? Intergroup biases in mind perception in a cross-cultural perspective. PLoS ONE, 10(9), e0137840. https://doi.org/10.1371/journal.pone.0137840
Kunda, Z., Davies, P. G., Adams, B. D., & Spencer, S. J. (2002). The dynamic time course of stereotype activation: Activation, dissipation, and resurrection. Journal of Personality and Social Psychology, 82(3), 283–299.
Landy, J. F., Piazza, J., & Goodwin, G. P. (2016). When it's bad to be friendly and smart: The desirability of sociability and competence depends on morality. Personality and Social Psychology Bulletin, 42(9), 1272–1290. https://doi.org/10.1177/0146167216655984
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
McConnell, A. R., Lloyd, E. P., & Buchanan, T. M. (2017). Animals as friends: Social psychological implications of human–pet relationship. In M. Hojjat & A. Moyer (Eds.), The psychology of friendship (pp. 157–174). Oxford University Press.
Melnikoff, D. E., & Bailey, A. H. (2018). Preferences for moral vs. immoral traits in others are conditional. Proceedings of the National Academy of Sciences, 115(4), E592–E600.
Oliver, M. (2020). Inside the life of people married to robots. Buzzworthy. https://www.buzzworthy.com/meet-men-married-robots/
Smith, J. M., Pasek, M. H., Vishkin, A., Johnson, K. A., Shackleford, C., & Ginges, J. (2022). Thinking about God discourages dehumanization of religious outgroups. Journal of Experimental Psychology: General, 151(10), 2586–2603. https://doi.org/10.1037/xge0001206
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219–232. https://doi.org/10.1177/1745691610369336
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005