
Cues trigger depiction schemas for robots, as they do for human identities

Published online by Cambridge University Press: 05 April 2023

Eliott K. Doyle
Affiliation:
Department of Psychology, University of Oregon, Eugene, OR 97403-1227, USA
Sara D. Hodges
Affiliation:
Department of Psychology, University of Oregon, Eugene, OR 97403-1227, USA; https://psychology.uoregon.edu/profile/sdhodges

Abstract

Clark and Fischer's three levels of depiction of social robots can be conceptualized as cognitive schemas. When interacting with social robots, humans shift between schemas similarly to how they shift between identity category schemas when interacting with other humans. Perception of mind, context cues, and individual differences underlie perceptions of which level of depiction is most situationally relevant.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Social environments are complex. To navigate them, we use simplified scaffolding information, called schemas, built from our past experiences (Macrae & Cloutier, 2009). Often, schemas focus on social identity categories, and contain stereotypes – simple, categorical, automatically arising predictions about what someone will be like (Hammond & Cimpian, 2017) – about those identities. Any given individual has many identities, each of which might be differently salient from context to context, and so different assumptions about the same individual will come to mind more readily depending on the situation (Oyserman, 2015; Shih, Pittinsky, & Ambady, 1999).

We propose that "robot" is an identity category comprising three subschemas, delineated in Clark and Fischer's (C&F's) work as three levels of depiction. Each subschema evokes different types of behavior, but which one is evoked as most relevant can fluctuate, just as one's perception of another person's most relevant identity might. If this is true, then people vary in when and whether they approach robots as characters, depictions of social agents, or pieces of machinery for the same reasons that stereotypes about any kind of identity are variably activated.

One underlying impetus for switching between these schemas, we contend, is the degree to which people perceive the robot as having a mind. Human beings assume things about each other's minds in order to communicate effectively – a task that is vital for social interaction, but very complex. Despite understandable objections about the overuse of stereotypes, particularly negative stereotypes of minority groups, stereotypes facilitate communication by providing quick, and often accurate, predictions about what someone else might be thinking (Hodges & Kezer, 2021; Lewis, Hodges, Laurent, Srivastava, & Biancarosa, 2012). If a social robot is perceived as having a mind, people are more likely to interact with the robot as a character rather than as a machine, with "robot(ic)" being simply one of the stereotypes activated to describe it as a social entity, much like "teenager" or "doctor."

Robots are often not perceived as having minds (Gray, Gray, & Wegner, 2007), and in these instances social stereotypes do not come into play. However, some things can lead people to ascribe more mind to a robot, such as the robot behaving in unexpected ways, the robot possessing human-like features, or the perceiver feeling particularly lonely (Epley, Waytz, & Cacioppo, 2007; Waytz, Gray, Epley, & Wegner, 2010). Robots' attempts to copy human appearance too faithfully can be unsettling (Gray, Knobe, Sheskin, Bloom, & Barrett, 2011; Gray & Wegner, 2012), but characteristics that allow the robot to express the things humans notice and communicate to each other – like attention and emotion – can facilitate perception of mind (Duffy, 2003). Given the right cues, anthropomorphism can occur automatically when the perceiver is presented with a situation in which treating a robot as a social agent is contextually appropriate (Kim & Sundar, 2012).

The characteristics of the human perceivers, therefore, matter in addition to the features of the social robot itself. Qualities like willingness to suspend disbelief (Duffy & Zawieska, 2012; Muckler, 2017) and tendency to anthropomorphize (Waytz, Cacioppo, & Epley, 2010) vary between people, and may make people more or less inclined to treat a robot like a character or like a machine. As delineated in C&F's example of the three human interactants encountering Smooth the robot, some people will readily engage socially with the same robot that others will not. This tendency to anthropomorphize partly reflects stable individual differences, but past experience and mindset likely play a role, too: People who are distracted by novel aspects of a social robot, or focused on its non-humanness, may be impeded in depicting the robot as a character – and, by extension, in applying certain stereotypes that guide particular kinds of interactions with it. However, these effects are not unique to perceptions of robots. For example, encountering other humans in heavily scripted roles (e.g., flight attendant, nightclub bouncer) may lead us to evoke prop-like schemas that preclude character depictions. Cues that prompt thoughts of body counts or bodily actions may similarly interfere with character depiction and evoke more mechanical schemas (Mooijman & Stern, 2016; Small & Loewenstein, 2003).

Social robots might have difficulty being perceived as genuinely plausible interaction partners in part because the features of the robot fail to activate character-level stereotypes, leaving the robot stuck at the depiction or machinery level. Alternatively, some observers might be unwilling or unable to suspend their disbelief in order to interact with the robot like a character (which would, in turn, create a social situation in which others who might otherwise be willing to treat the robot anthropomorphically are made more self-conscious by their peers' reluctance). Finally, even robots depicted as characters might evoke stereotypes of robots being less socially capable than humans (Chan et al., 2020) because, for example, their language is less fluid. As we further explore the factors that promote the willingness and ease with which humans can interact with robots as social agents, we should also heed the ways robots mirror aspects of some human agents with whom interactions are problematic.

Our suggestion that the three levels of depiction that C&F outline provide three schemas for robots, each of which can be activated to bring to mind different stereotypes, offers a psychological explanation for how people are able to switch their focus fluidly between machinery, depiction, and character. As C&F note, humans have extensive experience engaging with depictions, which should help us construe social robots as depictions of social agents. Increasingly sophisticated robots should trigger stereotypes of a variety of social agents, providing humans with further cognitive scaffolding to guide and elaborate interactions with robots. Additionally, humans have experience engaging with what C&F call "nonstandard" (i.e., not real) characters, from whom they seek to satisfy a number of very "human" yearnings (e.g., for companionship, inspiration, perspective; see Gabriel & Young, 2011; Myers & Hodges, 2009; Taylor, Hodges, & Kohányi, 2003), suggesting a flexible, inclusive, and creative ability to connect with a wide range of social agents.

Financial support

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Competing interest

None.

References

Chan, L., Doyle, K., McElfresh, D., Conitzer, V., Dickerson, J. P., Schaich Borg, J., & Sinnott-Armstrong, W. (2020). Artificial intelligence: Measuring influence of AI "assessments" on moral decision-making. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, United States (pp. 214–220). https://doi.org/10.1145/3375627.3375870
Duffy, B. R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3–4), 177–190. https://doi.org/10.1016/S0921-8890(02)00374-3
Duffy, B. R., & Zawieska, K. (2012). Suspension of disbelief in social robotics. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Wuhan, China (pp. 484–489). IEEE.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.
Gabriel, S., & Young, A. F. (2011). Becoming a vampire without being bitten: The narrative collective-assimilation hypothesis. Psychological Science, 22(8), 990–994. https://doi.org/10.1177/0956797611415541
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
Gray, K., Knobe, J., Sheskin, M., Bloom, P., & Barrett, L. F. (2011). More than a body: Mind perception and the nature of objectification. Journal of Personality and Social Psychology, 101(6), 1207–1220. https://doi.org/10.1037/a0025883
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130. https://doi.org/10.1016/j.cognition.2012.06.007
Hammond, M. D., & Cimpian, A. (2017). Investigating the cognitive structure of stereotypes: Generic beliefs about groups predict social judgments better than statistical beliefs. Journal of Experimental Psychology: General, 146(5), 607–614. https://doi.org/10.1037/xge0000297
Hodges, S. D., & Kezer, M. (2021). It is hard to read minds without words: Cues to use to achieve empathic accuracy. Journal of Intelligence, 9(2), 27. https://doi.org/10.3390/jintelligence9020027
Kim, Y., & Sundar, S. S. (2012). Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior, 28(1), 241–250.
Lewis, K. L., Hodges, S. D., Laurent, S. M., Srivastava, S., & Biancarosa, G. (2012). Reading between the minds: The use of stereotypes in empathic accuracy. Psychological Science, 23(9), 1040–1046. https://doi.org/10.1177/0956797612439719
Macrae, C. N., & Cloutier, J. (2009). A matter of design: Priming context and person perception. Journal of Experimental Social Psychology, 45(4), 1012–1015. https://doi.org/10.1016/j.jesp.2009.04.021
Mooijman, M., & Stern, C. (2016). When perspective taking creates a motivational threat: The case of conservatism, same-sex sexual behavior, and anti-gay attitudes. Personality and Social Psychology Bulletin, 42(6), 738–754.
Muckler, V. C. (2017). Exploring suspension of disbelief during simulation-based learning. Clinical Simulation in Nursing, 13(1), 3–9. https://doi.org/10.1016/j.ecns.2016.09.004
Myers, M. W., & Hodges, S. D. (2009). Making it up and making do: Simulation, imagination and empathic accuracy. In Markman, K., Klein, W., & Suhr, J. (Eds.), The handbook of imagination and mental simulation (pp. 281–294). Psychology Press.
Oyserman, D. (2015). Identity-based motivation. In Scott, R., & Kosslyn, S. (Eds.), Emerging trends in the social sciences (pp. 1–11). John Wiley & Sons. https://doi.org/10.1002/9781118900772.etrds0171
Shih, M., Pittinsky, T. L., & Ambady, N. (1999). Stereotype susceptibility: Identity salience and shifts in quantitative performance. Psychological Science, 10(1), 80–83. https://doi.org/10.1111/1467-9280.00111
Small, D. A., & Loewenstein, G. (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26(1), 5–16.
Taylor, M., Hodges, S. D., & Kohányi, A. (2003). The illusion of independent agency: Do adult fiction writers experience their characters as having minds of their own? Imagination, Cognition and Personality, 22(4), 361–380. https://doi.org/10.2190/FTG3-Q9T0-7U26-5Q5X
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219–232.
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388.