
Anthropomorphism, not depiction, explains interaction with social robots

Published online by Cambridge University Press:  05 April 2023

Dawson Petersen
Affiliation:
Linguistics Program, University of South Carolina, Columbia, SC 29208, USA [email protected]
Amit Almor
Affiliation:
Department of Psychology, Linguistics Program, Institute for Mind and Brain, Barnwell College, University of South Carolina, Columbia, SC 29208, USA [email protected] https://sc.edu/study/colleges_schools/artsandsciences/psychology/our_people/directory/almor_amit.php

Abstract

We question the role given to depiction in Clark and Fischer's account of interaction with social robots. Specifically, we argue that positing a unique cognitive process for handling depiction is evolutionarily implausible and empirically redundant because the phenomena it is intended to explain are not limited to depictive contexts and are better explained by reference to more general cognitive processes.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

We applaud Clark and Fischer (henceforth C&F) for calling attention to the timely question of how interaction with social robots can be nested in a broader framework. However, we question the central role given to depiction in their account. We argue that positing a specific cognitive mechanism for handling depictions is problematic a priori from an evolutionary perspective. Depictions are not naturally occurring phenomena which we have evolved to accommodate. Rather, they are human creations and could never have been created if we did not already possess a general mechanism to interpret them. We further wish to question the relevance of depiction as an explanatory factor regarding interactions with social robots. As we will show, many of the puzzles C&F discuss are not limited to depictive contexts and, therefore, are better explained in general terms. Specifically, we argue that (1) the type of artifact-directed social behavior performed in depictive contexts is present in other instances of anthropomorphism, (2) the levels of representation involved in depiction are present in other kinds of symbolic representation, and (3) the dissociations between social attributions and social interactions which occur with social robots are present in general social cognition. Overall, we argue that the issue of evolutionary plausibility, along with the requirements of parsimony, favors more general accounts over a depiction-specific one.

The first puzzle that C&F address concerns social behavior directed toward robots. We argue that this is merely one example in the broader category of artifact-directed social behavior. While C&F's explanation seems to be sufficient for robots, it fails to explain the full category. Following Airenti (Reference Airenti2018), we note that there are clear instances of nonhuman entities eliciting social responses even when the target does not meaningfully resemble a human. For example, when a car engine fails to start, it is not uncommon for the would-be driver to engage in begging, chastising, or other social behaviors directed toward the car. It is difficult to argue that the car is a depiction of a social agent. Rather, Airenti argues that the interactive situation itself, in this case noncooperation, is sufficient to provoke a social response. We are not convinced that there are important qualitative differences between social interactions with robots and those with broken cars that would require distinct explanations. In these two instances of anthropomorphic artifact-directed social behavior, it makes little difference whether the target artifact is a depiction or not. The robot's status as a depiction, while it may increase the frequency of anthropomorphization, is not necessary for it to be anthropomorphized. As such, we suggest that depiction does not play a central causal role in social interactions with robots.

The second puzzle that C&F discuss involves levels of representation. We argue that the three levels of representation that C&F propose are not unique to depictions, but rather are present in widely varying cognitive contexts. In the philosophical literature, a distinction is drawn between icons (which are analogous to depictions, representing non-arbitrarily via correspondences between the signifier and the signified) and symbols, like words, which represent arbitrarily (Burks, Reference Burks1949; de Saussure, Reference de Saussure1983). The ability to be represented at multiple levels is by no means limited to icons. Symbols likewise can be conceived of as physical objects (sounds, marks on a page), be mentioned as representative objects bearing meaning, or be used to express their meanings without any acknowledgment of the signs themselves. The presence of these levels of representation in general symbolic reasoning calls into question the relevance of a depiction-specific framework to explain phenomena which are present in non-depictive contexts.

The third puzzle involves the relationship between social beliefs and social interactions. C&F consider interactions with social robots to be fundamentally different from interactions with humans because (in general) we believe humans and not robots to be conscious social agents. As a result, C&F make a great deal of the fact that social robots can be treated alternatively as objects and agents while failing to recognize that the same is true of human beings. We can just as quickly attribute human behavior (falling to the ground and twitching) to a physical cause (a seizure) as we can attribute Robovie's behavior (turning off) to a physical cause (a dead battery). Equally, anyone who has worked in the service industry will relate to Smooth's experience of being treated as a mere piece of machinery by customers. These facts about human interaction undermine the assumption that social behavior relies on beliefs about agency and consciousness. While we may intuitively believe that humans are conscious and robots are not, there is little evidence that this belief greatly affects our willingness to engage socially with either. If we abandon the assumption that social beliefs determine social interactions, much of the difficulty dissolves, and there is no longer a need for a bright line to distinguish depictions, non-depictive anthropomorphization, and ordinary social interaction. As with the previous two puzzles, the phenomenon that C&F seek to explain with depictions is present in non-depictive contexts, and a more general explanation is required. Given that this puzzle, like the previous two, is solvable at a general level, it is not clear to us what role a theory of depictions has to play in cognitive psychology as a whole.

In summary, we argue that the phenomena that C&F describe are not qualitatively distinguishable from other non-depictive phenomena. They are not indicative of a unique depictive cognitive process, but are simply an anthropomorphic generalization of more basic representative processes already used in social cognition. While C&F's theory is coherent and well-articulated, it is evolutionarily unmotivated, because a unique process for depictive interpretation could not arise unless depictions already existed, and it is unnecessary, because the puzzles that C&F address require (and in many cases already possess) more general explanations. The broader phenomenon of anthropomorphism, in contrast, is still vastly underexplored and lacks a fully articulated theory. We suggest that future efforts should be focused on providing and testing theories of anthropomorphism, not of social robots or depictions specifically.

Acknowledgments

We thank Anne Bezuidenhout and Brett Sherman for their comments on an earlier draft of this commentary.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Airenti, G. (2018). The development of anthropomorphism in interaction: Intersubjectivity, imagination, and theory of mind. Frontiers in Psychology, 9, 2136.
Burks, A. W. (1949). Icon, index, and symbol. Philosophy and Phenomenological Research, 9(4), 673–689.
de Saussure, F. (1983). Course in general linguistics (R. Harris, Trans.). Open Court Classics.