
How cultural framing can bias our beliefs about robots and artificial intelligence

Published online by Cambridge University Press: 05 April 2023

Jeff M. Stibel
Affiliation:
Natural History Museum, Los Angeles, CA 90007, USA [email protected]
H. Clark Barrett
Affiliation:
Center for Behavior, Evolution and Culture, Department of Anthropology, University of California, Los Angeles, Los Angeles, CA 90095, USA [email protected]

Abstract

Clark and Fischer argue that humans treat social artifacts as depictions. In contrast, theories of distributed cognition suggest that there is no clear line separating artifacts from agents, and artifacts can possess agency. The difference is likely a result of cultural framing. As technology and artificial intelligence grow more sophisticated, the distinction between depiction and agency will blur.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Imagine a human living on Earth 1,000 years ago, before the discovery of electricity and before anyone could have imagined a computer, let alone an autonomous robot. Suppose this person suddenly encountered a robot transported back from the year 2022 (or, perhaps, an alien life form or probe from another planet). How would this person construe the mechanical entity? Would they treat it as a “depiction” of a real agent or simply as an agent, full stop?

Lacking any cultural, personal, or historical concept of a “robot,” it seems unlikely that an eleventh-century human would take the object before them as a human-made artifact designed to “depict” authentic agency. More likely, they would construe this unknown entity as a real agent of some kind.

Such perceptions of agency are not limited to historical analogies: Even well-informed individuals can perceive artifacts as having agency and intelligence. Indeed, one could imagine scenarios in which many people today, upon encountering a robot, artificial intelligence (AI), or deepfake, would take certain artifacts not as “depictions” but as real agents. There is evidence that contemporary humans do perceive artificial agents as real. To take just one example, an AI researcher at Google was recently suspended for arguing that a program they were interacting with had achieved sentience; this was followed by an MIT research professor who argued that Amazon's Alexa could also become sentient (Kaplan, 2022).

Clark and Fischer (C&F) do an outstanding job outlining a particular cultural framing, or schema, of robots. Crucially, however, their theory is not and cannot be a universal theory of how all humans can, do, or will perceive and interact with artificial kinds. What is missing from C&F's theory is an anthropological viewpoint. Through such a lens, one can see that the notion of robots as “depictions” of real agents requires expectations – a mental model of what a robot is – that are not shared by all humans.

C&F cite individuals such as Danish theatergoers who bring a priori assumptions about robots from films, popular media, science, and school. Such prior expectations about robots allow people to act within a culturally delineated frame. They interact with a robot as if it had agency while knowing that the robot is a mere artifact, with no agency beyond that extended by its author.

From an anthropological perspective, it seems clear that this is a culturally provided mode of interaction, not one that has been available to all or even most humans across the span of history. Indeed, we suggest that this may not be how all or most humans perceive robots now, or how they will perceive them in the future. Robots-as-depictions may always exist as a category, but it is unlikely to be the only category of robots or artificial agents.

In evaluating C&F's proposal, it is important to distinguish between real and perceived agency. The question of what makes something a real, or actual, agent is largely a philosophical question. The question of when people perceive, or construe, an entity as a real agent is a question for psychology and anthropology (Barrett, Todd, Miller, & Blythe, 2005; Gergely & Csibra, 2003). C&F's article is primarily concerned with the second question and assumes that robots are not real agents. However, we argue that we should not take this for granted. It is possible for artificial agents to have real agency. As the technological sophistication of robotics and AI grows, this becomes increasingly likely.

While AI is still in its infancy, we can look to how humans interact with artifacts as a guide to how we will ultimately treat artificial agency. Consider, for instance, a blind person and how she interacts with her cane: Studies have demonstrated that the cane is treated as a part of the body (Malafouris, 2008; Maravita & Iriki, 2004). The effect is even more pronounced with artificial limbs (van den Heiligenberg et al., 2017, 2018), and the same was likely true of stone tools as they became integral to the livelihood of prehistoric Homo (Haggard, Martin, Taylor-Clarke, Jeannerod, & Franck, 2003; Malafouris, 2020).

There is also evidence to support the direct impact of artifacts on our biology. As Homo increased its reliance on physical artifacts, our genus's bodies grew less muscular and robust as a result (Ruff, 2005). The same may be true of cognitive tools: Homo sapiens have experienced more than a 5% reduction in brain mass over the Late Pleistocene and Holocene (Stibel, 2021), and that loss of brain mass has been linked to an increased use of cognitive tools (DeSilva et al., 2021). Modern technologies, such as the internet and cell phones, have been shown to supplant thinking more broadly (Barr, Pennycook, Stolz, & Fugelsang, 2015; Sparrow, Liu, & Wegner, 2011). Cognitive tools enable thought to move to and from the brain by offloading cognition from biological wetware to artificial hardware. Just as physical artifacts offload physical exertion, cognitive offloading may allow our expensive brain tissue to be selected against while enabling our intelligence to increase (DeSilva et al., 2021; Stibel, 2021).

When an artifact is used in the thinking process, it is as much a part of that process as are the neurons in the brain (Clark & Chalmers, 1998; Malafouris, 2020; Maravita & Iriki, 2004). In that respect, artifacts are already a part of human agency, so it seems reasonable to believe that, as AI gains in sophistication, we will perceive artificially intelligent agents as real agents, not depictions. Part of the problem may be that the term “artificial intelligence” is loaded: The technology humans create is artificial, but the intelligence created is real. Artificial minds can have real intelligence. At present, most social robots are not yet sophisticated enough to be treated as anything more than a depiction, an imitation of something that has agency. But as artificial agents gain in sophistication and intelligence, it is likely that humans will treat them as having real agency.

Competing interest

None.

References

Barr, N., Pennycook, G., Stolz, J. A., & Fugelsang, J. A. (2015). The brain in your pocket: Evidence that smartphones are used to supplant thinking. Computers in Human Behavior, 48, 473–480. https://doi.org/10.1016/j.chb.2015.02.029
Barrett, H. C., Todd, P. M., Miller, G. F., & Blythe, P. W. (2005). Accurate judgments of intention from motion cues alone: A cross-cultural study. Evolution and Human Behavior, 26(4), 313–331. https://doi.org/10.1016/j.evolhumbehav.2004.08.015
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19. http://www.jstor.org/stable/3328150
DeSilva, J. M., Traniello, J. F., Claxton, A. G., & Fannin, L. D. (2021). When and why did human brains decrease in size? A new change-point analysis and insights from brain evolution in ants. Frontiers in Ecology and Evolution, 9, 742639. https://doi.org/10.3389/fevo.2021.742639
Gergely, G., & Csibra, G. (2003). Teleological reasoning in infancy: The naïve theory of rational action. Trends in Cognitive Sciences, 7(7), 287–292. https://doi.org/10.1016/S1364-6613(03)00128-1
Haggard, P., Martin, F., Taylor-Clarke, M., Jeannerod, M., & Franck, N. (2003). Awareness of action in schizophrenia. Neuroreport, 14(7), 1081–1085. https://doi.org/10.1097/01.wnr.0000073684.00308.c0
Kaplan, M. (2022). After Google chatbot becomes “sentient,” MIT professor says Alexa could too. New York Post. Retrieved from https://nypost.com/2022/06/13/mit-prof-says-alexa-could-become-sentient-like-google-chatbot/
Malafouris, L. (2008). Beads for a plastic mind: The “blind man's stick” (BMS) hypothesis and the active nature of material culture. Cambridge Archaeological Journal, 18(3), 401–414. https://doi.org/10.1017/S0959774308000449
Malafouris, L. (2020). Thinking as “thinging”: Psychology with things. Current Directions in Psychological Science, 29(1), 3–8. https://doi.org/10.1177/0963721419873349
Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cognitive Sciences, 8(2), 79–86. https://doi.org/10.1016/j.tics.2003.12.008
Ruff, C. B. (2005). Mechanical determinants of bone form: Insights from skeletal remains. Journal of Musculoskeletal & Neuronal Interactions, 5(3), 202–212.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745
Stibel, J. M. (2021). Decreases in brain size and encephalization in anatomically modern humans. Brain, Behavior and Evolution, 96(2), 64–77. https://doi.org/10.1159/000519504
van den Heiligenberg, F. M., Orlov, T., Macdonald, S. N., Duff, E. P., Henderson Slater, D., Beckmann, C. F., … Makin, T. R. (2018). Artificial limb representation in amputees. Brain, 141(5), 1422–1433. https://doi.org/10.1093/brain/awy054
van den Heiligenberg, F. M., Yeung, N., Brugger, P., Culham, J. C., & Makin, T. R. (2017). Adaptable categorization of hands and tools in prosthesis users. Psychological Science, 28(3), 395–398. https://doi.org/10.1177/0956797616685869