
Learning agents that acquire representations of social groups

Published online by Cambridge University Press: 07 July 2022

Abstract

Humans are learning agents that acquire social group representations from experience. Here, we discuss how to construct artificial agents capable of this feat. One approach, based on deep reinforcement learning, allows the necessary representations to self-organize. This minimizes the need for hand-engineering, improving robustness and scalability. It also enables “virtual neuroscience” research on the learned representations.
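To make the proposal concrete, below is a minimal sketch of the kind of pipeline the abstract describes: a policy network trained with a simple deep reinforcement learning rule (REINFORCE) on a toy task in which reward depends on a partner's group membership, followed by a "virtual neuroscience" probe of the learned hidden representations. Everything here is illustrative and hypothetical, not the authors' implementation: the toy task, the constants (N_GROUPS, OBS_DIM, etc.), and the similarity probe are all assumptions chosen for clarity.

```python
# Minimal sketch (hypothetical, not the authors' system): a policy network
# trained so that the rewarded action depends on a partner's group membership.
import torch
import torch.nn as nn

N_GROUPS, OBS_DIM, N_ACTIONS, HIDDEN = 2, 8, 2, 32  # illustrative sizes

def sample_batch(batch=64):
    """Toy 'environment': each observation is a noisy cue vector whose first
    N_GROUPS entries tag the partner's group; the rewarded action is the
    group index, so group identity must be inferred to earn reward."""
    group = torch.randint(0, N_GROUPS, (batch,))
    proto = torch.zeros(batch, OBS_DIM)
    proto[torch.arange(batch), group] = 1.0          # group-identifying cue
    obs = proto + 0.3 * torch.randn(batch, OBS_DIM)  # perceptual noise
    return obs, group

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(OBS_DIM, HIDDEN), nn.Tanh())
        self.logits = nn.Linear(HIDDEN, N_ACTIONS)

    def forward(self, obs):
        h = self.hidden(obs)         # internal representation, probed below
        return self.logits(h), h

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

# REINFORCE with a mean-reward baseline; no group labels are ever given to
# the learning rule, only reward.
for step in range(500):
    obs, group = sample_batch()
    logits, _ = policy(obs)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = (action == group).float()   # correct behavior toward that group
    loss = -(dist.log_prob(action) * (reward - reward.mean())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Virtual neuroscience": ask whether group structure self-organized in the
# hidden layer by comparing within- vs. between-group similarity.
with torch.no_grad():
    obs, group = sample_batch(batch=512)
    _, h = policy(obs)
    h = nn.functional.normalize(h, dim=1)
    sim = h @ h.T                        # cosine similarity matrix
    same = group.unsqueeze(0) == group.unsqueeze(1)
    print("within-group similarity :", sim[same].mean().item())
    print("between-group similarity:", sim[~same].mean().item())
```

If training succeeds, within-group similarity in the hidden layer exceeds between-group similarity: a group representation has self-organized from reward alone, with nothing about groups hand-engineered into the network, and the learned representation can then be studied with standard analysis tools from systems neuroscience.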

Type: Open Peer Commentary
Copyright: © The Author(s), 2022. Published by Cambridge University Press

