
Meta-learning: Data, architecture, and both

Published online by Cambridge University Press: 23 September 2024

Marcel Binz*
Affiliation: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Helmholtz Institute for Human-Centered AI, Munich, Germany

Ishita Dasgupta

Akshay Jagadish
Affiliation: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Helmholtz Institute for Human-Centered AI, Munich, Germany

Matthew Botvinick

Jane X. Wang

Eric Schulz
Affiliation: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Helmholtz Institute for Human-Centered AI, Munich, Germany

*Corresponding author.

Abstract

We are encouraged by the many positive commentaries on our target article. In this response, we recapitulate some of the points raised and identify synergies between them. We have arranged our response based on the tension between data and architecture that arises in the meta-learning framework. We additionally provide a short discussion that touches upon connections to foundation models.

Type
Authors' Response
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

R1. Introduction

In our target article, we sketched out a research program around the idea of meta-learned models of cognition. The cornerstone of this research program was the observation that neural networks, such as recurrent neural networks, can be trained via meta-learning to mimic Bayesian inference without being explicitly designed to do so (Ortega et al., 2019). This positions the resulting meta-learned models ideally for applications in the context of rational analyses of cognition (Anderson, 2013). Yet, meta-learning additionally enables us to do things that are not possible with other existing methods, thereby pushing the boundaries of rational analyses. Not only is the framework built on solid theoretical grounds, but it also enjoys growing empirical support. Meta-learned models account for a wide range of phenomena that pose a challenge to traditional models, such as the capacity for compositional reasoning (Jagadish, Binz, Saanum, Wang, & Schulz, 2023; Lake & Baroni, 2023) or the reliance on heuristic strategies (Binz, Gershman, Schulz, & Endres, 2022; Dasgupta, Schulz, Tenenbaum, & Gershman, 2020).

We believe this research direction is particularly exciting because it allows us to reconceptualize different cognitive processes, including learning, planning, reasoning, and decision-making, as one unified process: the forward dynamics of a deep neural network. In the terminology of modern large language models (LLMs), this ability to acquire knowledge via a simple forward pass is also known as in-context learning (Brown et al., 2020). In-context learning stands in contrast to traditional means of knowledge acquisition in neural networks, which require weight adjustments via gradient descent (hence referred to as in-weights learning). Indeed, there are close connections between meta-learning and the training of LLMs, to which we will return at the end of our response.
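To make the distinction concrete, the sketch below contrasts the two modes of knowledge acquisition in a toy PyTorch setup. The model, data, and dimensions are illustrative assumptions on our part, not taken from any of the studies cited above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy recurrent model: a GRU with a linear readout.
model = nn.GRU(input_size=1, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 1)
optimizer = torch.optim.Adam(list(model.parameters()) + list(readout.parameters()))

observations = torch.randn(1, 10, 1)  # a sequence of ten observations
target = torch.randn(1, 1)

# In-weights learning: knowledge enters the system by adjusting
# parameters via gradient descent.
hidden, _ = model(observations)
loss = F.mse_loss(readout(hidden[:, -1]), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# In-context learning: after meta-training, novel observations change
# behavior purely through the forward pass -- the hidden state carries
# the "learning", and no parameter is updated.
with torch.no_grad():
    new_observations = torch.randn(1, 10, 1)
    hidden, _ = model(new_observations)
    prediction = readout(hidden[:, -1])
```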

Many commentators shared our excitement about this new technology. McCoy & Griffiths write that the “direction laid out by Binz et al. is exciting.” Pesnot-Lerousseau & Summerfield suggest that “this computational approach provides an interesting candidate solution for some of nature's most startling and puzzling behaviours” and that “meta-learning is a general theory of natural intelligence that is – more than [its] classical counterpart – fit for the real world.” Grant says that “it is an exciting time to be working with and on [the] meta-learning toolkit” but also points out that “many aspects remain open.” We agree with this sentiment: Meta-learning is a powerful framework that provides us with the toolkit to build candidate theories of human cognition, but we still have to figure out the details and precise instantiations that describe it best.

In the target article, we framed our argument from a Bayesian angle. Although this offers an invaluable perspective, it somewhat understates the role that neural networks play in this context. Indeed, it is really the marriage between Bayesian and neural network models that gives meta-learning its power. Several commentators picked up on this. We agree with Ong, Zhi-Xuan, Tenenbaum, & Goodman (Ong et al.), who “suggest that the meta-learning approach could be further strengthened by considering connectionist and Bayesian approaches, rather than exclusively one or the other.” McCoy & Griffiths perhaps put it best by saying that meta-learning “expand[s] the applicability of Bayesian approaches by reconciling them with connectionist models – thereby bringing together two successful research traditions that have often been framed as antagonistic.” This integration of research traditions is what enables us to build “constraints from experimental neuroscience, and ecologically relevant environments” into rational theories, as suggested by Grant, thereby leading to more faithful and naturalistic models.

Many commentators also noted that meta-learning finds applications beyond the study of standard human cognition. For example, Fields & Glazebrook suggest studying meta-learning “in more tractable experimental systems in which the implementing architecture can be manipulated biochemically and bioelectrically,” whereas Veit & Browning highlight that “there is also the potential to use meta-learning models to help us understand the evolution of cognition more generally.” Nussenbaum & Hartley furthermore point out that “these models are particularly useful for testing hypotheses about why learning processes change across development” because they allow us to arbitrate whether changes in an individual are due to an adaptation to the external environment (i.e., changes in data) or to internal changes in cognitive capacity (i.e., changes in architecture). We are excited by these research directions as well.

We found the tension between data and architecture laid out by Nussenbaum & Hartley very useful and have therefore decided to organize our response around it. We begin by discussing the commentaries that focused on the importance of data for understanding human cognition (sect. R2), followed by those that focused on the importance of model architecture (sect. R3). The point where these two concepts meet will be the centerpiece of our discussion (sect. R4). We finish our response by clarifying some of the misunderstandings that have arisen from our original target article (sect. R5), before concluding with a discussion of the links between meta-learning and foundation models (sect. R6).

R2. Data matters more than we thought

Historically, cognitive models have largely been based on symbolic representations. Examples include models of heuristic decision-making, problem-solving, or planning. This modeling tradition rests on the premise that model architectures are the driving factor in determining behavior. Proponents of this approach often argue that symbolic representations are necessary to capture core ingredients of human cognition, such as decision-making, problem-solving, or planning (Marcus, 1998). The advent of Bayesian models of cognition expanded this picture. Even though most Bayesian models are also based on symbolic representations, they are sensitive to the data that are expected to be encountered. If assumptions about the environment change, the behavior of these models changes as well. The past 30 years have shown that people are indeed adaptive to their environment, thereby providing considerable support for Bayesian models of cognition (Griffiths, Kemp, & Tenenbaum, 2008).

In contrast to models with symbolic representations, neural networks are based on distributed vector representations. Many have argued that neural networks are inherently ill-equipped for reasoning, planning, and problem-solving because they lack the symbolic representations of their cousins. Indeed, there is a whole line of research (known as neurosymbolic AI) attempting to fix these issues by incorporating symbolic processes into neural network architectures (De Raedt, Manhaeve, Dumancic, Demeester, & Kimmig, 2019). The framework of meta-learning demonstrates that this may not be necessary. It instead offers a proof of concept showing that – when trained on the right data – neural networks can exhibit many emergent phenomena that have traditionally been attributed to symbolic models, such as the capacity for model-based (Wang et al., 2016) and compositional reasoning (Lake & Baroni, 2023). For example, as already discussed in our target article, Lake and Baroni (2023) have shown that a vanilla transformer architecture can be taught to make compositional inferences via meta-learning. Findings like this allow us to interpret human compositionality as “an emergent property of an inner-loop, in-context learning algorithm that is itself meta-learned,” as discussed by Russin, McGrath, Pavlick, & Frank. Likewise, Wang et al. (2016) have shown that a simple meta-learned recurrent neural network can act like a model-based reinforcement learning algorithm, even though it does not contain any explicit architectural components that facilitate model-based reasoning. The implications of these findings are vast, as pointed out by Pesnot-Lerousseau & Summerfield, who suggest that “many supposedly ‘model-based’ behaviours may be better explained by meta-learning than by classical models” and that meta-learning “invites us to revisit our neural theories of problem solving and goal-directed planning.”

Taken together, this suggests that model architecture may not be as important as once thought for building systems with human-like reasoning capabilities. What matters far more than we initially thought, however, are the data these systems are trained on. If the data are generated by symbolic processes, meta-learning will pick up on this and compile these processes into the resulting models.

There is evidence from recent work in NeuroAI supporting the idea that data trump architecture. In a large-scale analysis, for example, Conwell, Prince, Kay, Alvarez, and Konkle (2022) found that different model architectures achieve near-equivalent degrees of brain predictivity in the human ventral visual system and that the data they were trained on had a much bigger influence. Muttenthaler, Dippel, Linhardt, Vandermeulen, and Kornblith (2022) presented similar findings, suggesting that “model scale and architecture have essentially no effect on the alignment [between the representations learned by neural networks and] human behavioral responses, whereas the training dataset and objective function both have a much larger impact.”

This puts the focus on the question of “what to learn?” Prat & Lamm argue that this is the hard problem of natural and artificial intelligence. They further point out that nature solves this problem via evolution and that we cannot handcraft the utility function (or error measurements) for each task separately. We sympathize with this perspective. However, the evolutionary perspective is not the most useful one when the goal is to build models of human cognition: we certainly do not want to simulate the entire process of evolution for this purpose. What can we do instead? For one, we can use automated tools, such as Gibbs sampling with people (Harrison et al., 2020), to measure people's priors and utility functions and plug the resulting data into our pipelines. That this is possible in the meta-learning framework has recently been demonstrated by Kumar et al. (2022). There is also recent work suggesting that the generation of data reflecting the real world can be automated using foundation models. Jagadish, Coda-Forno, Thalmann, Schulz, and Binz (2024) have, for example, shown that this is a promising approach for building models that acquire human-like priors when trained on ecologically valid problems. In particular, they queried an LLM to generate naturalistic classification problems, trained a meta-learning system on these problems, and demonstrated that the resulting meta-learned models explain many effects observed in the literature. The sketch below illustrates this pipeline.
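As a rough, hypothetical illustration of such a pipeline, the following stands in for the LLM query with a canned string; the data format and the parser are scaffolding we made up for exposition, not the actual setup of Jagadish et al. (2024).

```python
def parse_examples(text: str):
    """Parse 'feature1,feature2,label' lines into (features, label) pairs."""
    examples = []
    for line in text.strip().splitlines():
        *features, label = line.split(",")
        examples.append(([float(f) for f in features], int(label)))
    return examples

# Imagine this string came back from prompting an LLM with something
# like: "Generate a naturalistic binary classification problem with
# three labelled examples." (The numbers here are made up.)
llm_output = """
0.8,0.1,1
0.2,0.9,0
0.7,0.3,1
"""

task = parse_examples(llm_output)

# Each such task becomes one training episode for the meta-learner
# (see the training loop sketched in sect. R5); across many tasks,
# the LLM-derived statistics become the meta-learned model's prior.
print(task)
```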

R3. Yet, architecture matters too

However, it is certainly not only data that matter for understanding human cognition. Model architecture still plays a role, as pointed out in several commentaries (e.g., Schilling, Ritter, & Ohl). In fact, there are already results showing that this is the case in the meta-learning setting. For example, Chan et al. (2022) studied the trade-off between in-context and in-weights learning and found that in-context learning only emerges when the training data exhibit certain distributional properties. Importantly, this was only true in transformer-based models but not in recurrent models (which relied on in-weights learning instead). This demonstrates that different model architectures can lead to characteristically different behaviors, thereby highlighting that architecture is crucial – at least to some extent. From a cognitive perspective, the interesting question will be how much architecture is needed.

Many commentaries suggested that enhancing the black-box meta-learning framework with process-level structures would help us to better understand human cognition. We agree that this is an intriguing line of thought (see the discussion in sects. 2.4 and 5 of our target article). In many cases, the commentaries added a new dimension to our original proposal. We discuss some of these proposals in the following and place them into the context of our framework.

Sanborn, Yan, & Tsvetkov (Sanborn et al.) highlight that people often deviate from normative behavior (whether Bayesian or meta-learned). Earlier work has shown that many of these deviations can be captured by rational process models, which approximate posterior predictive distributions using the posterior mean, the posterior median, or other summary statistics. As Sanborn et al. point out, these rational process models go hand in hand with the meta-learning framework. Essentially, their proposal is to have a rational process model reason based on the meta-learned posterior predictive distribution. This combination brings together the best of both worlds: one does not even have to retrain the meta-learned model, as in other approaches that build limited computational resources into meta-learned models (Binz & Schulz, 2022; Saanum, Éltető, Dayan, Binz, & Schulz, 2023), which can be convenient from a practical perspective. We agree that this is an appealing property and look forward to seeing how the interaction between these two frameworks plays out in the future.
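To illustrate what this division of labor might look like, here is a minimal sketch that assumes the meta-learned network outputs a categorical predictive distribution; the function names and the toy stand-in model are our own illustrative assumptions, not code from Sanborn et al.

```python
import torch

def posterior_predictive_samples(meta_learned_model, context, n_samples=5):
    """Draw a few samples from the model's predictive distribution
    (assumed here to be categorical, parameterized by logits)."""
    logits = meta_learned_model(context)
    return torch.distributions.Categorical(logits=logits).sample((n_samples,))

def process_model_response(samples):
    """A resource-limited read-out: respond with the most frequent
    sample, mimicking summary-statistic-based rational process models."""
    return samples.mode().values

# Toy usage with a stand-in "meta-learned model" that returns fixed logits.
# Note that the meta-learned model itself is never retrained; the process
# model only post-processes its predictive distribution.
toy_model = lambda context: torch.tensor([0.1, 2.0, 0.3])
samples = posterior_predictive_samples(toy_model, context=None)
print(process_model_response(samples))
```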

In a similar vein, Grant argues that the meta-learning toolkit needs stronger architectural constraints. Her proposal emphasizes a connectionist implementation of meta-learning called model-agnostic meta-learning (MAML). MAML implements its stepwise updating using gradient descent, as opposed to the models we focused on in our target article, whose updating is implemented through the forward dynamics of a neural network (an approach also referred to as memory-based meta-learning). Although MAML involves meta-learning with the same objective we discussed in our target article, it differs in what is being meta-learned: in MAML, one adapts the initial weights of a neural network such that subsequent gradient steps lead to optimal learning. This leads to an interesting class of gradient-based meta-learned models that have many (but not all) of the advantages discussed in our target article. MAML's key feature is that it allows for a seamless link between the algorithmic and the computational levels of analysis. Future research should compare different classes of models against each other and find the one that best explains human behavior. It will be particularly exciting to pit gradient-based models (such as MAML) against memory-based models (including recurrent neural networks) and see which class of theories offers a better account of human behavior. Doing so will allow us to answer some of the big, outstanding questions of cognitive psychology and neuroscience, such as whether we can find any evidence for computations like gradient descent and backpropagation in the brain.
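For readers unfamiliar with MAML, the sketch below shows its characteristic two-loop structure on a toy regression problem. It follows the generic recipe (inner-loop adaptation, outer-loop update of the initialization) under assumptions of our own choosing and is not the implementation used in any of the cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Linear(4, 1)                        # the meta-learned initialization
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
inner_lr = 0.1

for _ in range(1000):                        # outer loop over sampled tasks
    w_task = torch.randn(4, 1)               # latent parameter of this task
    x_train, x_test = torch.randn(8, 4), torch.randn(8, 4)
    y_train, y_test = x_train @ w_task, x_test @ w_task

    # Inner loop: one gradient step away from the shared initialization.
    train_loss = F.mse_loss(net(x_train), y_train)
    grads = torch.autograd.grad(train_loss, net.parameters(), create_graph=True)
    adapted_w, adapted_b = (p - inner_lr * g
                            for p, g in zip(net.parameters(), grads))

    # Evaluate the adapted parameters on held-out data from the same task.
    test_loss = F.mse_loss(F.linear(x_test, adapted_w, adapted_b), y_test)

    # Outer loop: update the initialization so that the inner-loop step
    # leads to good post-adaptation performance (backprop through the step).
    meta_opt.zero_grad()
    test_loss.backward()
    meta_opt.step()
```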

Last but not least, Cea and Stussi, Dukes, & Sander suggest that the meta-learning framework could benefit from the inclusion of affective elements. We agree that doing so can provide added value to the meta-learning research agenda. Yet, at the same time, this proposal highlights one of the tensions involved in building complex systems, namely deciding on what should be prewired and what should be given the chance to emerge instead. To illustrate this point, let us consider one of the examples provided by Stussi et al. highlighting the importance of affective processes: For humans, positively valenced prediction errors are generally associated with a higher learning rate than negatively valenced prediction errors (Palminteri & Lebreton, 2022). In a recent study, we found that this characteristic emerges naturally in meta-learned models (Schubert, Jagadish, Binz, & Schulz, 2024), thereby illustrating that at least some affective processes are already present in meta-learned models.

Ultimately, determining which inductive biases should be prewired and which should be learned from data depends on the research question one is investigating. If one wants to obtain a process-level understanding of a phenomenon, there is no better way than formalizing that phenomenon mathematically and simulating it in silico. If, on the other hand, the goal is simply to induce superhuman general abilities in a computational model, modern machine learning research, such as the work on AlphaZero, has taught us that we should keep the amount of prewiring limited and instead rely mainly on the data itself (Sutton, 2019).

R4. Transcending levels of analysis

The full power of meta-learning does not come solely from its close ties to Bayesian inference – the algorithmic implementation also matters. To get a more complete understanding of human cognition, it seems likely that we need to consider both data and architecture. Meta-learning allows us to do this by bringing together two modeling traditions that have each focused on one of these aspects. It seamlessly combines the advantages of Bayesian models – which feature powerful, data-dependent inductive biases – with those of neural network models – which come with a vast space of architectural design choices. To quote Ortega (2020), meta-learning “brings back Bayesian statistics within deep learning without even trying – no latents, no special architecture, no special cost function, nada.”

This was also recognized by some of the commentators. McCoy & Griffiths state that meta-learning “reconcil[es Bayesian approaches] with connectionist models – thereby bringing together two successful research traditions that have often been framed as antagonistic.” This feature allows the framework to effortlessly jump between different levels of analysis, from the computational through the algorithmic to the implementational. Furthermore, although neural networks have often been criticized for not being able to engage in symbolic reasoning, meta-learning illustrates that it is, in principle, possible to equip them with symbolic inductive biases.

Nussenbaum & Hartley highlight potential applications in the context of developmental psychology. Here, one of the central questions involves identifying whether “age-related changes in learning reflect adaptation to age-varying ‘external’ ecological problems or ‘internal’ changes in cognitive capacity.” We believe that this is an exciting research direction. In fact, we have recently applied some of these ideas to test whether the developmental trajectories of children in the context of intuitive physics can be captured with deep generative models by manipulating the amount of training data or the system's computational resources (Buschoff, Schulz, & Binz, 2023).

However, the strict dichotomy between data and architecture is likely a false one. Instead, the two interact with each other over the lifespan, as also pointed out by Nussenbaum & Hartley. Meta-learning allows us to disentangle the two and to study them jointly or separately. This has implications not only for understanding adults and children but also for mental and physical health. We can, for instance, ask which kinds of environments cause or exacerbate certain mental illnesses, or what types of architectural constraints lead to maladaptive behaviors. In doing so, we might come to better understand these issues and, in turn, develop targeted aids for them.

R5. Points of contention

Although the framework proposed in our target article was received well overall, there were a few points of contention raised by some of the commentators. In this section, we address and clarify these issues.

The first is raised by Ong et al. and by Székely & Orbán. They both argue that having to specify an inference problem is a virtue of the Bayesian approach, not a limitation. From their perspective, the process of defining the inference problem can in itself shed light on the system whose cognitive processes are being modeled. We agree that this is a valid – and often very useful – strategy. However, both commentaries come with the implicit assumption that this is not possible in the meta-learning framework. We think this is a false dichotomy. The exact same research strategy can be applied in the meta-learning framework: (1) define a data-generating distribution, (2) draw samples from it, (3) use these samples to construct a meta-learned model, and (4) compare models with different assumptions against each other (see the sketch after this paragraph). To illustrate this using the probabilistic programming example of Ong et al., one could define a distribution over probabilistic programs and use meta-learning to construct a neural network that performs approximate inference over such programs. Although we generally agree that this is a useful research strategy, it is important to mention that there are settings in which it is simply not applicable, as outlined in our target article. This is where the strength of meta-learning lies: it allows us to do everything we can do in the traditional Bayesian framework – including probabilistic programs – and more.
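A minimal, self-contained version of steps (1)–(3) in PyTorch is given below; the linear-Gaussian data-generating distribution and the network sizes are placeholder assumptions, standing in for whatever inference problem one chooses to define.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Steps (1) and (2): a data-generating distribution we can sample from.
def sample_task(n=10):
    w = torch.randn(2)                  # latent task parameter ~ prior
    x = torch.randn(n, 2)
    y = x @ w + 0.1 * torch.randn(n)    # noisy observations given the latent
    return x, y

# Step (3): meta-train a recurrent network to predict each observation
# from the preceding ones; across many tasks, this objective pushes the
# network toward Bayesian inference under the chosen prior.
rnn = nn.GRU(input_size=3, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()))

for _ in range(1000):
    x, y = sample_task()
    prev_y = torch.cat([torch.zeros(1), y[:-1]])            # shifted targets
    inputs = torch.cat([x, prev_y.unsqueeze(-1)], dim=-1)   # shape (n, 3)
    hidden, _ = rnn(inputs.unsqueeze(0))
    pred = head(hidden).squeeze(0).squeeze(-1)
    loss = F.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step (4) would compare meta-learned models trained under different
# data-generating assumptions against each other on human data.
```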

From a conceptual perspective, we also have to weigh in on the commentary of Vriens, Horan, Gottlieb, & Silvetti, who state that “the framework generates models that are not interpretable in cognitive terms and, crucially, are governed by an immense number of free parameters […].” From our perspective, that is not an accurate depiction of the meta-learning framework. Although these models potentially have many parameters, these are not free parameters that are fitted to human data. Instead, they are only optimized to maximize performance on a given task. Vriens et al. further claim that the framework generates models with low falsifiability. This, too, is far from the truth. Every meta-learned model can be compared against alternative models, and hence potentially falsified, as we and others have shown in many previous studies. Indeed, this is true not only on the behavioral level but also on the neural level (i.e., when there are inconsistencies with neural recordings). In this sense, meta-learned models provide even stronger grounds on which they can be refuted, as pointed out by Grant. The meta-learning framework as a whole is, of course, harder to falsify. However, we believe that the role of a framework is to generate useful theories, not to be falsifiable itself.

We also noticed a few misunderstandings concerning the distinction between tool and theory highlighted in our target article. In particular, we put forward the notion of using meta-learning as a tool for building models of human learning. We did not say much or make any claims about how people actually acquire their learning algorithms, that is, the process of meta-learning itself (see sect. 1.4 of the target article). Although this is an important problem, it is outside the scope of our article. Llewellyn states that our first question is to understand how people improve their learning abilities over time (and subsequently that we fail to address this question satisfactorily). However, we want to explicitly highlight again that we did not strive to address this question in our target article. Likewise, Calderan & Visalli mention that we should position ourselves relative to hierarchical Bayesian models, which can be used to model learning-to-learn. We agree that this would be needed if our aim were to find out how people learn to learn, which it is not. Even though the study of learning-to-learn is outside the scope of our target article, we still believe that the meta-learning framework could provide an interesting perspective for studying these processes. This was, for example, noted by Nussenbaum & Hartley in the context of developmental psychology, and by Yin, Xiao, Wu, & Lian, who make the connection to integrative learning.

Finally, Calderan & Visalli question the utility of meta-learning for building rational models in large worlds. In particular, they ask: “What justifications exist for the selection of training data?” They rightly point out that meta-learned models have priors too, and they claim that these models therefore offer no important advantages over Bayesian models. However, in contrast to Bayesian models, meta-learned models do not require an explicit expression for these priors – they only need samples from them, which is a much weaker requirement. This means that we can go out and measure priors by collecting samples, which opens up many opportunities. We can, for example, ask people to generate samples from their priors (as done in the work of Kumar et al. [2022] mentioned earlier), or we can determine priors that match real-world statistics (as done in the work of Jagadish et al. [2024] mentioned earlier). Meta-learning then allows us to compile these priors into a computational model. Although hierarchical Bayesian models may also be able to construct their priors, as mentioned by Calderan & Visalli, they can only do so within a predetermined class of functions, preventing an effective application to large-world problems.
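Stated formally (in notation that may differ slightly from the target article's), the contrast is that exact Bayesian inference requires the prior density itself, whereas the meta-learning objective only involves an expectation over the prior, which sampled tasks suffice to approximate:

```latex
% Exact Bayesian inference needs the prior density p(\theta) in closed form:
p(\theta \mid x_{1:t}) \;\propto\; p(x_{1:t} \mid \theta)\, p(\theta).
% The meta-learning objective only takes an expectation over the prior,
% so N sampled tasks \theta^{(i)} \sim p(\theta) are enough:
\mathcal{L}(\phi)
  = \mathbb{E}_{p(\theta)}\, \mathbb{E}_{p(x_{1:T} \mid \theta)}
    \left[ -\sum_{t=1}^{T} \log q_\phi\!\left(x_t \mid x_{1:t-1}\right) \right]
  \;\approx\; \frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{T}
    -\log q_\phi\!\left(x_t^{(i)} \mid x_{1:t-1}^{(i)}\right).
```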

R6. Links to foundation models

To our surprise, none of the commentaries touched upon the similarities between meta-learning and the training of LLMs. We therefore want to use this opportunity to raise a few points on this topic ourselves. Essentially, LLMs are trained using the same objective we discussed in our target article (equation 7). The only special thing about them is that the data distribution amounts to the whole internet. In this sense, LLMs can be viewed as a special case of meta-learned models – all the same principles apply. One way to view LLMs is thus that they approximate Bayesian inference to predict the next token in human language. Like the meta-learned models we discussed in our target article, LLMs learn from their context (i.e., a history of previous observations) to make better predictions with more examples, updating only their internal activations. There are exciting research questions waiting to be answered in the space between human cognition and LLMs, and we believe that the meta-learning perspective could help us in this endeavor (Binz & Schulz, 2023; Hussain, Binz, Mata, & Wulff, 2023; Yax, Anlló, & Palminteri, 2023).
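Writing out the correspondence (again up to notational differences from equation 7 of the target article), the next-token objective of an LLM has exactly the sequential-prediction form above, with the task distribution replaced by the distribution of internet text:

```latex
\mathcal{L}(\phi)
  = \mathbb{E}_{x_{1:T} \sim \mathcal{D}_{\text{text}}}
    \left[ -\sum_{t=1}^{T} \log q_\phi\!\left(x_t \mid x_{1:t-1}\right) \right],
% where x_{1:T} is a token sequence drawn from the training corpus.
% In-context learning then corresponds to conditioning q_\phi on a prompt
% x_{1:t-1} and reading off the updated predictive distribution.
```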

It might also be interesting to think about meta-learned models that are not based on the objectives outlined in our target article. For example, we may ask how meta-learned models relate to the concepts of free-energy minimization and active inference (see the commentary by Penacchio & Clemente), what objectives are needed to meta-learn quantum models (see the commentaries by Clark and by Mastrogiorgio), whether we can meta-learn models using contrastive losses (Tian et al., 2020), or whether it is possible to give meta-learning systems the ability to determine their own objectives (see the commentary by Moldoveanu). Doing so might lead to models that do not approximate Bayesian inference but have other appealing properties. Theoretical properties aside, finding out which models are most useful for understanding cognition will ultimately be an empirical question, not a theoretical one.

Where will cognitive modeling be 10 years from now? We predict major advances in two main directions: (1) our models will become much more domain-general, and (2) they will process high-dimensional, naturalistic stimuli. The meta-learning framework will help us achieve both of these objectives. The first is already addressed by design: meta-learning involves training on a collection of tasks – we only have to make this collection more diverse. Regarding the second, meta-learned models of cognition can readily be combined with visual neural networks, thereby giving them the ability to “see” experimental stimuli similarly to people (as pointed out by Sanborn et al.). We are already witnessing some such systems, which perform a wide range of tasks in complex, vision-based environments, in the machine learning literature. Examples include models such as Voyager (Wang et al., 2023), Ada (Team et al., 2023), or SIMA (Raad et al., 2024) – all of which are based (at least to some extent) on a meta-learned model. Unfortunately, these models are currently too expensive to train for most academic research labs (let alone to run ablations on). For example, training Ada requires access to 64 TPUs for five weeks. However, compute is getting cheaper every year, and – together with technological advances – we think it is likely that a similar system could be trained on standard hardware 10 years from now. We are excited by this prospect and by what it means for understanding human cognition.

References

Anderson, J. R. (2013). The adaptive character of thought. Psychology Press.
Binz, M., & Schulz, E. (2022). Modeling human exploration through resource-rational reinforcement learning. Advances in Neural Information Processing Systems, 35, 31755–31768.
Binz, M., & Schulz, E. (2023). Turning large language models into cognitive models. arXiv preprint arXiv:2306.03917.
Binz, M., Gershman, S. J., Schulz, E., & Endres, D. (2022). Heuristics from bounded meta-learned inference. Psychological Review, 129(5), 1042.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., … Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Buschoff, L. M. S., Schulz, E., & Binz, M. (2023). The acquisition of physical knowledge in generative neural networks. In International Conference on Machine Learning (pp. 30321–30341). PMLR.
Chan, S., Santoro, A., Lampinen, A., Wang, J., Singh, A., Richemond, P., … Hill, F. (2022). Data distributional properties drive emergent in-context learning in transformers. Advances in Neural Information Processing Systems, 35, 18878–18891.
Conwell, C., Prince, J. S., Kay, K. N., Alvarez, G. A., & Konkle, T. (2022). What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines? bioRxiv.
Dasgupta, I., Schulz, E., Tenenbaum, J. B., & Gershman, S. J. (2020). A theory of learning to infer. Psychological Review, 127(3), 412.
De Raedt, L., Manhaeve, R., Dumancic, S., Demeester, T., & Kimmig, A. (2019). Neuro-symbolic = neural + logical + probabilistic. In NeSy'19 @ IJCAI, the 14th International Workshop on Neural-Symbolic Learning and Reasoning.
Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. (2008). Bayesian models of cognition. In R. Sun (Ed.), The Cambridge handbook of computational psychology (pp. 59–100). Cambridge University Press.
Harrison, P., Marjieh, R., Adolfi, F., van Rijn, P., Anglada-Tort, M., Tchernichovski, O., … Jacoby, N. (2020). Gibbs sampling with people. Advances in Neural Information Processing Systems, 33, 10659–10671.
Hussain, Z., Binz, M., Mata, R., & Wulff, D. U. (2023). A tutorial on open-source large language models for behavioral science. PsyArXiv preprint.
Jagadish, A. K., Binz, M., Saanum, T., Wang, J. X., & Schulz, E. (2023). Zero-shot compositional reinforcement learning in humans.
Jagadish, A. K., Coda-Forno, J., Thalmann, M., Schulz, E., & Binz, M. (2024). Ecologically rational meta-learned inference explains human category learning. arXiv preprint arXiv:2402.01821.
Kumar, S., Correa, C. G., Dasgupta, I., Marjieh, R., Hu, M. Y., Hawkins, R., … Griffiths, T. (2022). Using natural language and program abstractions to instill human inductive biases in machines. Advances in Neural Information Processing Systems, 35, 167–180.
Lake, B. M., & Baroni, M. (2023). Human-like systematic generalization through a meta-learning neural network. Nature, 623(7985), 115–121.
Marcus, G. F. (1998). Rethinking eliminative connectionism. Cognitive Psychology, 37(3), 243–282.
Muttenthaler, L., Dippel, J., Linhardt, L., Vandermeulen, R. A., & Kornblith, S. (2022). Human alignment of neural network representations. arXiv preprint arXiv:2211.01201.
Ortega, P. A., Wang, J. X., Rowland, M., Genewein, T., Kurth-Nelson, Z., Pascanu, R., … Legg, S. (2019). Meta-learning of sequential strategies. arXiv preprint arXiv:1905.03030.
Palminteri, S., & Lebreton, M. (2022). The computational roots of positivity and confirmation biases in reinforcement learning. Trends in Cognitive Sciences, 26(7), 607–621.
Saanum, T., Éltető, N., Dayan, P., Binz, M., & Schulz, E. (2023). Reinforcement learning with simple sequence priors. Advances in Neural Information Processing Systems, 36, 61985–62005.
Schubert, J. A., Jagadish, A. K., Binz, M., & Schulz, E. (2024). In-context learning agents are asymmetric belief updaters. arXiv preprint arXiv:2402.03969.
SIMA Team, Raad, M. A., Ahuja, A., Barros, C., Besse, F., Bolt, A., … Young, N. (2024). Scaling instructable agents across many simulated worlds. Technical report.
Sutton, R. (2019). The bitter lesson. Incomplete Ideas (blog), 13(1), 38.
Team, A. A., Bauer, J., Baumli, K., Baveja, S., Behbahani, F., Bhoopchand, A., … Zhang, L. (2023). Human-timescale adaptation in an open-ended task space. arXiv preprint arXiv:2301.07608.
Tian, Y., Sun, C., Poole, B., Krishnan, D., Schmid, C., & Isola, P. (2020). What makes for good views for contrastive learning? Advances in Neural Information Processing Systems, 33, 6827–6839.
Wang, G., Xie, Y., Jiang, Y., Mandlekar, A., Xiao, C., Zhu, Y., … Anandkumar, A. (2023). Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291.
Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., … Botvinick, M. (2016). Learning to reinforcement learn. arXiv preprint arXiv:1611.05763.
Yax, N., Anlló, H., & Palminteri, S. (2023). Studying and improving reasoning in humans and machines. arXiv preprint arXiv:2309.12485.