Combining meta-learned models with process models of cognition
Published online by Cambridge University Press: 23 September 2024
Abstract
Meta-learned models of cognition make optimal predictions for the actual stimuli presented to participants, but investigating judgment biases by constraining neural networks is unwieldy. We suggest combining them with cognitive process models, which are more intuitive and explain biases. Rational process models, which sequentially sample from the posterior distributions produced by meta-learned models, seem a natural fit.
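To make the proposed combination concrete, here is a minimal Python sketch, not taken from the commentary itself. The hard-coded `posterior` array stands in for the output of a meta-learned model (in practice, a trained network's predictive distribution over hypotheses for the stimuli a participant actually saw), and a random-walk Metropolis-Hastings chain stands in for one possible rational process model that sequentially samples from that posterior. The five-hypothesis space, function names, and sample counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a meta-learned model's output: a posterior distribution over
# five hypotheses. In practice this would come from a network's output layer;
# the values here are hypothetical.
posterior = np.array([0.05, 0.10, 0.40, 0.30, 0.15])

def metropolis_hastings_chain(p, n_samples, start=None):
    """Sequentially sample hypothesis indices from posterior p with a
    random-walk Metropolis-Hastings chain: one simple rational process
    model, whose successive samples are autocorrelated rather than i.i.d."""
    state = rng.integers(len(p)) if start is None else start
    chain = []
    for _ in range(n_samples):
        # Propose a neighboring hypothesis (a symmetric, local proposal).
        proposal = (state + rng.choice([-1, 1])) % len(p)
        # Accept with the Metropolis ratio; otherwise keep the current state.
        if rng.random() < min(1.0, p[proposal] / p[state]):
            state = proposal
        chain.append(state)
    return np.array(chain)

def probability_judgment(p, event, n_samples=10):
    """Judge P(event) as the fraction of a short chain falling in the event.
    Few, autocorrelated samples produce the noisy, variable judgments that
    process models aim to explain."""
    chain = metropolis_hastings_chain(p, n_samples)
    return np.isin(chain, event).mean()

# Short chains give judgments that vary from trial to trial, while a long
# chain recovers the probability under the meta-learned posterior.
event = [2, 3]  # the event that hypothesis 2 or 3 is true
print("posterior P(event):", posterior[event].sum())
print("short-chain judgments:", [probability_judgment(posterior, event) for _ in range(5)])
print("long-chain judgment:", probability_judgment(posterior, event, n_samples=5000))
```

The local proposal is the key design choice in this sketch: because successive samples are autocorrelated, short chains yield judgments that deviate systematically from the posterior even though long chains converge on it, which is the sense in which the process layer, rather than the meta-learned posterior itself, carries the explanation of bias.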
Type: Open Peer Commentary
Copyright: © The Author(s), 2024. Published by Cambridge University Press
Target article
Meta-learned models of cognition
Related commentaries (22)
Bayes beyond the predictive distribution
Challenges of meta-learning and rational analysis in large worlds
Combining meta-learned models with process models of cognition
Integrative learning in the lens of meta-learned models of cognition: Impacts on animal and human learning outcomes
Is human compositionality meta-learned?
Learning and memory are inextricable
Linking meta-learning to meta-structure
Meta-learned models as tools to test theories of cognitive development
Meta-learned models beyond and beneath the cognitive
Meta-learning and the evolution of cognition
Meta-learning as a bridge between neural networks and symbolic Bayesian models
Meta-learning goes hand-in-hand with metacognition
Meta-learning in active inference
Meta-learning modeling and the role of affective-homeostatic states in human cognition
Meta-learning: Bayesian or quantum?
Probabilistic programming versus meta-learning as models of cognition
Quantum Markov blankets for meta-learned classical inferential paradoxes with suboptimal free energy
Quo vadis, planning?
The added value of affective processes for models of human cognition and learning
The hard problem of meta-learning is what-to-learn
The meta-learning toolkit needs stronger constraints
The reinforcement metalearner as a biologically plausible meta-learning framework
Author response
Meta-learning: Data, architecture, and both