Probabilistic programming versus meta-learning as models of cognition
Published online by Cambridge University Press: 23 September 2024
Abstract
We summarize the recent progress made by probabilistic programming as a unifying formalism for the probabilistic, symbolic, and data-driven aspects of human cognition. We highlight differences from meta-learning in flexibility, statistical assumptions, and inferences about cognition. We suggest that the meta-learning approach could be further strengthened by considering connectionist and Bayesian approaches together, rather than exclusively one or the other.
Type: Open Peer Commentary
Copyright © The Author(s), 2024. Published by Cambridge University Press
Target article
Meta-learned models of cognition
Related commentaries (22)
Bayes beyond the predictive distribution
Challenges of meta-learning and rational analysis in large worlds
Combining meta-learned models with process models of cognition
Integrative learning in the lens of meta-learned models of cognition: Impacts on animal and human learning outcomes
Is human compositionality meta-learned?
Learning and memory are inextricable
Linking meta-learning to meta-structure
Meta-learned models as tools to test theories of cognitive development
Meta-learned models beyond and beneath the cognitive
Meta-learning and the evolution of cognition
Meta-learning as a bridge between neural networks and symbolic Bayesian models
Meta-learning goes hand-in-hand with metacognition
Meta-learning in active inference
Meta-learning modeling and the role of affective-homeostatic states in human cognition
Meta-learning: Bayesian or quantum?
Probabilistic programming versus meta-learning as models of cognition
Quantum Markov blankets for meta-learned classical inferential paradoxes with suboptimal free energy
Quo vadis, planning?
The added value of affective processes for models of human cognition and learning
The hard problem of meta-learning is what-to-learn
The meta-learning toolkit needs stronger constraints
The reinforcement metalearner as a biologically plausible meta-learning framework
Author response
Meta-learning: Data, architecture, and both