
Probabilistic programming versus meta-learning as models of cognition

Published online by Cambridge University Press:  23 September 2024

Desmond C. Ong*
Affiliation:
Department of Psychology, University of Texas at Austin, Austin, TX, USA [email protected] https://cascoglab.psy.utexas.edu/desmond/
Tan Zhi-Xuan
Affiliation:
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA [email protected] https://ztangent.github.io/
Joshua B. Tenenbaum
Affiliation:
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA [email protected] https://cocosci.mit.edu/
Noah D. Goodman
Affiliation:
Department of Psychology, Stanford University, Stanford, CA, USA; Department of Computer Science, Stanford University, Stanford, CA, USA [email protected] https://cocolab.stanford.edu/
*Corresponding author.

Abstract

We summarize recent progress in probabilistic programming as a unifying formalism for the probabilistic, symbolic, and data-driven aspects of human cognition. We highlight differences from meta-learning in flexibility, statistical assumptions, and inferences about cognition. We suggest that the meta-learning approach could be further strengthened by considering connectionist and Bayesian approaches together, rather than exclusively one or the other.
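As an illustration of the formalism the abstract refers to (this sketch is not from the commentary itself), a probabilistic program expresses a generative model as ordinary code with random choices, and conditions that model on observed data to obtain a posterior. The minimal plain-Python sketch below mimics a Church-style rejection query; all names (`flip`, `model`, `rejection_query`) are hypothetical:

```python
import random

def flip(p):
    """Bernoulli random primitive: True with probability p."""
    return random.random() < p

def model():
    """Generative model: a coin is either fair or a trick coin,
    and we observe five flips of it."""
    trick = flip(0.5)                      # latent variable
    weight = 0.95 if trick else 0.5        # trick coins land heads 95% of the time
    data = [flip(weight) for _ in range(5)]
    return trick, data

def rejection_query(observed, n=20000):
    """Condition by rejection sampling: run the model many times and
    keep only the runs whose simulated data match the observation."""
    hits = [trick for trick, data in (model() for _ in range(n))
            if data == observed]
    return sum(hits) / len(hits)           # posterior P(trick | observed)

random.seed(0)
posterior = rejection_query([True] * 5)    # observed: five heads in a row
```

Rejection sampling is only one (very inefficient) inference strategy; the point of probabilistic programming languages is that the same model definition can be paired with different, far more efficient inference algorithms.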

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

