
Meta-learned models beyond and beneath the cognitive

Published online by Cambridge University Press: 23 September 2024

Mihnea Moldoveanu*
Affiliation:
Desautels Centre for Integrative Thinking, Rotman School of Management, University of Toronto, Toronto, ON, Canada [email protected]
*Corresponding author.

Abstract

I propose that meta-learned models, and in particular the situation-aware deployment of "learning-to-infer" modules, can be advantageously extended to domains commonly thought to lie outside the cognitive, such as motivations and preferences, on the one hand, and the effectuation of micro- and coping-type behaviors, on the other.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

An account of the ways in which locally meta-learned models can address the difficulties arising from the computational intractability of Bayesian inference in non-small worlds, and the difficulties of articulating inference problems over unknown state spaces, is overdue. We have for some time been aware that the experimental evidence base of behavioral decision theory admits of plausible alternative explanations, via models of the experimental conditions (meant to illustrate, say, "base rate neglect" or the availability of a plausible simile), that explain how a subject can seem biased to an observer while in fact engaging in a reasonable and ecologically successful pattern of inference (Moldoveanu & Langer, 2002; Marsh, Todd, & Gigerenzer, 2004). At the same time, incorporating the informational (say, "bits") and computational ("operations performed upon bits") costs of inferential calculations in models of cognition not only "makes sense" – as storage and calculation both require work – but can rationalize patterns of behavior that previously appeared to some as irrational or sub-rational (Gershman, Horvitz, & Tenenbaum, 2015; Moldoveanu, 2011).
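A minimal sketch of the kind of cost-adjusted inference described above, assuming a toy hypothesis space and assumed per-strategy costs (none of which come from the commentary): an agent compares an exact Bayesian update against a cheap heuristic once each strategy is charged for the storage and operations it consumes.

```python
import numpy as np

# Sketch only: the strategies, costs, and numbers below are illustrative assumptions.

def exact_posterior(prior, likelihoods):
    """Full Bayesian update over all hypotheses: accurate but computationally costly."""
    unnorm = prior * likelihoods
    return unnorm / unnorm.sum()

def heuristic_posterior(prior, likelihoods, k=2):
    """Update only the k a priori most probable hypotheses: cheap but biased."""
    post = prior.copy()
    top = np.argsort(prior)[-k:]
    post[top] = prior[top] * likelihoods[top]
    return post / post.sum()

def net_value(posterior, truth_index, compute_cost):
    """Accuracy of the inference minus the (assumed) cost of computing it."""
    return posterior[truth_index] - compute_cost

prior = np.array([0.5, 0.3, 0.1, 0.1])
likelihoods = np.array([0.2, 0.7, 0.05, 0.05])
truth = 1

# With exact inference priced at 0.30 and the heuristic at 0.05 (assumed costs),
# the "biased" heuristic is the better choice for a resource-bounded agent.
print("exact:    ", round(net_value(exact_posterior(prior, likelihoods), truth, 0.30), 3))
print("heuristic:", round(net_value(heuristic_posterior(prior, likelihoods), truth, 0.05), 3))
```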

The meta-learning program of inquiry can be extended to phenomena and episodes that lie beyond or on the fringes of what we would call "cognitive," at both the "upstream" (motives, motivations, identities) and "downstream" (motor behaviors, perceptual inferences) ends. Motivations, "preferences," and meta-preferences can be understood as the outcomes of a process by which one learns from one's own reactions to an environment's responses to one's own behaviors. At the other end, in-the-moment "heedful coping" for the purpose of getting a "maximal grip" (Merleau-Ponty, 2012 [1974]) on an immediate physical or interpersonal environment can be described in ways that make transparent the structure of the inferential learning problem and the function one is trying to approximate: for example, producing situation-specific, successful combinations of vertical and horizontal pressures through one's digits in order to balance a large, flat object, or activations of muscles that produce particular facial and postural expressions meant to cause someone else to do, say, or think something in a particular way.

Consider, first, the "motivational" nexus of motivation-identity-preference. The idea that one infers one's own "attitudes" from one's behavior in situations whose affordances encode "choicefulness" has been around for a while (Bem, 1967), but the problem of "learning to prefer (X to Y, say)" is neither immediately self-evident (or "given") nor, once posed, computationally simple (or even tractable). But once preferences are understood as dispositions to act in particular ways, or combinations of ways, in a context, and are encoded by conditional probabilities linking combinations of sensorial fields and internal states to actions, the self-referential problem of inferring "What do I like?/What motivates me?" from observations of "what do I do (or choose to do) in situations in which…?" can get off the ground. One can develop preferences that are highly specific (combinations of ingredients mixed in precise sequences and proportions to make a "dish") or general ("I prefer injustice to disorder"), and motivations that are domain-specific ("decentralized decision rights as an approach to managing this team on this task") or less so ("to enable and facilitate other people's sense of autonomy"). Preferences and motivations can be more or less sophisticated (in terms of the number of features of a "situation" the inference of preference depends on, and the logical depth or computational complexity of the inference algorithm), more or less context-adaptive, and more or less susceptible to recursive refinement upon experiencing the ways in which one behaves in different environments. Thus, being able to adaptively modify the informational and computational complexity of the learning algorithm adds a much-needed degree of freedom to the modeler's toolkit. Both preferences and motivations may be learned without an explicit awareness of that which is learned or that on which learning is conditioned. "Learning to prefer" (or learning to be the self one, motivationally, is) can thus be phrased in a way that tracks "learning to infer," provided we can successfully formulate a local objective or cost function that the inferential process meliorates or seeks to extremize. "Internal conflicts" then appear as neither pathological nor irrational: An optimal (or ecologically adapted in virtue of its adaptiveness) brain can comprise a set of independent agents that have conflicting objectives (Livnat & Pippenger, 2006) but are jointly adaptive to the environmental niches its organism copes with frequently enough.
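One way to make the "learning to prefer" framing concrete is the following sketch, in which a preference is nothing more than a recursively re-estimated conditional propensity P(action | situation) read off one's own choice history; the class, the smoothing scheme, and the example situation are illustrative assumptions rather than a model proposed in the target article or this commentary.

```python
import numpy as np
from collections import defaultdict

# Sketch only: "preferences" as conditional action propensities P(action | situation),
# inferred from one's own observed behavior and refined as new episodes accumulate.

class SelfInferredPreference:
    def __init__(self, actions, smoothing=1.0):
        self.actions = list(actions)
        self.smoothing = smoothing  # pseudo-counts; governs how quickly the inference updates
        self.counts = defaultdict(lambda: defaultdict(float))

    def observe(self, situation, chosen_action):
        """Record what one actually did in a situation (the data for self-inference)."""
        self.counts[situation][chosen_action] += 1.0

    def preference(self, situation):
        """Return P(action | situation): the inferred disposition to act, i.e., the 'preference'."""
        c = self.counts[situation]
        raw = np.array([c[a] + self.smoothing for a in self.actions])
        return dict(zip(self.actions, raw / raw.sum()))

# One "learns what one likes" by watching what one repeatedly chooses, not by introspection.
p = SelfInferredPreference(actions=["order", "justice"])
for _ in range(8):
    p.observe("tradeoff", "order")
p.observe("tradeoff", "justice")
print(p.preference("tradeoff"))  # e.g., {'order': 0.82, 'justice': 0.18}
```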

Second, consider the "in-the-moment" perception-sensation-effectuation nexus, which has to do with trying to get a target variable to take on a certain value within a certain time window using the least or a fixed amount of energy – such as controlling the vertical displacement of a tray of containers holding hot liquids, the horizontal displacement of an inverted pendulum, the angular velocity, height, and vertical velocity of a coin used in a coin toss, or even a dynamical network of physiognomic micro-responses in an emotionally charged meeting with several attendees. In such cases, the inverse, counterfactual-dependent problem of "causal inference" is replaced by the forward, direct problem of effectuation (or control): One is attempting to produce, and in some cases also maintain, a specific set of values of variables X_T (the vertical displacement of a tray of liquids, the perceived visceral or emotional responses of another person) that form part of the state space X of a dynamically evolving system X_t = F(X, U, t) by making specific changes to one or more "input" or lever variables U, on the basis of observations of some proxy variables Y that encode or register filtered, biased states of X via Y = G(X, t). In this case, the underlying inference problem can be posed in terms of maximizing the time-bounded and energy-efficient controllability and observability of the system X_t = F(X, U, t); Y = G(X, t), for instance by the optimal choice and placement of the "lever" or "driver" nodes (Liu & Barabási, 2016), or in terms of making changes to the structural properties of the system in ways that alter its temporal dynamics or time constants (for instance, learning to control a tremor by using different combinations of muscle groups to effect a fine movement, which changes the parameters of X_t = F(X, U, t); Y = G(X, t) and thus the pole-zero distribution of its transfer function).
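For the linearized special case X_t = A·X + B·U, Y = C·X, the "choice and placement of lever or driver nodes" can be checked with the standard Kalman rank condition. The sketch below, in which the three-node chain and all numbers are illustrative assumptions, shows that driving the head of the chain renders the whole state controllable while driving its tail does not.

```python
import numpy as np

# Sketch only: a three-node chain (node 0 -> node 1 -> node 2) with one input.
# Controllability is checked via rank([B, AB, A^2 B]) == n (Kalman rank condition).

def controllability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

B_head = np.array([[1.0], [0.0], [0.0]])  # lever placed at the head of the chain
B_tail = np.array([[0.0], [0.0], [1.0]])  # lever placed at the tail of the chain

print("drive node 0:", is_controllable(A, B_head))  # True: influence propagates downstream
print("drive node 2:", is_controllable(A, B_tail))  # False: upstream nodes are unreachable
```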

Financial support

This research was funded by the Desautels Centre for Integrative Thinking, Rotman School of Management, University of Toronto.

Competing interest

None.

References

Bem, D. J. (1967). Self-perception: An alternative interpretation of cognitive dissonance phenomena. Psychological Review, 74, 183–200.
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds and machines. Science, 349(6245), 273–278.
Liu, Y.-Y., & Barabási, A.-L. (2016). Control principles of complex systems. Reviews of Modern Physics, 88(3), 035006.
Livnat, A., & Pippenger, N. (2006). An optimal brain can be composed of conflicting agents. Proceedings of the National Academy of Sciences, 103(9), 3198–3202.
Marsh, B., Todd, P. M., & Gigerenzer, G. (2004). Cognitive heuristics: Reasoning the fast and frugal way. In Leighton, J. P., & Sternberg, R. J. (Eds.), The nature of reasoning (pp. 273–287). Cambridge University Press.
Merleau-Ponty, M. (2012 [1974]). The phenomenology of perception. Routledge.
Moldoveanu, M. C. (2011). Inside man: The discipline of modeling human ways of being. Stanford University Press.
Moldoveanu, M. C., & Langer, E. J. (2002). False memories of the future: A critique of the applications of probabilistic reasoning to the study of cognitive processes. Psychological Review, 109(2), 358–375.