Causal generative models are just a start
Published online by Cambridge University Press: 10 November 2017
Abstract
Human reasoning is richer than Lake et al. acknowledge, and the emphasis on theories of how images and scenes are synthesized is misleading. For example, the world knowledge used in vision presumably involves a combination of geometric, physical, and other knowledge, rather than just a causal theory of how the image was produced. In physical reasoning, a model can be a set of constraints rather than a physics engine. In intuitive psychology, many inferences proceed without detailed causal generative models. How humans reliably perform such inferences, often in the face of radically incomplete information, remains a mystery.
Type: Open Peer Commentary
Copyright © Cambridge University Press 2017
Target article
Building machines that learn and think like people
Author response
Ingredients of intelligence: From classic debates to an engineering roadmap