Theories or fragments?
Published online by Cambridge University Press: 10 November 2017
Abstract
Lake et al. argue persuasively that modelling human-like intelligence requires flexible, compositional representations in order to embody world knowledge. But human knowledge is too sparse and self-contradictory to be embedded in “intuitive theories.” We argue, instead, that knowledge is grounded in exemplar-based learning combined with highly flexible generalization, a viewpoint compatible both with non-parametric Bayesian modelling and with sub-symbolic methods such as neural networks.
Type: Open Peer Commentary
Copyright © Cambridge University Press 2017
Target article
Building machines that learn and think like people
Related commentaries (27)
Autonomous development and learning in artificial intelligence and robotics: Scaling up deep learning to human-like learning
Avoiding frostbite: It helps to learn from others
Back to the future: The return of cognitive functionalism
Benefits of embodiment
Building brains that communicate like machines
Building machines that adapt and compute like brains
Building machines that learn and think for themselves
Building on prior knowledge without building it in
Causal generative models are just a start
Children begin with the same start-up software, but their software updates are cultural
Crossmodal lifelong learning in hybrid neural embodied architectures
Deep-learning networks and the functional architecture of executive control
Digging deeper on “deep” learning: A computational ecology approach
Evidence from machines that learn and think like people
Human-like machines: Transparency and comprehensibility
Intelligent machines and human minds
Social-motor experience and perception-action learning bring efficiency to machines
The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction
The argument for single-purpose robots
The fork in the road
The humanness of artificial non-normative personalities
The importance of motivation and emotion for explaining human cognition
Theories or fragments?
Thinking like animals or thinking like colleagues?
Understand the cogs to understand cognition
What can the brain teach us about building artificial intelligence?
Will human-like machines make human-like mistakes?
Author response
Ingredients of intelligence: From classic debates to an engineering roadmap