Preface
By the late eighties, the computational approach to perception advocated by Marr (1982) was well established. In vision, most properties of the 2½-D sketch, such as surface orientation and 3D shape, admitted solutions, especially for machine vision systems operating in constrained environments. Similarly, tactile and force sensing was rapidly becoming practical for robotics and prostheses.

Yet in spite of this progress, it was increasingly apparent that machine perceptual systems were still enormously impoverished versions of their biological counterparts. Machine systems simply lacked the inductive intelligence and knowledge that allowed biological systems to operate successfully over a variety of unspecified contexts and environments. The role of "top-down" knowledge had clearly been underestimated: it mattered far more than precise edge, region, "textural", or shape information. It was also becoming obvious that even when adequate "bottom-up" information was available, we did not understand how this information should be combined across the different perceptual modules, each operating under its own, often quite different and competing, constraints (Jain, 1989). Furthermore, what principles justified the choice of these "constraints" in the first place? Problems such as these all seemed to stem from a lack of understanding of how prior knowledge should be brought to bear upon the interpretation of sensory data. Of course, this conclusion came as no surprise to many cognitive and experimental psychologists (e.g. Gregory, 1980; Hochberg, 1988; Rock, 1983), or to neurophysiologists who were exploring the role of the massive reciprocal descending pathways (Maunsell & Newsome, 1987; Van Essen et al., 1992).
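That question of how prior knowledge should be brought to bear on sensory data is precisely what the book's title formalizes. As a minimal sketch (the notation here is illustrative, not quoted from the preface): for a scene description $S$ and image data $I$, Bayes' rule gives

$$ P(S \mid I) \;\propto\; P(I \mid S)\, P(S), $$

where the prior $P(S)$ carries the "top-down" knowledge and the likelihood $P(I \mid S)$ the "bottom-up" image-formation constraints, so that competing sources of evidence can, in principle, be combined within a single posterior rather than by ad hoc weighting.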
Perception as Bayesian Inference, pp. ix–xii. Cambridge University Press, 1996.