Book contents
- Frontmatter
- Contents
- Preface
- Contributors
- 1 The Evolution of Object Categorization and the Challenge of Image Abstraction
- 2 A Strategy for Understanding How the Brain Accomplishes Object Recognition
- 3 Visual Recognition Circa 2008
- 4 On What It Means to See, and What We Can Do About It
- 5 Generic Object Recognition by Inference of 3-D Volumetric Parts
- 6 What Has fMRI Taught Us About Object Recognition?
- 7 Object Recognition Through Reasoning About Functionality: A Survey of Related Work
- 8 The Interface Theory of Perception: Natural Selection Drives True Perception to Swift Extinction
- 9 Words and Pictures: Categories, Modifiers, Depiction, and Iconography
- 10 Structural Representation of Object Shape in the Brain
- 11 Learning Hierarchical Compositional Representations of Object Structure
- 12 Object Categorization in Man, Monkey, and Machine: Some Answers and Some Open Questions
- 13 Learning Compositional Models for Object Categories from Small Sample Sets
- 14 The Neurophysiology and Computational Mechanisms of Object Representation
- 15 From Classification to Full Object Interpretation
- 16 Visual Object Discovery
- 17 Towards Integration of Different Paradigms in Modeling, Representation, and Learning of Visual Categories
- 18 Acquisition and Disruption of Category Specificity in the Ventral Visual Stream: The Case of Late Developing and Vulnerable Face-Related Cortex
- 19 Using Simple Features and Relations
- 20 The Proactive Brain: Using Memory-Based Predictions in Visual Recognition
- 21 Spatial Pyramid Matching
- 22 Visual Learning for Optimal Decisions in the Human Brain
- 23 Shapes and Shock Graphs: From Segmented Shapes to Shapes Embedded in Images
- 24 Neural Encoding of Scene Statistics for Surface and Object Inference
- 25 Medial Models for Vision
- 26 Multimodal Categorization
- 27 Comparing 2-D Images of 3-D Objects
- Index
- Plate section
9 - Words and Pictures: Categories, Modifiers, Depiction, and Iconography
Published online by Cambridge University Press: 20 May 2010
Summary
Introduction
Collections of digital pictures are now very common. Collections can range from a small set of family pictures to the entire contents of a picture site like Flickr. Such collections differ from what one might see if one simply attached a camera to a robot and recorded everything, because the pictures have been selected by people. They are not necessarily “good” pictures (say, by standards of photographic aesthetics), but, because they have been chosen, they display quite strong trends. It is common for such pictures to have associated text, which might be keywords or tags but is often in the form of sentences or brief paragraphs. Text could be a caption (a set of remarks explicitly bound to the picture, and often typeset in a way that emphasizes this), region labels (terms associated with image regions, perhaps identifying what is in that region), annotations (terms associated with the whole picture, often identifying objects in the picture), or just nearby text. We review a series of ideas about how to exploit associated text to help interpret pictures.
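To make these distinctions concrete, here is a minimal Python sketch of one way to store the kinds of associated text listed above. The class and field names are purely illustrative and are not taken from the chapter.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PictureRecord:
    """A picture plus the kinds of associated text distinguished above (illustrative only)."""
    image_path: str
    caption: str = ""                                            # remarks explicitly bound to the picture
    annotations: List[str] = field(default_factory=list)         # whole-picture terms, often naming objects
    region_labels: Dict[str, str] = field(default_factory=dict)  # region id -> term for that region
    nearby_text: str = ""                                        # text that merely appears near the picture


# A purely invented example record:
example = PictureRecord(
    image_path="holiday/beach_001.jpg",
    caption="Children playing on the beach at sunset.",
    annotations=["beach", "children", "sunset"],
    region_labels={"r1": "child", "r2": "sand", "r3": "sky"},
    nearby_text="On our last day we walked down to the shore...",
)
```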
Word Frequencies, Objects, and Scenes
Most pictures in electronic form seem to have related words nearby (or sound or metadata, and so on; we focus on words), so it is easy to collect word and picture datasets, and there are many examples. Such multimodal collections should probably be seen as the usual case, because one usually has to deliberately ignore information to collect only images.
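As a small, invented illustration of the kind of word statistic such collections support, the sketch below counts how often each word appears as a tag across a toy set of pictures; the data and function name are assumptions, not drawn from the chapter.

```python
from collections import Counter
from typing import Iterable, List


def tag_frequencies(tag_lists: Iterable[List[str]]) -> Counter:
    """Count how often each word occurs as a tag across a picture collection."""
    counts: Counter = Counter()
    for tags in tag_lists:
        counts.update(t.lower() for t in tags)
    return counts


# Toy collection: each inner list is the tag set of one picture.
collection = [
    ["beach", "sunset", "children"],
    ["beach", "dog"],
    ["street", "dog", "bicycle"],
]
print(tag_frequencies(collection).most_common(3))
# e.g. [('beach', 2), ('dog', 2), ('sunset', 1)]
```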
- Type: Chapter
- In: Object Categorization: Computer and Human Vision Perspectives, pp. 167-181
- Publisher: Cambridge University Press
- Print publication year: 2009