Book contents
- Frontmatter
- Dedication
- Contents
- Foreword
- Preface
- Part I Sound Analysis and Representation Overview
- Part II Systems Theory for Hearing
- Part III The Auditory Periphery
- Part IV The Auditory Nervous System
- Part V Learning and Applications
- 24 Neural Networks for Machine Learning
- 25 Feature Spaces
- 26 Sound Search
- 27 Musical Melody Matching
- 28 Other Applications
- Bibliography
- Author Index
- Subject Index
- Plate section
26 - Sound Search
from Part V - Learning and Applications
Published online by Cambridge University Press: 28 April 2017
Summary
This task aims at identifying the pictures relevant to a few word query, within a large picture collection. Solving such a problem is of particular interest from a user perspective since most people are used to efficiently access large textual corpora through text querying and would like to benefit from a similar interface to search collections of pictures.
—“A discriminative kernel-based model to rank images from text queries,” Grangier and Bengio (2008)

This chapter is adapted from “Sound retrieval and ranking using auditory sparse-code representations” by Richard F. Lyon, Martin Rehn, Samy Bengio, Thomas C. Walters, and Gal Chechik (Lyon et al., 2010b).
Our first-reported large-scale application of the machine hearing approach is a sound search system (Lyon et al., 2010b) based directly on the PAMIR image search system described by Grangier and Bengio (2008). Both are forms of “document ranking and retrieval from text queries,” applied to image documents and sound documents, respectively.
While considerable effort has been devoted to speech and music recognition and indexing, the wide range of sounds that people—and machines—may encounter in their everyday life has been far less studied. Such sounds cover a wide variety of objects, actions, events, and communications: from natural ambient sounds, through animal and human vocalizations, to artificial sounds that are abundant in today's environment.
Building an artificial system that processes and classifies many types of sounds poses two major challenges. First, we need to develop efficient algorithms that can learn to classify or rank a large set of different sound categories. Recent developments in machine learning, and particularly progress in large-scale methods (Bottou et al., 2007), provide several efficient algorithms for this task. Second, and sometimes more challenging, we need to develop a representation of sounds that captures the full range of auditory features that humans use to discriminate and identify different sounds, so that machines have a chance to do so as well. Unfortunately, our current understanding of how the plethora of naturally encountered sounds should be represented is still very limited.
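At the core of a PAMIR-style ranker is a bilinear scoring function, f(q, d) = qᵀWd, that relates a bag-of-words query vector q to a document feature vector d (here, a sparse code for a sound), with W learned by passive-aggressive updates on triplets of a query, a relevant sound, and an irrelevant sound. The sketch below is a minimal illustration of that update rule, with hypothetical dimensions and random stand-in features rather than real auditory sparse codes and query terms:

```python
import numpy as np

# Hypothetical sizes for illustration: query vocabulary and sound feature dimension.
N_TERMS, N_FEATS = 100, 256


def score(W, q, d):
    """PAMIR-style relevance score f(q, d) = q^T W d."""
    return q @ W @ d


def pa_update(W, q, d_pos, d_neg, C=1.0):
    """One passive-aggressive step on a triplet (query, relevant sound,
    irrelevant sound): push d_pos above d_neg by a margin of 1."""
    loss = max(0.0, 1.0 - score(W, q, d_pos) + score(W, q, d_neg))
    if loss > 0.0:
        # Update direction: outer product of query and feature difference.
        V = np.outer(q, d_pos - d_neg)
        tau = min(C, loss / (np.sum(V * V) + 1e-12))
        W = W + tau * V
    return W


# Toy training loop on random sparse-ish stand-in data.
rng = np.random.default_rng(0)
W = np.zeros((N_TERMS, N_FEATS))
for _ in range(500):
    q = (rng.random(N_TERMS) < 0.03).astype(float)             # few-word query
    d_pos = rng.random(N_FEATS) * (rng.random(N_FEATS) < 0.1)  # sparse code
    d_neg = rng.random(N_FEATS) * (rng.random(N_FEATS) < 0.1)
    W = pa_update(W, q, d_pos, d_neg)
```

The passive-aggressive form keeps each update just large enough to satisfy the margin on the current triplet (capped by C), which is what makes the method practical at the large scales discussed above.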
- Type: Chapter
- Book: Human and Machine Hearing: Extracting Meaning from Sound, pp. 450–466
- Publisher: Cambridge University Press
- Print publication year: 2017