Book contents
- Frontmatter
- Contents
- Foreword: Out of Sight, Out of Mind
- Preface
- Part one New Interfaces and Novel Applications
- Part two Tracking Human Action
- Part three Gesture Recognition and Interpretation
- 11 A Framework for Gesture Generation and Interpretation
- 12 Model-Based Interpretation of Faces and Hand Gestures
- 13 Recognition of Hand Signs from Complex Backgrounds
- 14 Probabilistic Models of Verbal and Body Gestures
- 15 Looking at Human Gestures
- Acknowledgements
- Bibliography
- List of contributors
12 - Model-Based Interpretation of Faces and Hand Gestures
from Part three - Gesture Recognition and Interpretation
Published online by Cambridge University Press: 06 July 2010
Abstract
Face and hand gestures are an important means of communication between humans. Similarly, automatic face and gesture recognition systems could be used for contactless human-machine interaction. Developing such systems is difficult, however, because faces and hands are complex and highly variable structures. We describe how flexible models can be used to represent the varying appearance of faces and hands, and how these models can be used for tracking and interpretation. Experimental results are presented for face pose recovery, face identification, expression recognition, gender recognition and gesture interpretation.
Introduction
This chapter addresses the problem of locating and interpreting faces and hand gestures in images. By interpreting face images we mean recovering the 3D pose, identifying the individual, and recognizing the expression and gender; for hand images we mean recognizing the configuration of the fingers. In both cases different instances of the same class are not identical; for example, face images belonging to the same individual will vary because of changes in expression, lighting conditions, 3D pose and so on. Similarly, hand images displaying the same gesture will vary in form.
We have approached these problems by modeling the ways in which the appearance of faces and hands can vary, using parametrised deformable models that take into account all the main sources of variability. A robust image search method [90, 89] is used to fit the models to new face/hand images, recovering compact parametric descriptions.
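To make the idea of a parametrised deformable model concrete, the sketch below builds a simple linear shape model of the familiar form x = x̄ + Pb, where a compact parameter vector b controls the main modes of variation about a mean shape. This is an illustrative assumption about the model family, not the chapter's exact method, and the training shapes here are synthetic; a real model would be trained on annotated face or hand landmarks.

```python
# Minimal sketch of a linear flexible shape model: x = mean + P @ b.
# Assumed form for illustration; training data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 50 shapes, each 10 (x, y) landmarks flattened.
shapes = rng.normal(size=(50, 20))

# Mean shape plus principal modes of variation via SVD of centered data.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
_, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
variances = singular_values**2 / (len(shapes) - 1)

# Retain enough modes to explain 95% of the training variance.
cum = np.cumsum(variances) / variances.sum()
t = int(np.searchsorted(cum, 0.95)) + 1
P = modes[:t].T  # (20, t) matrix whose columns are the retained modes

# A compact parameter vector b generates a plausible new shape ...
b = rng.normal(size=t) * np.sqrt(variances[:t])
x = mean_shape + P @ b

# ... and fitting an observed shape recovers its parameters by projection.
b_fit = P.T @ (x - mean_shape)
assert np.allclose(b_fit, b)
```

Because the columns of P are orthonormal, projection exactly inverts generation for shapes lying in the model subspace; image search then amounts to adjusting b (plus pose parameters) until the modeled shape matches the evidence in the image.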
- Computer Vision for Human-Machine Interaction, pp. 217-234. Cambridge University Press. Print publication year: 1998