Book contents
- Frontmatter
- Contents
- Foreword: Out of Sight, Out of Mind
- Preface
- Part one New Interfaces and Novel Applications
- Part two Tracking Human Action
- Part three Gesture Recognition and Interpretation
- 11 A Framework for Gesture Generation and Interpretation
- 12 Model-Based Interpretation of Faces and Hand Gestures
- 13 Recognition of Hand Signs from Complex Backgrounds
- 14 Probabilistic Models of Verbal and Body Gestures
- 15 Looking at Human Gestures
- Acknowledgements
- Bibliography
- List of contributors
11 - A Framework for Gesture Generation and Interpretation
from Part three - Gesture Recognition and Interpretation
Published online by Cambridge University Press: 06 July 2010
Abstract
In this chapter I describe ongoing research that seeks to provide a common framework for the generation and interpretation of spontaneous gesture in the context of speech. I present a testbed for this framework in the form of a program that generates speech, gesture, and facial expression from underlying rules specifying (a) what speech and gesture are generated on the basis of a given communicative intent, (b) how communicative intent is distributed across communicative modalities, and (c) where one can expect to find gestures with respect to the other communicative acts. Finally, I describe a system that has the capacity to interpret communicative facial, gestural, intonational, and verbal behaviors.
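The three kinds of rules described above can be sketched as a small generation pipeline. This is a minimal, hypothetical illustration only: all class names, fields, and the example utterance are assumptions for exposition and are not taken from the system described in the chapter.

```python
from dataclasses import dataclass

@dataclass
class CommunicativeIntent:
    content: str            # the proposition to convey
    emphasis: bool = False  # whether this intent carries new (rhematic) information

@dataclass
class CommunicativeAct:
    modality: str   # "speech", "gesture", or "face"
    payload: str
    onset: float    # time offset within the utterance, in seconds

def plan_utterance(intent: CommunicativeIntent) -> list:
    """Sketch of the three rule stages: (a) generation, (b) distribution, (c) timing."""
    acts = []
    # (a) decide what speech (and gesture) to generate for the given intent
    acts.append(CommunicativeAct("speech", intent.content, onset=0.0))
    if intent.emphasis:
        # (b) distribute communicative intent across modalities:
        # new information is also realized gesturally and facially
        acts.append(CommunicativeAct("gesture", "iconic:" + intent.content, onset=0.1))
        # (c) place the nonverbal behaviors so they co-occur with the
        # relevant stretch of speech (here, a fixed illustrative offset)
        acts.append(CommunicativeAct("face", "eyebrow_raise", onset=0.1))
    return acts

acts = plan_utterance(
    CommunicativeIntent("the bank is on the left of the river", emphasis=True))
print([a.modality for a in acts])  # → ['speech', 'gesture', 'face']
```

In a real multimodal generator the timing in step (c) would be derived from the prosodic structure of the synthesized speech rather than a fixed offset.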
Introduction
In this chapter I address one very particular use of the term “gesture” – that is, hand gestures that co-occur with spoken language. Why such a narrow focus, given that so much of the work on gesture in the human-computer interface community has treated gestures as their own language – gestures that might replace the keyboard, mouse, or speech as a direct command language? Because I don't believe that everyday human users have any more experience with, or natural affinity for, a “gestural language” than they have with DOS commands. We have plenty of experience with actions and the manipulation of objects. But the type of gestures defined by Väänänen and Böhm (1993) as “body movements which are used to convey some information from one person to another” are in fact found primarily in association with spoken language (90% of gestures occur in the context of speech, according to McNeill, 1992).
Computer Vision for Human-Machine Interaction, pp. 191–216. Publisher: Cambridge University Press. Print publication year: 1998.