Book contents
- Frontmatter
- Contents
- Foreword: Out of Sight, Out of Mind
- Preface
- Part one New Interfaces and Novel Applications
- 1 Smart Rooms: Machine Understanding of Human Behavior
- 2 GestureComputer – History, Design and Applications
- 3 Human Reader: A Vision-Based Man-Machine Interface
- 4 Visual Sensing of Humans for Active Public Interfaces
- 5 A Human–Robot Interface using Pointing with Uncalibrated Stereo Vision
- Part two Tracking Human Action
- Part three Gesture Recognition and Interpretation
- Acknowledgements
- Bibliography
- List of contributors
3 - Human Reader: A Vision-Based Man-Machine Interface
from Part one - New Interfaces and Novel Applications
Published online by Cambridge University Press: 06 July 2010
Abstract
This chapter introduces the Human Reader project and research results on human-machine interfaces based on image-sequence analysis. It investigates real-time, multimodal gestural interaction, a capability that is not easily achieved. Primary emphasis is placed on real-time responsiveness for head and hand gestural interaction, as realized in the project's Headreader and Handreader. Their performance is demonstrated in two experimental interactive applications, the CG Secretary Agent and the FingerPointer. We then turn to facial expression as a rich source of nonverbal messages. A preliminary experiment using an optical-flow algorithm illustrates what kind of information can be extracted from facial gestures; real-time responsiveness there is left to subsequent research, some of which is introduced in other chapters of this book. Finally, new directions in vision-based interface research are briefly outlined on the basis of these experiences.
Introduction
Human body movement plays a very important role in our daily communication, not only in human-to-human interaction but also in interaction with computers and other inanimate objects. We can easily infer people's intentions from their gestures. I believe that a computer possessing “eyes”, in addition to a mouse and keyboard, could use such visual input to interact with humans in a smooth, enhanced and well-organized way. If a machine can sense and identify an approaching user, for example, it can load that user's personal profile and prepare the necessary configuration before he or she starts to use it.
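The scenario above, sensing an approaching user and preparing a configuration before first input, can be sketched in a few lines. This is a minimal, hedged illustration using simple frame differencing on synthetic grayscale frames; the names `detect_presence` and `load_profile` are illustrative assumptions, not part of the Human Reader system described in the chapter.

```python
# Illustrative sketch (not the authors' implementation): detect a user's
# arrival by counting pixels that change between consecutive camera frames,
# then load that user's profile before interaction begins.

def detect_presence(prev_frame, frame, threshold=10, min_changed=5):
    """Return True if enough pixels changed between two consecutive frames."""
    changed = sum(
        1
        for prev_row, row in zip(prev_frame, frame)
        for p, q in zip(prev_row, row)
        if abs(p - q) > threshold
    )
    return changed >= min_changed

def load_profile(user_id):
    """Stand-in for loading a personal profile and configuration."""
    return {"user": user_id, "layout": "preferred-desktop"}

# Two 4x4 synthetic frames: the second contains a bright "user" region.
empty = [[0] * 4 for _ in range(4)]
with_user = [[0, 0, 0, 0],
             [0, 200, 200, 0],
             [0, 200, 200, 0],
             [0, 200, 200, 0]]

if detect_presence(empty, with_user):
    profile = load_profile("alice")
    print(profile["user"])  # prints "alice": configured before first input
```

A real system would of course use a camera stream and a recognition step to identify *which* user is approaching; the point here is only the control flow the paragraph describes.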
- Type: Chapter
- In: Computer Vision for Human-Machine Interaction, pp. 53–82
- Publisher: Cambridge University Press
- Print publication year: 1998