Book contents
- Frontmatter
- Contents
- List of contributors
- 1 Multimodal signal processing for meetings: an introduction
- 2 Data collection
- 3 Microphone arrays and beamforming
- 4 Speaker diarization
- 5 Speech recognition
- 6 Sampling techniques for audio-visual tracking and head pose estimation
- 7 Video processing and recognition
- 8 Language structure
- 9 Multimodal analysis of small-group conversational dynamics
- 10 Summarization
- 11 User requirements for meeting support technology
- 12 Meeting browsers and meeting assistants
- 13 Evaluation of meeting support technology
- 14 Conclusion and perspectives
- References
- Index
7 - Video processing and recognition
Published online by Cambridge University Press: 05 July 2012
Summary
This chapter describes approaches to video processing, in particular face and gesture detection and recognition. The role of video processing, as described in this chapter, is to extract from the raw video data all the information required by higher-level algorithms. The target high-level tasks include video indexing, knowledge extraction, and human activity detection. In the context of meetings, video processing focuses on extracting information about the presence, location, motion, and activities of people, along with their gaze and facial expressions, so that higher-level processing can interpret the semantics of the meeting.
Object and face detection
The object and face detection methods described in this chapter include pre-processing based on skin color detection, object detection through visual similarity using machine learning and classification, gaze detection, and facial expression detection.
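To make the idea of detection through machine-learned visual similarity concrete, the sketch below runs a pre-trained Haar cascade face detector (a Viola-Jones style boosted classifier) over a single frame. It is a minimal illustration of this class of detectors in general, not the specific detectors evaluated in the chapter, and the input file name is a placeholder.

```python
# Minimal sketch: face detection with a pre-trained Haar cascade
# (Viola-Jones style boosted classifier). Illustrative only; not the
# specific detectors described in this chapter.
import cv2

# Load OpenCV's bundled frontal-face cascade model.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("meeting_frame.jpg")          # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # detector operates on gray scale

# Scan the image at multiple scales; each hit is a candidate face region.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5, minSize=(30, 30))

# Draw the detected face bounding boxes and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("meeting_frame_faces.jpg", frame)
```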
Skin color detection
For skin color detection, color segmentation is typically used to find pixels whose color is similar to that of human skin (Hradiš and Juranek, 2006). The segmentation proceeds in several steps. First, the color image is converted to a gray-scale image using a skin color model, so that each pixel value corresponds to a skin color likelihood. This gray-scale image is then binarized by thresholding. The binary image is filtered by a sequence of morphological operations to remove noise. Finally, the connected components of the binary image are labeled and processed in order to recognize the type of each object.
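A minimal sketch of this pipeline is given below. It assumes a simple fixed HSV color range as the skin color model, which collapses the likelihood map and thresholding steps into a single range test; the actual model, thresholds, and morphological parameters used in the chapter differ, and the values here are illustrative assumptions.

```python
# Minimal sketch of skin color segmentation: skin color model -> binary
# mask -> morphological filtering -> connected component labeling.
# The HSV range below is an illustrative assumption, not the chapter's model.
import cv2
import numpy as np

frame = cv2.imread("meeting_frame.jpg")           # hypothetical input frame

# 1. Skin color model: classify each pixel as skin / non-skin.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower, upper = np.array([0, 40, 60]), np.array([25, 180, 255])
mask = cv2.inRange(hsv, lower, upper)             # binary image (0 or 255)

# 2. Morphological filtering to suppress noise and fill small holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# 3. Label connected components; each remaining blob is a candidate skin
#    region (face, hand, ...) for later classification.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, num_labels):                    # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 500:                                # ignore tiny noise blobs
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("meeting_frame_skin.jpg", frame)
```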
- Type: Chapter
- Information: Multimodal Signal Processing: Human Interactions in Meetings, pp. 103-124
- Publisher: Cambridge University Press
- Print publication year: 2012