
12 - Motion-Tracking Technology for the Study of Gesture

from Part II - Ways of Approaching Gesture Analysis

Published online by Cambridge University Press: 01 May 2024

Alan Cienki, Vrije Universiteit, Amsterdam

Summary

In this chapter I discuss the role of motion-tracking technology in the study of gesture, both from a production perspective and for understanding how gestures support comprehension. I first give an overview of motion-tracking technologies to provide a starting point for researchers currently using, or interested in using, motion tracking. Next, I discuss how motion tracking has been employed in the past to understand gesture production and comprehension, as well as how it can be utilized for more complex experiments, including those involving virtual reality. This is not meant as a comprehensive review of the field of motion tracking, but rather as a source of inspiration for how such methodologies can be employed to tackle relevant research questions. The chapter concludes with suggestions for how to build upon previous research by asking new, previously inaccessible questions, and for how motion-tracking technology can be used to move toward a more replicable and quantitative study of gesture.
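To make the idea of a "quantitative study of gesture" concrete, the sketch below (not taken from the chapter itself) computes two kinematic features often derived from motion-tracking output: peak speed and the number of submovements (local speed maxima) of a single tracked keypoint, such as a wrist. The function name, array layout, sampling rate, and peak threshold are illustrative assumptions; any tracking system that exports per-frame 3D coordinates could feed such an analysis.

```python
import numpy as np

def gesture_kinematics(positions, fps=100.0, peak_threshold=0.15):
    """Compute basic kinematic features from one tracked keypoint.

    positions: (n_frames, 3) array of x, y, z coordinates in metres,
               e.g. a wrist keypoint exported from a tracking system.
    fps: sampling rate of the tracker in frames per second (assumed).
    peak_threshold: speed (m/s) a local maximum must exceed to count
                    as a submovement; the value here is illustrative.
    """
    # Frame-to-frame displacement converted to speed in metres/second.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps

    # A local maximum is a sample larger than both of its neighbours.
    is_peak = (speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:])
    n_submovements = int(np.sum(is_peak & (speed[1:-1] > peak_threshold)))

    return {
        "peak_speed": float(speed.max()),
        "mean_speed": float(speed.mean()),
        "submovements": n_submovements,
    }

# Example with synthetic data: 2 seconds of sinusoidal wrist motion
# along the x axis, sampled at roughly 100 frames per second.
t = np.linspace(0, 2, 200)
wrist = np.column_stack([0.2 * np.sin(2 * np.pi * t),
                         np.zeros_like(t),
                         np.zeros_like(t)])
print(gesture_kinematics(wrist, fps=100.0))
```

Because such features are computed directly from coordinate time series rather than from human annotation, the same script applied to the same recording always yields the same values, which is what makes analyses of this kind replicable across labs.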

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024


