
Current and future methodologies for quantitative analysis of information transfer in sign language and gesture data

Published online by Cambridge University Press: 26 April 2017

Evie Malaia
Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907. [email protected]

Abstract

State-of-the-art methods for analyzing video data now include motion capture and optical flow computed from video recordings. These techniques make it possible to differentiate visual communication from noncommunicative biological motion, enabling further inquiry into the neural bases of communication. The requirements for additional noninvasive methods of data collection, and for automatic analysis of natural gesture and sign language, are discussed.
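As a minimal illustration of the optical-flow approach, dense per-pixel flow can be computed between consecutive video frames and summarized as a motion-energy time series. The sketch below assumes Python with OpenCV and NumPy; the file name sign_video.mp4 and the mean-magnitude summary are illustrative assumptions, and the resulting series is only a proxy input for downstream complexity measures, not the published analysis pipeline itself.

import cv2
import numpy as np

cap = cv2.VideoCapture("sign_video.mp4")  # hypothetical input file
ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

motion_energy = []  # mean optical-flow magnitude per frame pair
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow: one 2-D displacement vector per pixel.
    # Positional arguments: prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    motion_energy.append(float(np.mean(mag)))
    prev_gray = gray
cap.release()

# motion_energy is now a time series of overall motion magnitude;
# spectral or fractal-complexity analyses would operate on this series.

Averaging flow magnitude over the whole frame is the simplest reduction; the same flow field also supports richer descriptors, such as direction histograms or region-of-interest statistics over individual articulators (hands, face).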

Type: Open Peer Commentary
Copyright © Cambridge University Press 2017

