Published online by Cambridge University Press: 20 July 2017
To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. bæz) coupled with non-intact auditory speech (excised onsets, signified by /–b/æz). Children discriminated syllable pairs that differed in intactness (i.e. bæz:/–b/æz) and identified non-intact nonwords (/–b/æz). We predicted that visual speech would lead children to perceive the non-intact onsets as intact, resulting in more 'same' responses for discrimination and more intact (i.e. bæz) responses for identification in the audiovisual than in the auditory mode. Visual speech for the easy-to-speechread /b/, but not for the difficult-to-speechread /g/, boosted discrimination and identification (by about 35–45%) in children aged four to fourteen years. The influence of visual speech on discrimination was uniquely associated with the influence of visual speech on identification and with receptive vocabulary skills.
This research was supported by the National Institute on Deafness and Other Communication Disorders, grant DC-00421, to the University of Texas at Dallas. Dr Abdi acknowledges the support of an EURIAS fellowship at the Paris Institute for Advanced Studies (France), co-funded by the European Union's 7th Framework Programme for research and by the French State, managed by the “Agence Nationale de la Recherche (program: Investissements d'avenir, ANR-11-LABX-0027-01 Labex RFIEA+).” We sincerely thank our speech science colleagues for their guidance and for their advice to adopt a perceptual criterion for editing the non-intact stimuli. We appreciate Dr Nancy Tye-Murray's comments on an earlier version of this paper. We thank the children and parents who participated and the research staff who assisted, namely Aisha Aguilera, Carissa Dees, Nina Dinh, Nadia Dunkerton, Alycia Elkins, Brittany Hernandez, Cassandra Karl, Demi Krieger, Michelle McNeal, Jeffrey Okonye, and Kimberly Periman of the University of Texas at Dallas (data collection, analysis, presentation), and Derek Hammons and Scott Hawkins of the University of Texas at Dallas and Dr Brent Spehar and Dr Nancy Tye-Murray of Washington University School of Medicine (stimuli recording and editing, computer programming).