
Do parents lead their children by the hand?

Published online by Cambridge University Press:  06 September 2005

ŞEYDA ÖZÇALIŞKAN
Affiliation: University of Chicago
SUSAN GOLDIN-MEADOW
Affiliation: University of Chicago

Abstract

The types of gesture+speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture+speech combinations that the children's caregivers produce. To explore this possibility, we videotaped 40 American child–caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (deictic, conventional, representational) and the relation it held to speech (reinforcing, disambiguating, supplementary). Children and their caregivers produced the same types of gestures in approximately the same distribution. However, the children differed from their caregivers in the way they used gesture in relation to speech. Over time, children produced many more REINFORCING (bike+point at bike), DISAMBIGUATING (that one+point at bike), and SUPPLEMENTARY (ride+point at bike) combinations. In contrast, the frequency and distribution of caregivers' gesture+speech combinations remained constant over time. Thus, the changing relation between gesture and speech observed in the children cannot be traced back to the gestural input the children receive. Rather, it appears to reflect changes in the children's own skills, illustrating once again gesture's ability to shed light on developing cognitive and linguistic processes.

Type: Research Article
Copyright: © 2005 Cambridge University Press


Footnotes

We thank Jana Iverson for her input on gesture coding; Kristi Schoendube and Jason Voigt for their administrative and technical help; and the project research assistants, Karyn Brasky, Kristin Duboc, Molly Nikolas, Jana Oberholtzer, Lillia Rissman, and Becky Seibel, for their help in collecting and transcribing the data. The research presented in this paper is supported by a grant from NIH (PO1 HD406–05) to Goldin-Meadow.