This paper examines the relationship between headedness and language processing and considers two strategies that potentially ease language comprehension and production. Both strategies allow a language to minimize the number of arguments in a given clause, either by reducing the number of overtly expressed arguments or by reducing the number of structurally required arguments. The first strategy consists of minimizing the number of overtly expressed arguments by using more pro-drop for two-place predicates (Pro-drop bias). According to the second strategy, a language gives preference to one-place predicates over two-place predicates, thus minimizing the number of structural arguments (Intransitive bias). To investigate these strategies, we conducted a series of comparative corpus studies of SVO and SOV languages. Study 1 examined written texts of various genres and children's utterances in English and Japanese, while Study 2 examined narrative stories in English, Spanish, Japanese, and Turkish. The results of these studies showed that pro-drop was uniformly more common with two-place predicates than with one-place predicates, regardless of the OV/VO distinction. Thus the Pro-drop bias emerges as a universal economy principle for making utterances shorter. On the other hand, SOV languages showed a much stronger Intransitive bias than SVO languages. This finding suggests that SOV word order with all the constituents explicitly expressed is potentially harder to process; the dominance of one-place predicates is therefore a compensatory strategy to reduce the number of preverbal arguments. The overall pattern of results suggests that human languages utilize both general (Pro-drop bias) and headedness-order-specific (Intransitive bias) strategies to facilitate processing. The results on headedness-order-specific strategies are consistent with other researchers' findings on differential processing in head-final and non-head-final languages, for example, Yamashita & Chang's (2001) ‘long-before-short’ parameterization.
This work was supported by an NIMH postdoctoral training fellowship (T32 MH19554) at the University of Illinois and a Faculty Research Award at the University of Oregon to the first author, and by an award from the Harvard FAS Fund to the second author. We would like to thank Kay Bock, Bernard Comrie, Wind Cowles, Jeanette Gundel, Robert Kluender, Andrew Nevins, Johanna Nichols, and two anonymous JL referees for helpful suggestions, and Orin Gensler for his insightful comments on the pre-final version of this paper. We are grateful to Shin Fukuda, Alper Mizrak, Anita Saalfeld, and Marisol Garrido for help with data coding, and to Mary Theresa Seig for providing us with her ‘frog story’ data in English and Japanese. All errors are our sole responsibility.