
LINGUISTIC CORRELATES OF SECOND LANGUAGE PROFICIENCY

Proof of Concept with ILR 2–3 in Russian

Published online by Cambridge University Press: 13 March 2012

Michael H. Long* (University of Maryland)
Kira Gor (University of Maryland)
Scott Jackson (University of Maryland)

*Address correspondence to Michael Long, School of Languages, Literatures, and Cultures, 3124 Jiménez Hall, University of Maryland, College Park, MD 20742; e-mail: [email protected].

Abstract

With Russian as the target language, a proof-of-concept study was undertaken to determine whether it is possible to identify linguistic features whose control is implicated in progress on the Interagency Language Roundtable (ILR) proficiency scale, and thereby better inform the instructional process. Following its development in an instrumentation study, a revised version of a computer-delivered battery of 33 perception and production tasks was administered to 68 participants, 57 of them learners between levels 2 and 3 on the ILR scale (21 at ILR 2, 18 at 2+, and 18 at 3) and 11 of them native-speaker controls, whose proficiency was tested via an ILR oral proficiency telephone interview. The tasks sampled participants' control of Russian phonology, morphology, syntax, lexis, and collocations. Relationships between control of the linguistic features and the ILR levels of interest were assessed statistically. All 33 tasks, 18 of which assessed abilities in perception and 15 of which assessed abilities in production, were found to differentiate ILR proficiency levels 2 and 3, and a subset was also found to distinguish levels 2 and 2+ and levels 2+ and 3. On the basis of the results, a checklist of linguistic features pegged to proficiency levels was produced; the checklist can be useful to syllabus designers, teachers, and learners themselves, as well as providing a basis for future diagnostic tests.
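To make the abstract's statistical claim concrete, the following is a minimal illustrative sketch only, not the authors' actual analysis script. It assumes a logit mixed-effects model fit in R with the lme4 package, relating item-level accuracy on a single task to ILR proficiency group, with crossed random intercepts for participants and items; the data frame and column names (task_data, accuracy, ilr_level, subject, item) are hypothetical.

    # Illustrative sketch only; assumes hypothetical long-format data with one
    # row per participant x item response for a single task.
    library(lme4)

    # accuracy:  1 = correct, 0 = incorrect
    # ilr_level: factor with levels "2", "2+", "3", "NS" (native-speaker control)
    # subject, item: identifiers for crossed random intercepts
    fit <- glmer(accuracy ~ ilr_level + (1 | subject) + (1 | item),
                 data = task_data, family = binomial)
    summary(fit)  # fixed-effect contrasts indicate which ILR groups the task separates

Under this sketch, a reliable ilr_level contrast for a given task would correspond to that task differentiating the proficiency levels in question.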

Type: Articles
Copyright © Cambridge University Press 2012

