
Technologies for Language Assessment

Published online by Cambridge University Press:  19 November 2008

Extract

Technology continues to move forward with smaller and faster hardware, more efficient software, multimedia capabilities, and universal reach through advanced telecommunications. Evidence for the successful use of technology in education is overwhelming (Kulik 1994, Snow and Mandinach 1991), provided that the technology is used in a reasoned way: uses that reflect an understanding of what is to be taught and tested. With the above in mind, this paper reviews current and developing technology uses that are relevant to language assessment and discusses examples of recent linguistic applications from our own laboratory at Educational Testing Service. Below, we describe the processes of language test development and the functions that they serve from the perspective of a large testing organization. We encourage readers to think of technology broadly, to include not only hardware, software, and telecommunications, but other technologies as well, such as job analysis, Item Response Theory (IRT) scaling, standards-setting, and group assessments. Only in this way can the broad activities of language testing, and the relevance of new technologies, be assessed.
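By way of illustration only (the extract itself does not work through the mathematics), IRT scaling rests on item response models such as the widely used three-parameter logistic model, which gives the probability that an examinee of ability \theta answers item i correctly as

P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp[-a_i(\theta - b_i)]}

where a_i, b_i, and c_i are the item's discrimination, difficulty, and lower-asymptote (guessing) parameters. Computer-adaptive tests of the kind cited in the bibliography below (e.g., Kaya-Carton et al. 1991, Madsen 1991) select each successive item to be maximally informative at the examinee's current ability estimate.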

Type
Technology in Language Instruction
Copyright
Copyright © Cambridge University Press 1996

References

UNANNOTATED BIBLIOGRAPHY

Abraham, R. G. and Liou, H.-C. 1991. Interaction generated by three computer programs. In Dunkel, P. (ed.) Computer-assisted language learning and testing: Research issues and practice. New York: Newbury House. 85–109.
Adam, J. 1993. Special report: Interactive multimedia. IEEE Spectrum. 3.22–39.
Alty, J., Bergan, M., Crauford, P. and Dolphin, C. 1993. Experiments using multimedia interface in process control: Some initial results. Computers and Graphics. 17.3.205–218.
Apple Computer. 1979. Lemonade stand. Cupertino, CA: Apple Computer. [Computer program.]
Aust, R., Kelley, M. J. and Roby, W. 1993. The use of hyper-reference and conventional dictionaries. Educational Technology Research & Development. 41.4.63–73.
Bachman, L. 1990. Fundamental considerations in language testing. New York: Oxford University Press.
Bailin, A. and Levin, L. 1989. Introduction: Intelligent computer-assisted language instruction. Computers and the Humanities. 23.3–11.
Bee-Lay, S. and Yee-Ping, S. 1991. English by e-mail: Creating a global classroom via the medium of computer technology. ELT Journal. 45.287–292.
Bejar, I. I. and Braun, H. I. 1994. On the synergy between assessment and instruction: Early lessons from computer-based simulations. Machine-Mediated Learning. 4.1.5–25.
Bell, T. E. (ed.) 1994. Technology 1994: Analysis and forecast issue. [Special issue of IEEE Spectrum 31.1.]
Bernstein, J., Cohen, M., Murveit, H., Rtischev, D. and Weintraub, M. 1990. Automatic evaluation and training in English pronunciation. In Proc. ICSLP-90. Kobe, Japan: International Conference on Spoken Language Systems. 1185–1188.
Borras, I. 1993. Developing and assessing Practicing spoken French: A multimedia program for improving speaking skills. Educational Technology Research & Development. 41.4.91–103.
Burstein, J. C. and Kaplan, R. M. 1995. On the application of context to natural language processing applied to the analysis of test responses. In Proceedings from the 1995 workshop on context in NLP. Montreal: International Joint Conference on Artificial Intelligence.
Burston, J. 1993. Exploiting available technology. CALICO Journal. 11.1.47–52.
CALI, Inc. 1992. ELLIS. American Fork, UT: CALI, Inc. [Computer program.]
Cherry, S. 1982. Eliza. New York: Gnosis, Inc. [Computer program.]
Chun, D. A. and Plass, J.-L. 1995. Project CyberBuch: A hypermedia approach to computer-assisted language learning. Journal of Educational Multimedia and Hypermedia. 4.1.95–116.
Cognition and Technology Group at Vanderbilt. 1990. Anchored instruction and its relationship to situated cognition. Educational Researcher. 19.6.2–10.
Collier, R. 1983. The word processor and revision strategies. College Composition and Communication. 34.134.
Dalton, D. W. and Hannafin, M. J. 1987. The effects of word processing on written composition. Journal of Educational Research. 80.237–245.
Daneman, M. and Carpenter, P. A. 1980. Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior. 19.450–466.
Daneman, M. and Carpenter, P. A. 1983. Individual differences in integrating information between and within sentences. Journal of Experimental Psychology: Learning, Memory, and Cognition. 9.561–584.
Daneman, M. and Green, I. 1986. Individual differences in comprehending and producing words in context. Journal of Memory and Language. 25.1–18.
Davey, D., Jones, K. G. and Fox, J. 1995. Multimedia for language learning: Some design issues. Computer Assisted Language Learning. 8.1.31–44.
Efremenko, N. 1994. Learn to speak French. Knoxville, TN: The Learning Company. [Computer program.]
Engle, R. 1988. Working memory: An individual differences approach. Columbia, SC: Department of Psychology, University of South Carolina. [Tech. Rep. No. 1.]
Fox, E. A., Akscyn, R. M., Furuta, R. K. and Leggett, J. J. (eds.) 1995. Digital libraries. [Special issue of Communications of the ACM. 38.4.]
Frase, L. T. 1981. Ethics of imperfect measures. IEEE Transactions on Professional Communications. 24.48–50.
Frase, L. T. 1987. Computer analysis of written materials. In Reinking, D. (ed.) Computers and reading. New York: Teachers College Press. 76–96.
Frase, L. T. and Faletti, J. 1995. Computer analysis of the TOEFL Test of Written English. Personal communication.
Friedlander, A. and Markel, M. 1990. Some effects of the Macintosh on technical writing assignments. Computers and Composition. 8.1.69–79.
Gitomer, D., Steinberg, L. and Mislevy, R. 1993. Diagnostic assessment of troubleshooting skill in an intelligent tutoring system. In Nichols, P., Chipman, S. and Brennan, R. (eds.) Cognitively diagnostic assessment. Hillsdale, NJ: Lawrence Erlbaum. 73–101.
Harasym, P., Gondocz, S. and McCreary, J. 1993. The evaluation of a microcomputer scoring program to mark examinee write-in responses. Journal of Educational Computing Research. 9.79–88.
Hartley, J. 1993. Writing, thinking and computers. British Journal of Educational Technology. 24.1.22–31.
Higgins, J. and Johns, T. 1984. Computers in language learning. London: Collins.
Ide, N. and Véronis, J. (eds.) 1995. The text encoding initiative: Background and contexts. [Special issue of Computers and the Humanities. 29.3.]
Jacoby, S. 1993. Assisting secondary limited English proficient students through the implementation of computer-assisted language learning. [ED 364101]
Jamieson, J., Campbell, J., Norfleet, L. and Berbisada, N. 1993. Reliability of a computerized scoring routine for an open-ended task. System. 21.305–322.
Just, M. A. 1990. Individual differences in verbal comprehension. ETS Seminar presentation. Princeton, NJ: Educational Testing Service.
Kaplan, R. M. and Bennett, R. M. 1994. Using the free-response scoring tool to automatically score the formulating-hypotheses item. Princeton, NJ: Educational Testing Service. [Research Report 94–08.]
Kay Elemetrics. 1980. Visi-Pitch. Lincoln Park, NJ: Kay Elemetrics. [Computer program.]
Kaya-Carton, E., Carton, A. and Dandonoli, P. 1991. Developing a computer-adaptive test of French reading proficiency. In Dunkel, P. (ed.) Computer-assisted language learning and testing: Research issues and practice. New York: Newbury House. 259–284.
Kud, J. M., Krupa, G. R. and Rau, L. F. 1994. Methods for categorizing short answer responses. In Proceedings of the Educational Testing Service Conference on Natural Language Processing Techniques and Technology in Education and Assessment. Princeton, NJ: Educational Testing Service. 31–40.
Kulik, J. 1994. Meta-analytic studies of findings on computer-based instruction. In Baker, E. L. and O'Neil, H. F. Jr. (eds.) Technology assessment in education and training. Hillsdale, NJ: Lawrence Erlbaum. 9–33.
Lajoie, S. and Lesgold, A. 1992. Apprenticeship training in the workplace: Computer-coached practice environment as a new form of apprenticeship. In Farr, M. J. and Psotka, J. (eds.) Intelligent instruction by computer. Philadelphia: Taylor and Francis. 15–36.
Lam, A. 1983. Error analysis exercise (Articles). Unpublished computer program.
Learning Company. 1994. Learn to speak French. Knoxville, TN: The Learning Company. [Computer program.]
Madsen, H. 1991. Computer-adaptive testing of listening and reading comprehension: The Brigham Young University approach. In Dunkel, P. (ed.) Computer-assisted language learning and testing: Research issues and practice. New York: Newbury House. 237–258.
McAllister, C. and Louth, R. 1988. The effect of word processing on the quality of basic writers' revisions. Research in the Teaching of English. 22.417–427.
McKeown, K. 1985. Text generation. Cambridge: Cambridge University Press.
Micro Video Corporation. 1984. Video voice. Ann Arbor, MI: Micro Video. [Computer program.]
Mielke, A. and Flores, C. 1994. Bilingual technology equalizes opportunities in elementary classrooms. [ED 372647]
Miller, B. 1994. Special report: Biometrics (Vital signs of identity). IEEE Spectrum. 4.2.22–30.
Mislevy, R., Yamamoto, K. and Anacker, S. 1992. Toward a test theory for assessing student understanding. In Lesh, R. A. and Lamon, S. (eds.) Assessments of authentic performance in school mathematics. Washington, DC: American Association for the Advancement of Science. 293–318.
Noijons, J. J. E. 1994. Testing computer assisted language testing: Towards a checklist for CALT. CALICO Journal. 12.1.37–58.
Pennington, M. 1991. Computer-assisted analysis of English dialect and interlanguage prosodics. In Dunkel, P. (ed.) Computer-assisted language learning and testing: Research issues and practice. New York: Newbury House. 133–154.
Pennington, M. 1993. A critical examination of word processing effects in relation to L2 writers. Journal of Second Language Writing. 2.227–255.
Phinney, M. 1994. Processing your thoughts: Writing with computers. Boston, MA: Heinle and Heinle.
Poulsen, E. 1991. Writing process with word processing in teaching English as a foreign language. Computers and Education. 16.77–81.
Shohamy, E. 1995. Performance assessment in language testing. In Grabe, W., et al. (eds.) Annual Review of Applied Linguistics, 15: Overview of applied linguistics. New York: Cambridge University Press. 188–211.
Snow, R. E. and Mandinach, E. B. 1991. Integrating assessment and instruction: A research and development agenda. Princeton, NJ: Educational Testing Service. Unpublished report.
Stocking, M. and Swanson, L. 1993. A method for severely constrained item selection in adaptive testing. Applied Psychological Measurement. 17.277–292.
Tatsuoka, K. K. 1993. Item construction and psychometric models appropriate for constructed responses. In Bennett, R. E. and Ward, W. C. (eds.) Issues in constructed response, performance testing and portfolio assessment. Hillsdale, NJ: Lawrence Erlbaum. 107–133.
Teichman, M. and Poris, M. 1989. Initial effects of word processing on writing quality and anxiety in freshman writers. Computers and the Humanities. 23.2.93–103.
Transparent Language. 1994. Transparent language. Hollis, NH: Transparent Language, Inc. [Computer program.]
van der Linden, E. 1993. Does feedback enhance computer-assisted language learning? Computers and Education. 21.1/2.61–66.
Vega, Scott G. 1995. Processing your thoughts. TESOL Quarterly. 29.211–212. [Book review.]
Vispoel, W. P., Rocklin, T. R. and Wang, T. 1994. Individual differences and test administration procedures: A comparison of fixed-item, computerized-adaptive, and self-adaptive testing. Applied Measurement in Education. 7.1.53–79.
Wainer, H. and Kiely, G. L. 1987. Item clusters and computerized adaptive testing: A case for testlets. Journal of Educational Measurement. 24.185–201.
Weizenbaum, J. 1976. Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.