
Chapter Five - Myth 5: Chances Are, You Can’t Write

Or, Most Students Can’t Write

Published online by Cambridge University Press:  09 November 2023

Laura Aull
Affiliation: University of Michigan, Ann Arbor

Summary

The myth that most students can’t write begins with the very first college writing exams, then fully emerges when news headlines begin reporting standardized test results. Consequences include that test results define writing and writing failure, and that we accept test-based claims and criteria. We treat limited standards as excellent standards, and we think about writing in terms of control rather than practice. Closer to the truth is that early exam reports sometimes lied, that errors are changing but not increasing, and that tests and scoring criteria change. Standardized exam writing is limited, but most students write across a broad writing continuum when they are not writing standardized exams.

You Can’t Write That: 8 Myths About Correct English, pp. 86-104
Publisher: Cambridge University Press
Print publication year: 2023
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

5.1 Pick a Century

The following passages hail from fifty years ago, twenty years ago, and two years ago. Can you tell which is which?

  1. University students express themselves clearly when they speak … But when they sit down at a keyboard to put those thoughts on a page, they produce a confusing jumble of jargon, colloquialisms, and random punctuation.

  2. Cambridge is admitting students who, bright as they are, cannot construct coherent essays or write grammatical English.

  3. [M]any of the most intelligent freshmen, in some ways more articulate and sophisticated than ever before, are seriously deficient when it comes to organizing their thoughts on paper.

The message here is strikingly uniform, but you guessed right if you thought the first passage was the most recent. Passage 1 appeared in 2020 in the Sydney Morning Herald, the same year an opinion piece in The Canberra Times claimed, “The dire state of Australian students’ writing is perhaps the worst-kept secret of our education system.” Passage 2 appeared in London’s Telegraph in 2002. Passage 3 comes from a 1975 Newsweek article called “Why Johnny Can’t Write.”

The passages paint a damning picture, and a contradictory one. Students are capable (more articulate and sophisticated than ever) but not able to write (seriously deficient; cannot construct coherent essays). A single London headline in 2006 put it this way: “University students: They can’t write, spell, or present an argument. No, these aren’t university rejects, but students at prestigious establishments.” Even accounting for the best, so these messages say, most students can’t write.

This myth rests on all the mythical thinking we’ve seen so far. Once correct writing is narrowly defined, regulated by schools, indicative of intelligence, and measured by narrow tests, we are left with very limited ideas about writing. When correct writing is then scaled up in standardized exams, we get this myth, that most students can’t do it.

If this weren’t a myth, we might question why we expect everyone to do something that most people – including successful students – can’t do. But myths are untroubled by their contradictions, and multiple generations had been wearing myth glasses by the time these opening passages were written. Rather than expanding what correct writing is or how we are measuring it, we’ve done more regulating and more lamenting.

Our fifth origin story begins when examiners started complaining about students’ written English exams, which is to say: As soon as students started writing English exams.

5.2 Context for the Myth

5.2.1 Early Exam Graders Say Most Students Can’t Write

Summarizing the results of the first Harvard English exams, examiner Adams Sherman Hill reported that “almost exactly one half, failed to pass.” (Spoiler alert: not so, as we will see later. But this claim has nonetheless been repeated over time.)1 A decade later, Harvard’s examiner lamented “unillumined incompetency” in three-quarters of the exam books. An even smaller percentage impressed Cambridge’s 1883 Seniors English Composition examiner, who wrote that “seven and a half percent of the essays were extremely well done.”

5.2.2 Early Exam Graders Sometimes Clarified What Students Were Doing Wrong

From early exam reports, we can tell that examiners lamented student writing that didn’t follow correct writing usage preferences. Harvard’s examiners overwhelmingly emphasized conventions, including punctuation (using commas between words that “no rational being” would separate), capitalization, and spelling (“as if starting a spelling reform”). One condemned students’ “second-rate diction” (confusion regarding shall versus will) and “inaccuracy” (the use of ain’t and like I do) – “crimes,” he complained, that were also committed by college professors and presidents.

Early Cambridge reports showed similar dissatisfaction with student usage in English writing exams, including misspelling, “carelessness in punctuation and arrangement,” and “inelegant style” due to short, separate sentences. (Early Cambridge examiners were similarly unimpressed with English grammar exams: The exams “exhibited great want of thought and much blundering,” and “much random guesswork and strange ignorance as to the meaning of some common English words.”)

Volume also seemed to matter. Early Cambridge examiners were impressed by the few essays that were “9, 10, 12, and even 14 pages of closely-written matter, excellent in neatness as well as in quality.” The examiners saw this as “a surprising achievement,” given the age of the writers and the time constraints of the exam. More substantive feedback included “a painstaking and generous fairness of mind that was very striking,” particularly in essays by female students, “many of whom summed up so conscientiously and sympathetically both for and against, that it was impossible to be sure on which side their adherence lay.”

Other criteria were not so straightforward. Harvard grader Byron Satterlee Hurlbut wrote that students’ “lack of the feeling of possession, of power over words” constituted a “very grave fault” in examination essays. Worst of all, according to Hurlbut, was a student who avoided “common expressions” to compose writing “stuffed with fine phrases.” The ideal writer was instead “natural,” able to “express his individuality” and “avoid all fine writing.” 1880s Cambridge examiners similarly praised “simplicity and directness of style.”

Harvard grader L. Briggs alluded to a “serious fault” in his 1888 report, the “fancied necessity of infusing morality somewhere … usually the end.” To illustrate this unhappy strategy, Briggs included the following ending from a student essay: “Many people can write a pretty frivolous story, but few is the number, of those, who can put into that story lessons that, if a reader learns them, he can follow all through life. This power has been given to Miss Austen.”

Three kinds of criteria appeared in this early feedback: superficial and clear; substantive and clear; and decidedly vague. The first two categories favor patterns on the right side of the continuum rather than the left – correct writing usage preferences, and impersonal stance patterns. Criteria in the final category are difficult to connect to language patterns.

  • Superficial and clear

    • Correct writing usage preferences and conventions, neatness, length

  • Substantive and clear

    • “Fairness of mind,” or impersonal treatment of multiple views

  • Decidedly vague

    • “Inelegant style,” “feeling of possession, of power over words,” “directness of style,” “fancied necessity of infusing morality somewhere”

5.2.3 Mass Media Coverage Says Most Students Cannot Write

Nineteenth-century student writing, it’s clear, was no rosy affair. Examiners were disappointed; criteria were narrow and confusing. Still, the myth that most students can’t write didn’t fully form until most students were required to take standardized writing exams. By the late twentieth century, thousands of students across thousands of schools were taking standardized tests, and media headlines were making claims based on the test results.

A potent example was the 1975 article “Why Johnny Can’t Write” we saw in the opening, which is still among the most read Newsweek articles of all time. The article’s claims were based on results from the National Assessment of Educational Progress (NAEP), the first standardized test taken by all US secondary students. In the article, senior Newsweek writer Merrill Sheils argued that scores from the first six years of NAEP (1969 to 1975) were proof that most US students were “unable to write ordinary, expository English with any real degree of structure and lucidity.” (For this, Sheils blamed “the simplistic spoken style of television,” but more on that in myth 8.) Sheils reported that students showed “serious deficiencies in spelling, vocabulary, and sentence structure,” which she illustrated in four one-sentence examples. The first-year college student example used their for there.

Bleak headlines didn’t stop in the twentieth century, of course. Several twenty-first-century headlines complain that most students can’t write, including several citing the 2011 book Academically Adrift: Limited Learning on College Campuses. The book, by Richard Arum and Josipa Roksa, argued that student writing was improving little during the first two years of college, and it was widely referenced in popular media, featured on ABC’s Nightly News and reviewed in The New York Review of Books for its “chilling portrait of what the university curriculum has become.” Bill Gates was quoted as saying that before reading it, he “took it for granted that colleges were doing a very good job.”

You probably saw this coming: Claims about student writing in Academically Adrift were based on standardized exam scores. Specifically, they were based on results from the Collegiate Learning Assessment (CLA), taken by 2,323 students at the time of college enrollment and then again in their second year. The book details only the CLA exam scores, not the CLA exam tasks or criteria, but based on past CLA exams, students might have had ninety minutes to read a set of documents and recommend a course of action to a company or a government official.2

These details led to critiques of Academically Adrift: the narrowness of the CLA, the lack of discussion about assessment challenges, and especially, the dearth of information about CLA tasks or criteria, which makes the book’s claims unverifiable. Furthermore, critics noted that in the book’s study, 55 percent of students did make gains, which goes unemphasized in favor of claims about the other 45 percent. As happened with intelligence tests in myth 4, however, bleak prognoses – not critiques of the test – drew the most attention. Among other sources, a 2017 Study International article cited the book and proclaimed, “Students can’t write properly even after college.”3

Other twenty-first-century headlines send a similar message. An Independent article in 2006 suggested that UK students couldn’t write because they approached punctuation marks as “interchangeable” (a claim similar to nineteenth-century Harvard reports about commas between words that “no rational being” would separate). The same article argued that UK students lacked knowledge of the subject, verb, object parts of a sentence, unlike their US peers, though we have seen plenty of examples suggesting US pundits would not agree. (The contradictions abound.)

The article from The Sydney Morning Herald in the opening passages described student writing as a “confusing jumble,” with “predominantly simple vocabulary” and a lack of “correct paragraphing.”4 These claims were based on results from Australia’s forty-minute NAPLAN writing exam. Punctuation was the most specific detail noted: The article reported that punctuation scores declined from 80 to 62 percent between 2011 and 2020.

5.2.4 Contemporary Exam Tasks and Criteria Can Be Confusing

Like their nineteenth-century forerunners, today’s exams often make more sense inside exam culture than outside of it. We’ll look at the writing tasks and criteria of a recent Cambridge A-levels exam by way of illustration.

Each year, the first writing section on the A-level English Language exam includes two timed tasks, in which students (1) read and comment on the “style and language” of a passage, and (2) write a personalized text related to the first passage. In 2016, for example, students read a speech by Australian prime minister Julia Gillard about a colleague’s sexism. The first task asked students to analyze the style of the speech, and the second task asked them to write a diary entry as though they were Gillard.

Both tasks show the conundrum of exam writing, because they are more suited to exam conditions than real-world writing. The first task requires focusing on the language of the speech rather than both ideas and language. The second task expects students to write something they never otherwise write – someone else’s diary entry. This seems relevant context for what examiners viewed as common mistakes.

For the Cambridge evaluators, common mistakes in the first (analysis) task included:

  • Focusing on the speech topic rather than the speech language (“responses listed the success and justice of the accusations without examining the rhetorical devices employed”)

  • Word choice (“awkward” and “uneven” expression)

In the second (diary) task, common mistakes included:

  • Failure to “reflect a more personal mode of expression”

  • Lack of “careful checking for accurate expression”

In these examples, we see continued reinforcement for this myth and myth 4, because the tasks are designed according to exam culture. Along similar lines, even if one could write with “even expression” in a timed exam without time for revision, criteria like “awkward” and “uneven expression” are decidedly vague. Another recent example of elusive criteria appears in the notoriously vague sophistication point in the US Advanced Placement (AP) English exam. When I worked with the College Board on cut-off scores for this exam in 2021, even the most experienced evaluators felt this criterion was a “know it when you see it” category.5

5.3 The Myth Emerges

With standardized exam scores in mass media coverage, this myth emerged. It reinforces several earlier themes, including correct writing usage preferences, vague test criteria, and emphasis on test results instead of test details.

5.4 Consequences of the Myth

5.4.1 We Limit Media Messages about Writing

An overall consequence of this myth is that we limit media messages about writing. Not only do many media messages adopt the narrow mold for correct writing from myth 1; they also reinforce trust in tests and 2-D ideas about writing. Table 5.1 notes this myth’s more specific consequences, including several that scale up exam culture.

Table 5.1 Consequences of myth 5

Once we believe most students can’t write, then …
… test results define writing failure
… we accept vague criteria
… we don’t question whether tests are the problem
… writing means control versus practice
… limited standards are excellent standards, and failure is individual
… we expect cycles of test results and alarm

5.4.2 Test Results Define Writing Failure

Prior myths made it commonplace to equate correct writing with intelligence and character. This myth makes it commonplace to write about test results without providing details about the tests themselves.

In turn, test results are cited as evidence that most students can’t write, whether or not tests are well understood. Writing tasks can change over time, for instance, but coverage will report the results, not the changes. We need only accept test results – not understand them. Even if it means rewarding only “safe, dull essays without mistakes,” as educational historian John Brereton put it, test results decide whether students can write.

5.4.3 We Accept Vague Criteria

This myth solidifies a tradition of accepting test-based criteria – which drive the test’s design, scoring, and use of results – even when those criteria are confusing. Early examiners wanted students to write with “elegance” but “avoid fine phrases,” and to avoid “all fine writing,” but also avoid “general uniformity of expression.” In an especially confusing example, eleven-plus exam founder Cyril Burt had the following expectations for student writing:

The one rule is to be “infinitely various”; to condense, to expand; to blurt, and then to amplify; to balance lengthy statements with a series of brief; and to set off the staccato emphasis of the short, sharp phrase against the complicated harmony, long-drawn and subtly suspended, of the periodic paragraph; to be ever altering, as it were, the dimensions of the block, yet still to preserve the effect of a neat and solid structure.

In the myth that most students cannot write, it is the students who are failing, not the expectations imposed on them. This consequence, in short, means test-trust and test-ignorance.

5.4.4 We Don’t Question Whether Tests Are the Problem

Coverage claiming that most students can’t write is more likely to blame students – or technology, or teachers – than to blame tests and evaluation criteria. As we’ve seen, this was true even before standardized exams: Examiners of the Cambridge 1858 exam lamented that “even when accurate,” the students did not demonstrate “questionings or remarks of their own”; yet the exam task did not ask students to offer their own thinking. It asked students to describe historical details.

More recently, in response to standardized test scores, Australian officials called for more teaching of grammar in schools to improve low NAPLAN scores. Education professors responded with questions, arguing that officials “did not clarify what they saw as the problem or exactly how to resolve it.”6 This myth makes it hard to question whether tests are the problem, and we end up with claims about what students can and cannot do, without requisite interrogation of tests.

5.4.5 Writing Means Control versus Practice

A specific theme in large-scale tests and coverage is the theme of control, rather than experience or practice. Students who perform poorly on a writing exam lack control of correct writing, rather than practice at the writing on the exam. Table 5.2 illustrates a selection of significant examinations and the criteria upon which they are based. These criteria are grouped by their implications: that correct writing is universal; that correctness is superior to other writing considerations; and that context is important to writing.

Table 5.2 Writing exam criteria

The UK AS- and A-level English Language specifications, for instance, allude to “control” and “accurate expression,” as part of achieving a “formal tone.” For high marks, students are to “guide the reader structurally and linguistically, using controlled, accurate expression” and to “organise and sequence topics, using controlled, accurate expression.” By contrast, low marks are associated with “occasional lapses in control.”7 From these, we can gather that a “formal tone” is the most correct, accurate, controlled kind of written English, the only kind with organized topics.

The GCSE English Language exam criteria specify the dialect of standardized English, associating the highest marks with writing that “Uses Standard English consistently and appropriately with secure control of complex grammatical structures.”8 Some GCSE criteria emphasize variation – e.g. variation in sentence types and vocabulary – but not variation from standardized English.

Australia’s timed Tertiary Online Writing Assessment (TOWA) has two sets of criteria, “thought and ideas” and “language: structure and expression.” The “language” criteria are described as “effectiveness of structure and organization, clarity of expression, control of language conventions.”9 If these are understood according to writing myths, then effectiveness, clarity, and control specifically refer to correct writing usage preferences.

The Association of American Colleges and Universities has a written communication rubric referenced widely within and outside of the US. The rubric includes the category “control of syntax and mechanics,” which implies errors and usage are always the same: The “capstone,” or highest-scoring criterion (a score of 4), reads “Uses graceful language that skillfully communicates meaning to readers with clarity and fluency, and is virtually error-free.”10 In this rubric, then, graceful language, clarity, and error appear to be controlled, as well as self-evident and context-free.11

A final example, from the US Framework for Success in Postsecondary Writing, implies that correctness depends on context. The framework outcome for “knowledge of conventions” is described as knowledge of “the formal and informal guidelines that define what is considered to be correct and appropriate, or incorrect and inappropriate, in a piece of writing.”12

These example guidelines fall into three types noted in Table 5.2: those implying that correct writing is always the same, those implying that correct writing is best regardless of context, and those implying that context matters. The most common guidelines imply that correct writing is always the same.

In a testament to language regulation mode, most of the examples emphasize control rather than practice. The most rewarded writers, the criteria suggest, regulate themselves according to correct writing. Other writing and writers are out of control and require more regulation.

5.4.6 Limited Standards Are Excellent Standards, and Failure Is Individual

When we accept the ideal sameness promoted by tests in myth 4, we downplay communal and individual learning practices. In this myth, we get more of the same: If most students cannot write, then they are failing to meet excellent standards – not, instead, that the standards are limited or otherwise amiss. Accordingly, failure on standardized tests is due to an individual’s lack of ability, while selective criteria are rigorous criteria. These values easily fuel competitive academic behavior, which connects correct writing to power in favor of certain kinds of language and language users.

Resistance to tests can in turn be framed as resistance to high standards, as it was in a 1977 Harper’s article by John Silber, president of Boston University at the time. Titled “The Need for Elite Education,” the article called for what Silber termed a “restoration of excellence,” including the teaching of standardized English. Three earlier myths, and this one, all appear in Silber’s article:

People are born with varying degrees of intelligence and talent … Lowered expectations are a threat to all our students, since their ability to develop is very largely dependent upon the goals we establish for them.

The passage evokes myths about correct writing, innate intelligence, and school regulation of writing. It also suggests that Silber’s narrow expectations are high expectations.

When we believe that narrow standards are high standards, college selectivity means high admissions standards, rather than specific or limited admissions standards. As such, low college acceptance rates are associated with prestige, even as research shows that selective college admissions practices favor certain kinds of students.

5.4.7 We Expect Cycles of Test Results and Alarm

The headlines we have seen so far contribute to a cycle of poorly understood tests and easily understood complaints. From myth 1, we have seen the public appetite for dire and authoritative claims about correct writing, and here we see it extend to claims about most student writing.

Even with scant or selective evidence, these claims appear to be terribly appealing. The aptly titled article “Why Johnny Can Never, Ever Read” by literacy researcher Bronwyn Williams puts it this way: “Fashion trends and politicians come and go, but one thing that never seems to go out of style is a good old-fashioned literacy crisis.”

5.5 Closer to the Truth

5.5.1 Half of Harvard’s Students Didn’t Fail

We’ll first get closer to the truth by correcting misinformation. Hill’s oft-referenced account of Harvard’s first English exam was not accurate. Half the students did not fail; around a quarter of them did. A follow-up study by John Brereton showed that this passing rate was comparable to or better than those in Mathematics, Geography, Latin, and Greek. Thus Hill’s account not only exaggerated English exam failure rates but also neglected comparisons to other exams. (The Dean’s report documenting the accurate passing rates does echo Hill’s reasons for failure, as “spelling, punctuation, or both.”)

5.5.2 Errors Are Not Increasing

Another claim to dispel is that errors are increasing. Even if we stand by a limited definition of correct writing, the empirical case is that errors change more than they increase. A study of US college writing across the twentieth century found that specific formal errors changed – as did teachers’ interest in particular errors – but overall error frequency did not. More specifically, spelling and capitalization were the most frequent errors in 1917. By 1986, the most frequent error was “no comma after introductory element.”

Likewise, the claims in the famous article “Why Johnny Can’t Write” were based on declining NAEP scores between 1969 and 1974, yet a series of NAEP reports revealed that writing, like reading, remained roughly stable in that period and after. Differences were small and could be explained by a wider population gaining access to the test. Later, a 2008 report commissioned by the US National Assessment Governing Board showed literacy “constancy” which “contradicted assertions about a major decline.”

5.5.3 Tasks Change

In several ways, late secondary and early college writing exams today are similar to those 100 years ago. They are overwhelmingly timed, lasting from thirty minutes to a few hours. Many continue to emphasize argumentative essays, and many still emphasize literature.

But writing exams have also changed. Many exams today ask students to write on a general topic, rather than on literature. For US college writing placement, for instance, you might have one hour to “Write an essay for a classroom instructor in which you take a position on whether participation in organized school athletics should be required.” For a Cambridge Certificate in Advanced English, you might have an hour and a half to write about whether museums, sports centers, or public gardens should receive money from local authorities. And for Australia’s Written English section of the STAT, you might have an hour to write one essay on education and one on friendship (“Romances come and go, but it is friendship that remains.”). Different writing tasks mean different writing, so it matters that writing exams change over time. Scores from different tasks cannot provide precise comparisons. Reports and headlines that compare scores over time without explaining changes depend on public trust in tests without test details.

For instance, we saw earlier that NAPLAN punctuation scores were used as evidence that university students couldn’t write. The news reported that punctuation scores had declined from 80 percent to 62 percent between 2011 and 2020. What the article did not mention is that within that time, the NAPLAN changed. In 2018, the test shifted from paper to online. Between 2011 and 2018, the test sometimes required narrative writing and sometimes required persuasive writing. Both are significant changes, in test conditions and writing tasks.

Indeed, based on a cautionary tale from US exams, the NAPLAN test change could significantly affect student scores. Between 2011 and 2017, the US National Assessment of Educational Progress (NAEP) Grade 8 writing exam changed from laptops with one kind of software to tablet devices with different software. The 2017 scores showed a pattern of lower performance, and so the National Center for Education Statistics conducted a comparability study. Ultimately, researchers were unable to determine whether the score differences were based on the device or on students’ writing abilities, so they could not tell whether the test conditions disadvantaged students or not.13

5.5.4 Criteria Change

If you ask the question, “When was it that most students could write?,” the answer appears to be: Never. Complaints about student writing are as old as assessments of student writing, so we don’t have evidence of an earlier, better version of writing. This is true even as some expectations for correct writing have changed over the past century.

For instance, a student could disappoint Harvard’s early composition examiners by using “second-rate diction,” such as “the confusion of shall and will.” Today, the distinction between shall and will matters little, and shall is rarely used (in point of fact, shall is now eclipsed by the phrasal verb have to in American and British English corpora, so Briggs must be turning over in his grave).

In another example, while “broad claims” were cited as a “serious fault” in early Harvard entrance exams, such claims are very common in incoming college writing today. They appear in exemplary writing and are common in responses to open-ended exam questions.14 Similarly, a so-called error noted in “Why Johnny Can’t Write” was used widely even when the article was published. The article closed with a “Writer’s Guide: What Not to Do” focused on correct writing usage preferences. In it, students were advised to avoid “faulty agreement of noun and pronoun,” with the following incorrect example: Everyone should check their coat before going into the dance. This use of plural their with singular everyone is grammatically possible and meaningful in English, and it is common across the writing continuum. Already in 1975, it was far more common than everyone used with his or her in books written in English. The trend is even stronger today: since 2010, everyone used with their has continued to increase, while everyone used with his or her has declined.15

We also saw that while organization of ideas was not highlighted in nineteenth-century reports, it is emphasized in twentieth- and twenty-first-century coverage. In a final example, the Harvard graders’ concern that a student failed to use “common expressions” in 1892 seems reversed in the 2020 Sydney Morning Herald complaint that university students use “colloquialisms.”

In all of these cases, even as criteria change, the idea that students can’t write persists, overshadowing changing expectations, and reinforcing test trust over test details.

5.5.5 Limited Does Not Mean Excellent, and Standardized Does Not Mean Complex

Closer to the truth is that limited criteria are not inherently excellent criteria. They aren’t inherently bad criteria, either. They are narrow – limited to particular kinds of writing and writing expectations. There can be good reasons to narrow criteria according to what writing needs to do in a particular context. But limited does not make something correct, and student ability goes far beyond the domains and parameters conventionally privileged in standardized tests and other college selection metrics.

Along similar lines, standardized writing is not inherently complex writing. GCSE and other criteria imply that “writing with control of Standard English” is the same as writing with “complex grammatical structures.” As the continuum shows, correct writing includes patterns, including dense noun phrases, just as more informal and interpersonal writing includes patterns, including shorter nouns and more verbs. That makes correct writing more grammatically compressed, but not necessarily more grammatically complex, than other writing on the continuum, a point documented in detail by applied linguists Douglas Biber and Bethany Gray.

A recent report from the US National Association for College Admission Counseling (NACAC) and the National Association of Student Financial Aid Administrators (NASFAA) states that beliefs about selectivity are harmful and pervasive, and college admissions selectivity has to date reinforced systemic racial and socioeconomic inequity. Closer to the truth is that selectivity remains elusive and ill-defined. In many cases, selectivity excludes even highly qualified students through what it includes and excludes. Selective admissions tend to emphasize uniform test scores, for instance, and we have seen the historic problems of such scores since IQ testing – sometimes operating as intentional barriers, and always operating as narrow measures.

5.5.6 Standardized Exam Writing Is on a Continuum

Closer to the truth is that like all writing, standardized exam writing is on a continuum. Student performance depends in large part on the exam writing task. Different exam tasks mean different writing, and most exams concern a very narrow part of the continuum.

To add to the writing continuum in this chapter, we will look at writing from two contemporary writing exams used for college admissions and hiring decisions: the UK Advanced-levels (A-levels) diary task we saw above,16 and a New Zealand International English Language Testing System task that asks for an explanation based on a graph or other diagram.17 In Table 5.3, we will specifically look at two responses considered exemplary by test examiners.

Table 5.3 Exam writing continuum

Like all writing on the continuum, the two samples show cohesion, connection, focus, stance, and usage. But as responses to very different writing tasks, the linguistic patterns for fulfilling these purposes are different. The diary task writing leans more toward the informal, interpersonal, personal end of the continuum, while the summary task lands more at the formal, informational, impersonal end of secondary and college writing.

Below, the examples appear in full and are annotated. Marginal notes and annotations include transitional words in bold, connection markers [in brackets], hedges in italics, boosters and generalizations italicized and bolded, and passive verbs [[in double brackets]].

5.5.6.1 Exemplary A-levels Diary Entry

It is absolutely baffling to consider just how shameless some men can be. To go about [your] way being a living insult to the rights of women, blatantly labelling them as less of people and suddenly become pure and innocent one morning and rebuke a smaller version of [yourself] for being just like [you].

Certain stance and hourglass cohesion: The writer opens with a general and boosted statement.

Certain and interpersonal stance: The writer then moves to more specific details (about what is “shameless”), using the second person and several boosters and attitude markers to show a strong reaction. This detail appears in a long infinitive phrase rather than a “complete sentence” with subject and verb in an independent clause.

The evidence of what that rogue Abbott had to say to vilify women, even [myself], the “witch” is considerable. How anyone can overlook all this and even entertain the thought of dismissing Slipper, however sexist he is, is beyond me. I will not stand for such disrespect. Abbot [has been tolerated] for long enough. I must abase him and leave him in his place.

Personal, certain stance: The writer moves to mention evidence, including using the first person and mentioning that Abbott called her a “witch,” with continued use of boosters and attitude markers.

Generalized, personal stance: The writer closes by generalizing and personalizing a response to the evidence and a call to personal action.

5.5.6.2 Exemplary IELTS Graph Summary

The four pie charts compare the electricity generated between Germany and France during 2009, and it is measured in billions kWh. Overall, it [can be seen] that conventional thermal was the main source of electricity in Germany, whereas nuclear was the main source in France.

Informational focus and hourglass cohesion: The writer opens with overall informational statements focused on the charts and electricity that will be summarized. The transitional word “overall” signals explicitly that these are general opening statements.

The bulk of electricity in Germany, whose total output was 560 billion kWh, came from conventional thermal, at 59.6 percent. In France, the total output was lower, at 510 billion kWh, and in contrast to Germany, conventional thermal accounted for just 10.3 percent, with most electricity coming from nuclear power (76 percent). In Germany, the proportion of nuclear power generated electricity was only one fifth of the total.

Informational, impersonal stance: The writer moves to more specific details, focused on the electricity sources and leading the reader with cohesive phrases that indicate the movement from discussing France to Germany.

Moving on to renewables, this accounted for quite similar proportions for both countries, ranging from around 14 percent to 17 percent of the total electricity generated. In detail, in Germany, most of the renewables consisted of wind and biomass, totaling around 75 percent, which was far higher than for hydroelectric (17.7 percent) and solar (6.1 percent). The situation was very different in France, where hydroelectric made up 80.5 percent of renewable electricity, with biomass, wind and solar making up the remaining 20 percent. Neither country used geothermal energy.

Explicit cohesion, informational focus, balanced to certain stance: The writer signals that they will move on to discuss renewables, with boosted and hedged (“most of”) statements about quantity and proportions. The writing continues to have an informational focus, offering specific details about renewables. In these final sentences, the writer uses boosted statements and explicit transitions, which emphasize the contrast between energy in Germany and France.

These continuum examples help illustrate different tasks and different writing. The A-levels task and response is more interpersonal and personal, while the IELTS task and response is more informational and impersonal. These patterns illustrate what is closer to the truth: Tests only test what is on the tests, and the claim that most students can’t write is highly dependent on how writing and can’t write are conceived by tests. Test results offer information about how students write according to the conditions, tasks, and criteria of that test.

5.5.7 Most Students Write

Closer to the truth is that most students write, whether or not their test scores relate to the broad range of writing they do. Simultaneously, many headlines consider correct writing patterns to be the reference patterns for discussing student writing.

Writing exams give us some information, but they are not comprehensive ways to tell if students can write. Still, it is hard to escape the persistent myth that most students can’t write, which disposes people to believe there is a problem whether or not they understand how the problem is being tested. It can even mean – as in the case of the book Academically Adrift – that in the presence of data that affirms student writing, people focus on the bleaker, more attention-grabbing conclusion that students can’t write.

It seems fair to assume that most students today write with varying proficiency, depending on what they are writing and in what circumstances. But unless we explore a range of writing, we will not know if most students can write. Exploring diverse writing patterns can give us more insight than hand-wringing and regulating have done.

But more hand-wringing, alas, is still to come. Like this one, our next myth is also bolstered by standardized tests.

